VENOM is a virtual machine escape bug which exploits the virtual floppy disk controller emulated by some hypervisors, allowing an attacker to break out of a guest O/S and escape into the host O/S.

“VENOM, CVE-2015-3456, is a security vulnerability in the virtual floppy drive code used by many computer virtualization platforms. This vulnerability may allow an attacker to escape from the confines of an affected virtual machine (VM) guest and potentially obtain code-execution access to the host. Absent mitigation, this VM escape could open access to the host system and all other VMs running on that host, potentially giving adversaries significant elevated access to the host’s local network and adjacent systems.”

Not only did I scratch my head on my blog, but did so very publicly on LinkedIn too. In all honesty, I really appreciated the input from some very smart people and I do understand the logic a lot more now. Admitting that you don’t have the answer to every question is liberating sometimes and personally beneficial almost every time.

Basically, Oracle are going big on engineered systems. If customers really are serious about migrating to THE CLOUD(TM) and have made a strategic decision to never, ever buy any hardware ever again – I often find that the most reasoned decision involves limiting your options on ideological grounds – Oracle will add these systems to their PaaS offering instead of selling them for on-site use. Win-win.

It still doesn’t really tessellate perfectly for me, but at least it makes more sense now. I’m sure you’ve all seen the data sheets by now, so here’s a few pennies for my thoughts:

A full-rack can read and write 4m IOPS: I presume this is four MILLION IOPS, which is a seriously impressive number. To put it into context, the X3-2 quarter-rack was rated for 6,000 IOPS!

The Oracle Database Appliance now comes with FlashCache and InfiniBand: which should make the ODA worthy of very serious consideration for a lot of small-to-medium-sized enterprises.

Goodbye High Performance drives: they’ve been replaced with a flash-only option. Not only is it Flash, but it’s “Extreme Flash”, no less.

Do I trust all-Flash storage? No. Since moving off V2 and leaving Bug Central, have I encountered any problems whatsoever with the FlashCache? No. Can I justify my distrust in Flash storage? Without delving into personality defects, probably not.

There’s a “gotcha” with the Extreme Flash drives: the license costs are DOUBLE that of High Capacity drives. I don’t understand the reasoning behind this, unless Oracle are specifically targeting clients for whom money is no object (and they probably ARE, in a way).

Configuration elasticity is cool: you can pick and choose how many compute nodes / storage cells you buy. I do remember in the days of the V1 and V2 when you couldn’t even add more storage to an existing machine. The rationale was that you’d mess up the scaling (offloading, etc.).

It’s a really great move for Oracle to make this very flexible and will go some way to silencing those who claim that Exadata is monolithic (and, don’t forget, expensive).

You can now virtualize your Exadata machine with OVM: I haven’t had the best of luck ever getting OVM to work properly, so I’ll defer my views on that for the time being, though the purist in me thinks they’re dumbing down the package by offering virtualization at all. Isn’t that what the Exalytics machine is for?

OK, fine, they want to bring Exadata to the masses and it’s an extension of the “consolidation” drive they’ve been on for a couple of years, but it’s a bit like buying a top-end Cadillac and not wanting to use high-grade gasoline because it’s too expensive.

Other cool-sounding new Exadata features that made my ears prick up:

faster pure columnar flash caching

database snapshots

flash cache resource management – via the ever-improving IORM

near-instant server death detection – this SOUNDS badass, but could be a bit of a sales gimmick; don’t they already do that?

I/O latency capping – if access to one copy of the data is “slow”, it’ll try the other copy/copies instead.

offload of JSON and XML analytics – cool, I presume this is offloaded to the cells.
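The I/O latency capping item above is the one mechanism in that list simple enough to sketch. Here’s my rough mental model of it – read from one mirror copy, and if that read blows past a latency cap, try the other copy instead. All names and thresholds here are my own illustration, not Oracle’s implementation:

```python
import time

# Illustrative cap only -- not a real Exadata setting.
LATENCY_CAP_SECONDS = 0.05

def read_with_latency_cap(copies, block_id, cap=LATENCY_CAP_SECONDS):
    """Try each mirror copy in turn; fall back if a read exceeds the cap."""
    last_result = None
    for copy in copies:
        start = time.monotonic()
        data = copy(block_id)
        elapsed = time.monotonic() - start
        if elapsed <= cap:
            return data          # fast enough: use this copy
        last_result = data       # too slow: remember it, try the next copy
    return last_result           # every copy was slow: use the last read

# Simulated mirror copies: one degraded disk, one healthy disk.
def slow_copy(block_id):
    time.sleep(0.2)              # simulate a struggling drive
    return f"block-{block_id}-from-slow-disk"

def fast_copy(block_id):
    return f"block-{block_id}-from-fast-disk"

print(read_with_latency_cap([slow_copy, fast_copy], 42))
```

The point of the feature, as I read it, is that ASM redundancy already gives you those extra copies – latency capping just makes the read path willing to use them when the “primary” copy is having a bad day.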

I didn’t have the chance to listen to Oracle’s vision of the “data center of the future” – I think it had something to do with their Virtual Compute Appliance competing against Cisco’s offerings and “twice the price at half the cost”.

Oracle’s problem is still going to be persuading customers to consider VALUE instead of COST. “Exadata is outrageously expensive” is something I’m sure everyone hears all the time and to claim it’s “cheap” isn’t going to work because managers with sign-off approval can count.

Is it expensive? Of course. Is it worth it? Yes, if you need it.

This is why I’m unconvinced that customers will buy an Exadata machine and then virtualize it. The customers who are seriously considering Exadata are likely doing so because they NEED that extreme performance. You can make a valid argument for taking advantage of in-house expertise once your DBA team has their foot in the door – best of breed, largest pool of talent and knowledge, etc.

However, so many companies are focusing solely on the short-term and some exclude their SMEs from strategic discussions altogether. Getting to a point where the DBA team is able to enforce Exadata as the gold standard in an IT organization is going to be incredibly difficult without some sort of sea change across the entire industry and … well, the whole economy, really.

I’m not sure what caused it, but I came away with a feeling that these major leaps in performance were very distant to me. Maybe it’s because I don’t personally see much evidence of companies INVESTING in technology, but still attempting to do “more with less” (see all THE CLOUD(TM) hype).

I’m really not convinced there is much appetite out there to maximize data as an asset or to gain a competitive advantage through greatly enhancing business functionality so much as there is to minimize IT expenditure as much as possible. Cost still seems to be the exclusive driver behind business decisions, which is a real shame because it’s difficult to imagine a BETTER time to invest in enterprise data than right now.

Version X5 of Oracle’s engineered systems – presumably Exadata, Exalogic and Exalytics with a garnishing of a ZFS/ZDLRA appliance or two – will finally be unveiled tomorrow.

No doubt more of everything will be involved (Flash, memory, CPU, cupcakes), making DBA geeks drool and widening the performance chasm between Oracle’s engineered systems and a lot of the “industry trends” we read so much about right now. Hopefully, those who have been on the waiting list since they stopped shipping the X4s will feel it’s been worth the wait. Enjoy your new gadgets!

As a technologist, it’s difficult not to be impressed with exponentially-improving kit, especially when it feels like the industry is collectively yearning for 1990s technology.

Huh? Isn’t pushing a new class of engineered systems (lots of lovely CapEx … mmm-hmm!) and then pushing CLOUDCLOUDCLOUD (CapEx, be GONE!) a week later a juxtaposition?

And what about this quote:

” … on-premises software sales grew 6% in constant currency. I continue to expect this business to grow nicely while our cloud business continues to maintain hypergrowth … “

Really?

Oracle believes CIOs are going to maintain spending in “traditional” infrastructure AND invest big in THE CLOUD(TM) at the same time? Hmm.

And isn’t THE CLOUD(TM) fantastic and magical and revolutionary because organizations plan to eliminate spending on support groups and hardware and transfer their budgets to OpEx instead, saving tons of cash? (We’ll put the many and varied issues of doing this to one side for the moment).

Am I the only one confused by this?

That being said …

Unlike most THE CLOUD(TM) vendors, Oracle’s cloud offering includes Platform-as-a-Service, which provides the first “real” managed database service in THE CLOUD(TM) including Exadata and all the performance and security cost options you can buy for the Oracle database “on-site”.

Even as someone who isn’t exactly a strong advocate of THE CLOUD(TM), it’s difficult to dispute that this addresses some – though by no means all – of the problems associated with cloud computing.

Up until now, most providers have been offering more of an Infrastructure-as-a-Service solution, which is geared almost entirely towards cost savings. With PaaS, a viable argument can be made that functionality and performance can be as good, if not better, than internally managed systems.

I’ll admit that this all had my curiosity, but now has my attention.

Maybe Oracle is one of only two companies (IBM, perhaps?) who can afford to invest the massive sums needed to cover both bases well enough, though it should be noted that Amazon STILL hasn’t made a profit on AWS. And how will they avoid their sales pitches becoming confusing muddles of uncertainty involving DOUBLE the salespeople (one set for engineered systems, one set for THE CLOUD(TM))?

I’ll be honest, this still doesn’t make sense to me – I just don’t get it.

I have no doubt Oracle will be pushing Exadata’s suitability for THE CLOUD(TM) tomorrow by introducing new elastic/scalable/on-demand features, but engineered systems and THE CLOUD(TM) seem so diametrically opposed that they’re all but mutually exclusive.

We’ll know soon enough, I guess! The cloud is coming, whether we agree with it or not!

Recently, a client of mine asked me an excellent question about whether they could use AWS as an off-site storage location for their backup media files.

As I may have previously suggested, I have quite a long list of reasons why I think that running important databases from the cloud is a terrible idea, though if a client really does want to pursue this route, I will naturally oblige and make it work as best I can.

This particular client’s suggestion, however – which, surprisingly, isn’t something I’d heard suggested before – was to move at-rest backup media files to Amazon’s S3 storage instead of putting them on tape and shipping them off-site.
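Before going any further with the idea, the first sanity check is a back-of-the-envelope cost comparison against the tape-courier arrangement. The per-GB prices below are ASSUMED placeholders purely for illustration – check Amazon’s current price list before relying on anything like this:

```python
# Placeholder $/GB-month figures -- NOT Amazon's actual prices.
ASSUMED_PRICE_PER_GB = {
    "s3_standard": 0.03,
    "glacier": 0.01,
}

def monthly_storage_cost(backup_gb, tier):
    """Estimated monthly at-rest cost for a given backup footprint."""
    return backup_gb * ASSUMED_PRICE_PER_GB[tier]

# e.g. a 2 TB weekly backup set retained for four weeks = 8 TB at rest
footprint_gb = 8 * 1024
print(f"S3 standard: ${monthly_storage_cost(footprint_gb, 's3_standard'):.2f}/month")
print(f"Glacier:     ${monthly_storage_cost(footprint_gb, 'glacier'):.2f}/month")
```

The interesting design point is the tiering: backup media files are written once and (hopefully) never read, which is exactly the access pattern the colder storage classes are priced for.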

I asked around and did some digging but I couldn’t find anyone who was using THE CLOUD(TM) in this manner. Others are migrating their live database to THE CLOUD(TM) – or, at least, attempting to and becoming very frustrated with it – while others have stood up standby databases in AWS as a “DR-of-last-resort”.

I contacted their support about a technical issue I was having and, to their credit, their customer service seems to be just as good for AWS as it is for their .com site.

I made a(n attempt at a) joke about time travel and how they meter usage for their RDS service and, to my surprise, they played along:

Me: Thanks for the resolution. I’ll keep you updated with my time travel travails. Do you happen to have a spare DeLorean car that can hit 88mph?

CSR: I’m glad to hear there was a resolution to your case. Currently, our DeLorean, as well as our Tardis, our Phone Booth, and our HG Well’s Special are all being used to try to prevent bad Fantasy Football drafts, as well as preventing from eating too much for Thanksgiving.

Got to love it when customer service has a sense of humor, especially considering the dog’s abuse they often get.

So, last month I decided to bow to peer (well, “industry”) pressure and check out Amazon Web Services for myself.

That’s right, before the year 2014 was out, I finally started my own personal “journey” to THE CLOUD(TM).

I wish I could say that I experienced a “Road to Damascus” moment and that all the major (i.e. “showstopper”) concerns I had with actually having to migrate databases to THE CLOUD(TM) magically disappeared once I had actually used it myself. Maybe I had been wrong this whole time?

“Just try it and you’ll see”…

Unfortunately, like a cigarette to a teenager, it was exactly what I had expected. No more, no less.