Oracle's Database In-Memory option, like longer-standing competitor products such as SAP's HANA, is designed to speed up analytics-driven workloads, with the capacity to process billions of data values per second.

Unlike rival products, which either require changes to applications or limit database functionality, the new Oracle technology is transparent to existing applications that use the company's database technology, according to Oracle vice president of product management Tim Shetler.

"One of the reasons we felt it was really important to make this option transparent to the existing applications that run on the Oracle databases was because there are today over 300,000 organisations worldwide that use the Oracle database," he said.

"Just because of that, we expect there will be, at least in terms of numbers, a fairly robust take-up of the option when it's available - as opposed to, say, SAP, which really wasn't in the database business until they purchased Sybase. So they don't have a large installed base to start with.

"They really have to make a response to SAP with HANA now. They've had 10c for a long time, which was the in-memory capabilities, but compared with HANA it makes Oracle look a little bit lame," Longbottom said.

"With so many other companies now coming through with accelerated in-memory analytics capabilities using server-side SSD and large DIM arrays, Oracle is beginning to look a bit long in the tooth."

Oracle acquired in-memory computing pioneer TimesTen in 2005, when "the cost of memory was horrendous, so it was very much a solution for those with the very deepest pockets", according to Longbottom.

It continues to sell that technology but it is distinct from the soon-to-be-released architecture that the company has been working on for several years.

Oracle's Tim Shetler said the Database In-Memory option, to be released at an unspecified price as part of 12.1.0.2, offers a dual-format architecture, which allows advanced analytics and transaction processing workloads on the same database.

"You can run very high-volume real-time analytics against the same database that you're using for your production transaction processing workloads," he said.

"So being able to share a common database across all the workloads means customers will no longer have to make copies of their production data so that they can run reports against it, typically called data marts. You can do that all on one common system, avoid the delays of copies and make sure the data that you're analysing is up to the minute current."

Shetler said Oracle sees its Database In-Memory option as the most complete implementation because it is not merely an in-memory database: it can be used with all the existing functionality of the Oracle database.

"Because they are integrated together, we effectively have a complete database — not just a fast in-memory column store but also all the infrastructure you'd expect from a database to guarantee transactional integrity, security, robustness," he said.

"If you compare that with SAP HANA, they have effectively built an in-memory data store that's very fast but they're still trying to complete the rest of the database functionality around it."

Shetler said the Database In-Memory option taps into all of the 30-plus years of development that preceded it with the Oracle database.

"All the features, functionality, robustness — everything that has built up around the Oracle database is inherited through the In-Memory option. The database in-memory itself is very similar to other implementations in the marketplace," he said.

"If you think of it from the perspective of a complete database system, then Oracle is probably in the lead in terms of the development of that."

Shetler said another area where Oracle has an advantage is in the size of the database that can be used with the in-memory technology.

"Many implementations require that the database that you're working with must be preloaded all into memory," he said.

"Even though memory is getting bigger and cheaper all the time, it still means that large databases — for example, data warehouses, which can be very large or big-data oriented datasets — those would be very difficult to fit in memory so they are unlikely or less likely to work with in-memory database technology.


"We have developed over the years a number of ways in which large databases can be accommodated. With the in-memory option, you can now spread the in-memory data across many databases and therefore have a much larger database in memory."

Since the unveiling of the in-memory technology by Oracle CEO Larry Ellison at 2013's OpenWorld conference, the company has added improved fault tolerance.

"What we've done is if you are running the database and the Database In-Memory option on one of the Oracle RAC clusters — in other words if you are spreading your data across multiple database servers' memory areas — then you have the option to request that that data be protected in a fault-tolerant way," Shetler said.

"That will ensure that each piece of data is copied in a least one other database server's memory, such that if one of the database servers fails, the RAC software will automatically detect that failure, switch that application to another database server that has a copy of that data that was on the one that failed, and the performance will just continue as if there were no failure at all.

"So not only do we have complete transparency of application, complete transparency of the database functionality, but if there's any failure occurred in the cluster then there is complete transparency of failover and high availability."

The Database In-Memory option will be available not only on Oracle engineered systems and systems shipped by Oracle, but on all the systems where the Oracle 12c database can run.