To use In-Memory OLTP, you define a heavily accessed table as memory optimized. Memory-optimized tables are fully transactional and durable, and are accessed using Transact-SQL in the same way as disk-based tables. A query can reference both memory-optimized tables and disk-based tables, and a transaction can update data in both. Stored procedures that reference only memory-optimized tables can be natively compiled into machine code for further performance improvements. The In-Memory OLTP engine is designed for extremely high session concurrency for OLTP-type transactions driven from a highly scaled-out middle tier. To achieve this, it uses latch-free data structures and optimistic, multi-version concurrency control. The result is predictable sub-millisecond latency and high throughput with linear scaling for database transactions. The actual performance gain depends on many factors, but 5-to-20 times improvements are common.
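As a sketch of what this looks like in practice, the following Transact-SQL creates a memory-optimized table and a natively compiled stored procedure that references only that table. The table and procedure names, columns, and bucket count are illustrative, not from this article, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup:

```sql
-- Illustrative example: a session-state table defined as memory optimized.
-- DURABILITY = SCHEMA_AND_DATA makes the table fully durable, so its rows
-- survive a server restart (the default for memory-optimized tables).
CREATE TABLE dbo.SessionState
(
    SessionId   INT             NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId      INT             NOT NULL,
    LastAccess  DATETIME2       NOT NULL,
    Payload     VARBINARY(2000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Because this procedure touches only the memory-optimized table above,
-- it can be natively compiled into machine code.
CREATE PROCEDURE dbo.UpdateSessionState
    @SessionId INT,
    @Payload   VARBINARY(2000)
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.SessionState
    SET Payload = @Payload,
        LastAccess = SYSUTCDATETIME()
    WHERE SessionId = @SessionId;
END;
```

Calling `EXEC dbo.UpdateSessionState @SessionId = 1, @Payload = 0x01;` then runs the compiled code path directly, avoiding the interpreted query-execution overhead of a conventional stored procedure.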

The following table summarizes the workload patterns that may benefit most from using In-Memory OLTP:

| Implementation scenario | Benefits of In-Memory OLTP |
|---|---|
| High data insertion rate from multiple concurrent connections. Primarily append-only store. Unable to keep up with the insert workload. | Eliminate contention. Reduce logging. |
| Read performance and scale with periodic batch inserts and updates. High-performance read operations, especially when each server request has multiple read operations to perform. Unable to meet scale-up requirements. | Eliminate contention when new data arrives. Lower latency data retrieval. Minimize code execution time. |
| Intensive business logic processing in the database server. Insert, update, and delete workload. Intensive computation inside stored procedures. Read and write contention. | Eliminate contention. Minimize code execution time for reduced latency and improved throughput. |