RAM has always been a fairly limited resource in most of the computer systems I’ve worked with in my career. There is rarely enough, and I’d always like more, often just to speed things up. That has changed somewhat with laptops, as 16GB works well for me most of the time. Not that I wouldn’t take a 32GB machine, but I’m waiting for them to become more common and smaller.

This has especially been true for database servers. It seems I’ve rarely had a database server that could fit the entire database in RAM. Even now, I have an over-provisioned server for SQLServerCentral with plenty of spare capacity, but I’m still slightly short on RAM: SQL Server’s target memory is about 1GB more than I have set. Not really worth complaining about, but I still don’t have the RAM I’d like.

Last week I wrote about someone who attacked the RDBMS as old and troublesome technology. As part of that argument, a method of storing all data in memory was presented. I’m not sure this is a good or practical idea for most systems, but it did make me wonder about data space and size. I’ve certainly seen plenty of index space in databases, and at times there is more index data than table data, but I suspect that’s not the case for many databases.

Regardless, I was curious whether anyone has databases that couldn’t fit into RAM these days. Think about the largest database you have: how big is it in terms of data size? Not allocated size, but the total data space used. Would it fit into RAM if you could get 1TB or 2TB of memory? If so, what about your indexes: are they large? There are a few scripts in this thread if you need one.
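If you don’t have a script handy, a rough sketch like the one below gives a ballpark for the current database. It reads `sys.dm_db_partition_stats`, counting heap and clustered-index pages as data and nonclustered-index pages as index space (one reasonable way to draw the line; `sp_spaceused` splits things slightly differently):

```sql
-- Rough sketch: data space vs. index space used in the current database.
-- Pages are 8KB; index_id 0 = heap, 1 = clustered index, >1 = nonclustered.
SELECT
    SUM(CASE WHEN index_id IN (0, 1) THEN used_page_count ELSE 0 END) * 8 / 1024 AS data_used_mb,
    SUM(CASE WHEN index_id > 1 THEN used_page_count ELSE 0 END) * 8 / 1024 AS index_used_mb
FROM sys.dm_db_partition_stats;
```

Run it in each database (or wrap it with `sp_MSforeachdb` if you use that), then compare the totals against the RAM you could realistically put in the server.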

I suspect there are certainly databases that don’t fit into RAM, and likely plenty of instances hosting more than one database that don’t have enough RAM for all of them. I still see plenty of people with less than 64GB on their servers, so that’s a battle still being fought. I wouldn’t advocate an in-memory-only database, since there are going to be other issues, but it’s still an interesting thought. My own server has only 60GB allocated, and the databases are well over that in aggregate.

Maybe asking for a bit more RAM on those critical servers is the way to go, especially if you think you can get the entire database into memory.

I have a few 1TB+ databases that have exceeded the limitations of the hardware when it comes to RAM. One is using 512GB of memory and another 1TB. Even if the whole database is in memory, without proper indexes it still takes time to scan 1TB of memory :(