Answer: This is a question I get at almost every event at which I speak, and it's actually VERY complex to fully describe. I suspect it might take me more than one post to tackle all of the issues, but I'll start with the basics here.

First and foremost, I'll start by saying that partitioning is CRITICAL for VLTs. What's a VLT? It's about as descriptive as VLDB and it means very large table. (Yes, I just made it up as a new TLA (three-letter acronym) that I'm planning to start using more. However, I'm also going to quantify it a bit.) Most people speak of VLDBs (very large databases) and define them as databases that are hundreds of gigabytes (many would say that a database that's 1 TB or larger is a VLDB). In my experience, *many* customers run into problems long before their databases reach 1 TB; their problems tend to start when they have even just one table that gets well into double-digit gigabytes.

Think about it: a single table that's 60 GB presents a variety of problems. To highlight where the problems occur, think about these questions in the context of your larger tables:

Is all of the data recent? How old is the oldest data in your 5 largest tables?

How much of that data is changing? Not including the new data coming in – what percentage of the older data needs to be modified?

How often are you accessing the older data?

How long do your maintenance processes against it take?

Are you replicating it?

Do you have enough memory to fit the table (and all of its indexes) in cache?

Do you really need to have indexes on ALL of that data? Or, does your data have different access patterns (which might warrant different indexing strategies)?

For many, these questions start to raise concerns at table sizes in the mid to upper double-digit gigabytes. Why do people notice them at that point? It's all about resources, and the most critical one here is memory. No, SQL Server is not required to put your entire table into memory. However, if you don't have appropriate indexing strategies, then you might end up pulling the entire table into memory. If that's the case, you might be wasting one of the most important resources you have.
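To put numbers on this, a quick way to see how big a table (and all of its indexes) really is, and how much of it is sitting in cache, is shown below. This is just a sketch: `dbo.Sales` is a hypothetical table name, so substitute one of your own large tables.

```sql
-- Reserved, data, and index sizes for a single (hypothetical) table.
EXEC sp_spaceused N'dbo.Sales';

-- How much of each object in the current database is in the buffer pool
-- right now (8 KB pages, converted to MB).
SELECT OBJECT_NAME(p.object_id)      AS table_name,
       COUNT(*) * 8 / 1024.0         AS cached_mb
FROM   sys.dm_os_buffer_descriptors AS b
       JOIN sys.allocation_units    AS a
         ON b.allocation_unit_id = a.allocation_unit_id
       JOIN sys.partitions          AS p
         ON a.container_id = p.hobt_id
WHERE  b.database_id = DB_ID()
GROUP  BY p.object_id
ORDER  BY cached_mb DESC;
```

If one table's cached pages dwarf everything else while most of its data is old and rarely touched, that's exactly the resource waste described above.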

So, how can you solve it? Yes, indexing is a BIG part of this. But even if your indexing strategies aren't perfect, partitioning can also help. Partitioning is not tied directly to either feature (partitioned tables or partitioned views); instead, it's the concept of breaking something down into smaller chunks. This is almost always a good thing. Breaking something down into smaller chunks often offers more options for maintenance and management, and this can in turn offer better performance. Partitioning is not really directly tied to performance, but indirectly it can be HUGELY beneficial. So, for today, I'll start by saying YES to the question: partitioned views still have benefits, and they should not be discounted only because they're an older feature. Each partitioning strategy provides different benefits. To architect the RIGHT solution takes understanding ALL of their pros/cons (uh oh, I hear "it depends" in the distance… sorry!). As an architect, you need to evaluate both PVs and PTs and possibly use them together for the most gains.
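To make the partitioned-view side concrete, here's a minimal sketch (all table and column names are hypothetical). Each member table holds one slice of the data, a CHECK constraint on the partitioning column tells the optimizer which slice that is, and a UNION ALL view glues the slices back together:

```sql
-- One member table per year. The CHECK constraint on the partitioning
-- column (which must also be part of the primary key for an updatable
-- partitioned view) is what enables partition elimination.
CREATE TABLE dbo.Sales2003
(
    SalesID   int      NOT NULL,
    SalesDate datetime NOT NULL
        CHECK (SalesDate >= '20030101' AND SalesDate < '20040101'),
    CONSTRAINT PK_Sales2003 PRIMARY KEY (SalesID, SalesDate)
);

CREATE TABLE dbo.Sales2004
(
    SalesID   int      NOT NULL,
    SalesDate datetime NOT NULL
        CHECK (SalesDate >= '20040101' AND SalesDate < '20050101'),
    CONSTRAINT PK_Sales2004 PRIMARY KEY (SalesID, SalesDate)
);

-- The partitioned view: a query that filters on SalesDate only touches
-- the member tables whose constraints can possibly match.
CREATE VIEW dbo.Sales
AS
SELECT SalesID, SalesDate FROM dbo.Sales2003
UNION ALL
SELECT SalesID, SalesDate FROM dbo.Sales2004;
```

A partitioned table, by contrast, keeps everything in one table and drives the slicing with a partition function and scheme; the trade-offs between the two are exactly what the upcoming posts will dig into.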

I’ll tackle more on this discussion in my next post. In fact, I suspect it’s going to take at least 3 or 4 posts to get through the entire discussion! Feel free to email me PV v. PT questions now so I can be sure to cover your specific concerns. Thanks!