Database Administrators Stack Exchange is a question and answer site for database professionals who wish to improve their database skills and learn from others in the community.

I am planning to apply PAGE compression to some of the big tables in my database (a data warehouse, to be precise). These tables are fairly big, with over 15 billion rows. When I applied the compression in the test environment, the whole process took around 22 hours. These tables are accessed daily by quite long-running queries. My questions are:

While the compression is being applied, will there be any effect on the queries that are running? Are there any locks, etc., I should be aware of?



Is this SQL Server, or SSAS? If the former, why not SSAS?
– podiluska, Oct 31 '12 at 10:17

There is an SSAS layer on top. However, this is mostly about saving disk space and hopefully speeding up the SSAS processing. Thanks for flagging to dba.se, though.
– swanand kant, Oct 31 '12 at 10:23


The best resource I know of for this is this MS technical article. It is possible to perform compression changes ONLINE in Enterprise Edition, or a partition at a time (if that applies), but not both, though you could simulate that using SWITCH.
– Paul White, Oct 31 '12 at 11:16

2 Answers

Offline ALTER ... REBUILD takes a big fat schema modification lock on the table with zero concurrency (not even dirty reads can scan the table). Online ALTER ... REBUILD, as the name suggests, allows any concurrent query or DML operation. The MSDN article How Online Index Operations Work describes the three phases (prepare, build, and finalization) and the object locks required in each phase (IS, IS, SCH-M). The builder operates in batches and acquires data locks as it makes progress, but it will back off on conflict, and there is some special sauce for handling deadlocks with the builder:

"Deadlocks between the index builder transaction that is holding the batch transaction locks and DML statements are possible, but are handled internally so that neither the DML operation nor the index builder transaction should terminate during the build phase due to a deadlock."
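As a sketch of the online form (table and index names here are hypothetical; ONLINE = ON requires Enterprise Edition):

```sql
-- Rebuild the table (heap or clustered index) with PAGE compression,
-- allowing concurrent queries and DML during the rebuild.
ALTER TABLE dbo.FactSales REBUILD
    WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);

-- Nonclustered indexes are not compressed by the table rebuild
-- and must be rebuilt individually.
ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales REBUILD
    WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```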

During the rebuild transaction, it takes exclusive locks on the pages, so yes, there is potential for blocking. However, the impact will be much smaller for a big table, since it compresses page by page: only large full-table-scan SELECTs are likely to be blocked, not inserts or updates. So the answers are:

Yes, it can block large queries; the process takes an exclusive lock on the page that is being compressed.

A staggered approach is possible if you use partitions. However, that won't help you much if you run aggregate queries without a filter on the partitioning column: if you are selecting the entire table anyway, it does not matter which partition you compress.
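If the table is partitioned, the staggered approach might look like this (table name and partition numbers are hypothetical; note that in SQL Server 2012 a single-partition rebuild cannot also be ONLINE, which is the trade-off Paul White's comment mentions):

```sql
-- Compress one partition at a time to spread the work out;
-- repeat for each partition number in turn.
ALTER TABLE dbo.FactSales REBUILD PARTITION = 3
    WITH (DATA_COMPRESSION = PAGE);
```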

In addition to saving disk space, you should also reap performance benefits, because more of the table will fit in memory. You can verify this by observing the Page Life Expectancy counter before and after the compression.
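One way to read Page Life Expectancy is from the performance-counter DMV, for example:

```sql
-- Page Life Expectancy, in seconds, from the Buffer Manager object
-- (the LIKE handles named instances, whose object names are prefixed).
SELECT object_name, counter_name, cntr_value AS ple_seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

Capture this before the rebuild and again after the workload has warmed the cache; a higher value suggests more of the compressed table is staying in memory.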