Database Administrators Stack Exchange is a question and answer site for database professionals.

I have a table with approximately 150 columns in a SQL Server 2008 instance. I previously populated this table with approximately 12 million rows, but have since cleared it in preparation for a new data set.

However, commands that once ran instantly on the empty table, such as COUNT(*) and SELECT TOP 1000 in SQL Server Management Studio, now take eons to run.

SELECT COUNT(*) FROM TABLE_NAME

took over 11 minutes to return 0, and SELECT TOP 1000 took almost 10 minutes to return an empty table.

I've also noticed that the free space on my hard drive has literally disappeared (down from about 100G to 20G). The only thing that happened between was a single query I ran:


What recovery model does your database use? If it's "Full", your SQL instance will store a record of all those deletions, which could be where your free space went. If Full recovery is a requirement, I recommend using TRUNCATE TABLE instead of DELETE FROM.
– Tullo, Mar 8 '12 at 0:22
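A quick way to act on this comment is to check the recovery model first, then truncate. A minimal sketch, assuming a database named MyDatabase (substitute your own names):

```sql
-- Check the current recovery model of the database.
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = N'MyDatabase';

-- If row-by-row logging of the delete is not needed, empty the table with:
TRUNCATE TABLE dbo.TABLE_NAME;
```

TRUNCATE is minimally logged (it logs page/extent deallocations rather than every row), which is why it avoids the log growth described below.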

150 columns? That table may be too wide - most rows should be far smaller. Impossible to say without the full context, however.
– Clockwork-Muse, Mar 8 '12 at 0:30

4 Answers

DELETE statements remove rows from a table one at a time, logging each row in the transaction log and maintaining log sequence number (LSN) information. Since your table held a huge amount of data (12 million rows) before the delete, and your hard disk is now out of space, check the size of your database log file - it has most probably grown.
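To check whether the log file is indeed what ate the disk, two standard approaches (the database name in USE is illustrative):

```sql
-- Report log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);

-- Or inspect the files of one database directly; size is in 8 KB pages.
USE MyDatabase;
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;
```

If the row with type_desc = 'LOG' accounts for most of the missing ~80 GB, the answers below about log backups and shrinking apply.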

+1, exactly the same is true for Oracle. Statements that log every affected record will slow the database down over time.
– Petro Semeniuk, Mar 8 '12 at 0:27

@PetroSemeniuk From what I recall on Oracle, delete leaves the highwater mark alone, truncate will reset the highwater mark. I believe any operations that require a full scan will scan the blocks until the high water mark. Hence deleting might aid indexed operations, but not scans. TRUNCATE was the proper operation.
– Glenn, Mar 8 '12 at 0:38

(NOTE: I am not a DBA) DELETE is a logged operation, and it doesn't free the space used. You probably have a large transaction log taking up space and table scans running on the 'empty' table space. I'd guess you need to clear the transaction log and shrink your database. This StackOverflow article should get you started.

And use TRUNCATE TABLE when you want to do this in the future.

EDIT: My statement about TRUNCATE not being logged was in error; removed.

Lock usage: TRUNCATE will generally take fewer locks, since it uses a table lock and page locks, as opposed to DELETE, which uses row locks**.

IDENTITY sequences: TRUNCATE resets the identity sequence on a table, if present.

(* An extent = 8 pages. TRUNCATE will log/remove extents if they're all from that one table, otherwise it'll log/remove pages from mixed extents.

** One side effect of this is that DELETE FROM TABLE can potentially leave empty pages allocated to the table, depending on whether the operation can get an exclusive table lock or not.)

So (back to the original question), TRUNCATE TABLE is conclusively better than DELETE FROM TABLE if you're emptying the table out but want to keep the structure (NB: TRUNCATE can't be used on a table that's referenced by a foreign key from another table).
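The foreign-key caveat is easy to hit in practice. A minimal sketch, using the table name from the question (the fallback shown is an assumption about what you'd do when TRUNCATE is blocked):

```sql
-- Fast path: minimally logged, resets IDENTITY, deallocates pages.
-- Fails with an error if another table references this one via a
-- FOREIGN KEY constraint.
TRUNCATE TABLE dbo.TABLE_NAME;

-- Fallback when a foreign key blocks TRUNCATE: fully logged,
-- row-by-row, and may leave empty pages allocated (see ** above).
DELETE FROM dbo.TABLE_NAME;
```

Another option, not shown here, is to drop the referencing foreign keys, truncate, and recreate them - worthwhile only for very large tables.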

As noted in @Tullo's comment, also check your database's recovery model - if it's full, then you either need to start taking log backups, or change your recovery model to simple. Once you've done either of those, you'll probably want to shrink your log file as a once-off operation (NB: log file only) in order to reclaim all that free space.
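Either route from that comment can be sketched as follows. The database name, backup path, and logical log file name are illustrative - look up the real logical name in sys.database_files first:

```sql
-- Option 1: stay in FULL recovery and start taking log backups.
BACKUP LOG MyDatabase TO DISK = N'D:\Backups\MyDatabase_log.trn';

-- Option 2: switch to SIMPLE recovery if point-in-time restore
-- is not needed for this database.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;

-- Then, as a one-off, shrink the log file only (not the data file);
-- the second argument is the target size in MB.
DBCC SHRINKFILE (MyDatabase_log, 1024);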

Finally, another thing to be aware of - table statistics. Run `UPDATE STATISTICS <TABLENAME>` after `TRUNCATE`/`DELETE` so the query optimiser doesn't get tripped up by old statistics.

You've already been told about why TRUNCATE would be so much faster/better/sexier than DELETE, but there's still a question left to address:

Why is SELECT slower after DELETE completed?

That is because DELETE has only ghosted the rows. The table is just as big as when it had 12 million rows, even though it now has none. To count the rows (0), it takes just as much time as it took to count 12 million rows. In time the ghost cleanup process will garbage-collect these ghosted records and deallocate pages that contained only ghosts, and your SELECTs will speed up. But right now, if you check Skipped Ghosted Records/sec in perfmon, it is probably skyrocketing during the SELECT COUNT(*). You can also speed things up by rebuilding the table: ALTER TABLE ... REBUILD.

TRUNCATE would also have taken care of this problem, as it leaves no ghosts behind.
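If waiting for ghost cleanup isn't acceptable, the rebuild mentioned above forces the deallocation immediately. A sketch, using the table name from the question:

```sql
-- Rebuilds the heap or clustered index, writing out only live rows,
-- so all ghost-only pages are deallocated in one operation.
ALTER TABLE dbo.TABLE_NAME REBUILD;

-- Afterwards, scans touch only the (now few) allocated pages,
-- so this returns quickly again.
SELECT COUNT(*) FROM dbo.TABLE_NAME;
```

Note the rebuild itself is a sizeable logged operation on a table this large, so on a disk that is already nearly full it may be safer to reclaim log space first.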