Table variables are represented in tempdb as tables. Have you ever tried loading 10 million rows into a table variable? SQL Server has no way of knowing in advance how many rows will be loaded into a table variable, so it must create an object in tempdb. In that sense, table variables are the same as temp tables, apart from naming conventions.

I suggest looking at http://support.microsoft.com/kb/305977, which states: "Table variables are created in the tempdb database similar to temporary tables". So when the article says, "This indicates while the table variable has no internal representation in sysobjects, a temporary table does", this just means that table variables are stored differently in tempdb.

The article says, "You cannot use ...: insert into @table_variable select * from myTable". Well, I do this every day. The statement should probably read: "You cannot use ...: INSERT INTO @table_variable EXEC stored_procedure".
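To make the distinction concrete, here is a minimal sketch (@results and myTable are placeholder names, not from the article):

```sql
-- INSERT ... SELECT into a table variable works fine:
DECLARE @results TABLE (id INT, name VARCHAR(50));
INSERT INTO @results
SELECT id, name FROM myTable;

-- INSERT ... EXEC into a table variable is the actual restriction
-- (not allowed on SQL Server 2000; later versions lifted it):
-- INSERT INTO @results EXEC dbo.GetResults;
```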

My opinion: I like table variables very much. I accept the arguments in the above (Q305977) article without hesitation; in my own words: "lighter vehicle, more performance". The article, although nicely written, falls short in some details. I agree with Greg that we have seen a similar article before. That's not a reason not to post such an article, but I expect and look for an article that surpasses the earlier ones. For myself: I am already convinced to use table variables whenever I feel they are appropriate (I never have to process 10 million rows, but 50 thousand was no problem). What's beyond my way of working is inspecting recompilation details. The above (Q305977) article also leads to (Q243586) "Troubleshooting stored procedure recompilation", which provides more details on recompilations.

Another feature of table variables is that they do not get statistics, which results in some incredibly poor performance when they are misused (particularly by those who think they live in 'memory', not tempdb). For instance, parallelism can be effectively reduced to nil by the injudicious use of table variables.

After a certain size (though who knows what that size is), table variables can HUGELY degrade performance. I recently changed some reports which hung after reaching a certain size: I converted them to use temp tables, and everything was done in about a minute.

As a quantitative example: I just converted a developer's query that used table variables and ran for 15 minutes on a SQL 2005 SP1 64-bit, 8-way, 16GB box with a fairly decent SAN setup, to use temp tables. That was the only change I made. Since this is a reporting server that doesn't have much usage yet, I ran DBCC DROPCLEANBUFFERS, re-ran the SQL script using #temp tables, and got results in 20 seconds.
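For anyone attempting the same kind of conversion, the change is mechanical. A hedged sketch (the column and table names here are invented, not from the actual report):

```sql
-- Before: a table variable
-- DECLARE @work TABLE (id INT, amount MONEY);
-- INSERT INTO @work SELECT id, amount FROM dbo.SourceTable;

-- After: the equivalent temp table (the only change needed)
CREATE TABLE #work (id INT, amount MONEY);
INSERT INTO #work SELECT id, amount FROM dbo.SourceTable;
-- ... rest of the query unchanged, reading from #work instead of @work ...
DROP TABLE #work;

-- To compare cold-cache timings between runs (never on a production box):
DBCC DROPCLEANBUFFERS;
```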

A quote from an email I sent off to some of my developers on the topic a while ago:

Contrary to the popular belief that table variables exists in memory, KB Article 305977 states that "A table variable is not a memory-only structure. Because a table variable might hold more data than can fit in memory, it has to have a place on disk to store data. Table variables are created in the tempdb database similar to temporary tables. If memory is available, both table variables and temporary tables are created and processed while in memory (data cache)."

If you have a minute, check the two procs - do you get parallelism on the table variable one? I never did, and believe it's due to their lack of stats:

Table variables do not maintain statistics like temporary tables can. Statistics cannot be created on table variables through automatic creation or by using the CREATE STATISTICS statement. Therefore, for complex queries on large tables, the lack of statistics may prevent the optimizer from determining the best plan for a query, thus affecting the performance of that query.
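You can verify the restriction yourself with a throwaway pair of objects (a minimal sketch; #t, @t, and st_id are arbitrary names):

```sql
-- Explicit statistics work on a temp table:
CREATE TABLE #t (id INT);
CREATE STATISTICS st_id ON #t (id);
DROP TABLE #t;

-- There is no equivalent for a table variable; the commented-out statement
-- below is not valid syntax, so the optimizer always has to guess the row count:
DECLARE @t TABLE (id INT);
-- CREATE STATISTICS st_id ON @t (id);  -- syntax error
```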

2. Open another connection and run a query which lists space used by temp objects. You should see a strangely named object, something like #67F75FA9; it appears to be a randomly generated name. If you insert rows into the table variable, you'll notice the row count matches.
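One way to see this is to query tempdb's system tables directly. A sketch using SQL Server 2000-era catalog views (on 2005 and later you would query tempdb.sys.objects and sys.partitions instead):

```sql
-- List temp objects in tempdb and their row counts:
SELECT o.name, i.rows
FROM tempdb..sysobjects AS o
JOIN tempdb..sysindexes AS i ON i.id = o.id
WHERE o.name LIKE '#%'
  AND i.indid IN (0, 1);  -- heap or clustered index carries the row count
```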

This discussion keeps coming up and lots of misconceptions will no doubt be cast far and wide.

Note that the points raised concerning table variables were not covered in the article, which was, I thought, pretty good at explaining table variables.

Both temp tables and table variables may be created in memory or in tempdb, according to data set size, resources, etc. You can apply a primary key to a table variable, so in effect you can index a table variable; you don't get any performance benefit from a PK, but you do get sorting. Small data sets are good in table variables, much better than in temp tables; large data sets are not good in any type of temporary data structure.
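A primary key on a table variable has to be declared inline at DECLARE time. A minimal sketch (the names are illustrative):

```sql
DECLARE @orders TABLE (
    order_id INT PRIMARY KEY,   -- creates a unique clustered index under the covers
    customer VARCHAR(50)
);
-- Rows tend to come back in order_id order when the clustered index is scanned,
-- though an explicit ORDER BY is still the only guaranteed sort:
SELECT order_id, customer FROM @orders ORDER BY order_id;
```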

Why on earth anyone could imagine that a parallel plan against a table variable would be a good thing, I can't think. Parallelism is good, but implies a complex query - not really where you'd want to be using temporary tables of any type, surely?

Yup we all know table variables don't do stats - with small data sets it doesn't matter.

Finally, I've been able to obtain some dramatic performance improvements by replacing #temp tables with table variables - but I'm talking about small data sets, as befits an OLTP application.

The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/

We have an application that was written by a consulting firm; they wrote a ton of UDFs that utilize table variables. We had one report in particular that was really slow and caused some blocking on our database. The only change I made was to use temp tables instead of the table variables. It cut the processing time by 70-80% and got rid of the blocking. I'm not suggesting one over the other; what I am suggesting is to test both before deciding.