What we're doing here is creating a procedure that lives for the life of the connection, and which is later used to insert some data into a table. I wondered whether it was possible to make procedures globally temporary, and sure enough it is (note the double hashes ##):
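The original code sample appears to be missing from the post, so here is a minimal sketch of both forms (the procedure name and the `Customers` table are illustrative, not from the original):

```sql
-- Local temporary procedure: visible only to the connection that
-- created it, and dropped automatically when that connection closes.
CREATE PROCEDURE #InsertCustomer
    @Name varchar(50)
AS
    INSERT INTO Customers (Name) VALUES (@Name);
GO

EXEC #InsertCustomer @Name = 'Acme';
GO

-- Global temporary procedure (note the double hashes ##): visible to
-- ALL connections, and dropped once the creating connection closes
-- and no other connection is still using it.
CREATE PROCEDURE ##InsertCustomer
    @Name varchar(50)
AS
    INSERT INTO Customers (Name) VALUES (@Name);
GO
```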

So, what might we use this for? Well I can imagine that when doing a deployment that involves deploying data it may be useful to only have a procedure live for the lifetime of the deployment - this little feature would work perfectly for that.

Did anyone else not know about this or was it just little old me? Can you think of any other uses?


Comments

I can't think of any good use cases for this feature. I've seen it used on a few projects over the past several years, and none of the cases made sense to me--I converted all of them to normal stored procedures. Not to say this -must- be useless; I'm certainly interested in hearing about a solid use case. I just can't think of any myself...

Here's a use case (albeit somewhat contrived): I've got a Vista Gadget that needs to execute a fairly long SQL query that can't use a stored procedure (no changes allowed to the database); it has a fair amount of repetitive code that could be rolled into a temp stored proc, shortening the written code by quite a bit (there's a 6-line SELECT that is repeated 8 times, the only difference being the WHERE clause).
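A sketch of what that refactoring might look like, assuming (hypothetically) that the eight WHERE clauses differ only in a filter value; the table and column names are invented for illustration:

```sql
-- The repeated 6-line SELECT, wrapped once in a local temp proc.
-- No permanent object is created, so it satisfies a
-- "no changes allowed to the database" constraint.
CREATE PROCEDURE #GadgetQuery
    @Status int
AS
    SELECT o.OrderID, o.OrderDate, c.Name,
           o.Total, o.ShipDate, o.Region
    FROM   Orders o
    JOIN   Customers c ON c.CustomerID = o.CustomerID
    WHERE  o.Status = @Status;
GO

EXEC #GadgetQuery @Status = 1;
EXEC #GadgetQuery @Status = 2;  -- ...and so on for each of the 8 variants
```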

This feature used to be heavily used in the RDO days (prior to ADO, early to mid '90s). I have recently seen it re-emerge in some Hibernate systems that use jTDS. It helps lower network traffic, but it also makes performance analysis trickier because you have no choice but to trace SP:StmtCompleted events (rather than the more traditional RPC:Completed) when looking for inefficient queries.

"Aha ! So, those boring old sticks in the mud who manage the server they won't let me produce my own stored procs because of all these 'controls' eh ? Well, now I can deploy my entire application (with global tables and stored procs, invoked by the first instance of the app not to find them open) into tempdb ! plus since they don't really exist, no need to document them."

Reminds me of some old apps I have seen where batch files write out and call new batch files with control flow. Try debugging that!

I never gave it any thought about being easier or not, but I've seen them used along with temp tables, and I assumed this was for isolation. The script is built within the client app. The temp table names have a unique component (ex: #fdd00814a) that is created on the fly, and the procedure uses the same unique part in its name. An example of its use is a table that must gather data for a report to be run from 2 instances of the same app, at roughly the same time, but with different parameter values. Each instance gets its own table with the procedures that reference it.

I believe the examples I've seen were initially developed to migrate this app's methods from client-side data gathering (Btrieve) to server-based.

Tom, the unique part of the #temp table name is internal and should never need to be used in code (in fact, you should get an error if you try to select from #name______fdd00814a rather than #name, though I can't verify at this moment). So why would the *procedure* need to be different? Each connection will have its own copy of the #table and thus there is no need to worry about collision or overlap.
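The internal padding described here can be observed in tempdb's catalog; a quick sketch (the exact padded suffix varies per session, and `#name` is just an example):

```sql
CREATE TABLE #name (id int);

-- The catalog shows the internally padded name, something like
-- #name___________...___00000000001A, which is unique per session.
SELECT name
FROM   tempdb.sys.objects
WHERE  name LIKE '#name%';

-- Code should only ever reference the short name; SQL Server
-- resolves it to this connection's private copy automatically.
SELECT * FROM #name;
```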

If I'm reading Tom's post correctly he means TWO(+?) different connections will access the procedure and temp table, so I assume they're global.

I saw this exact design in a legacy app I worked on five or six years ago. The app would select a huge amount of data into a global temp table with a GUID appended to the end for uniqueness, spin up a couple of global stored procs, and then pass the GUIDs around to a few connections.

And for what purpose? I never figured it out. Hammer looking for a nail syndrome, perhaps? I ended up re-writing the entire module to report straight off the base tables using dynamic SQL in a stored procedure, improving performance by several hundred percent.

I use a #sp when I don't have write access to / don't want to modify any database (msdb, master?), but need a function call in my batch. For example, I modified sp_help_revlogin (which is logically read-only, but needs several calls to sp_hexadecimal) to create sp_hexadecimal as temporary.

I'm only using this for a maintenance task, not from an application. I'm writing it in SQL Server Management Studio on my dev environment and then passing it to the DBA for validation and execution (as a dev I have no access to production).

In the case of an application, I would probably have used a prepared statement, but it would need more work for validation.

My stated analysis was definitely flawed. Even though Adam is correct in that it used global temp tables, my reason of isolation is possibly misguided. It doesn't appear to be necessary, or even the best way to do it, but like I said the code evolved from a legacy app. It has a highly evolved schema, and it may have been done for the convenience of avoiding a total rewrite of the original ad hoc SQL statements. In other words, gradually moving the workload from the client to the server one line at a time. I believe there were, or are, multiple teams of developers involved with this ongoing update.

I use temporary stored procedures all the time to avoid littering the database w/ temporary code for ad-hoc tasks. Most recently, I used them in a script for generating test data.

I have never used them in production code, though. Have yet to find a use-case that justified the pain!

They would be a lot more useful if they could be defined within a batch of other statements. T-SQL needs better constructs! Where are my local function definitions? Where are my local view definitions? CTEs don't cut it!

I really like the fact that for BI data warehouses, I can hide the physical DB from direct user access using views in a separate schema. These views are business logic, as they rename columns for business users, and when bugs in ETL are found (e.g. conforming issues, data quality problems arising later, etc.), data analysts can quickly deploy views that temporarily overcome the issue while the SSIS developers work on a robust solution.

Likewise, stored procs can be called from SSIS packages. The packages alter data coming from production capture systems (eg. a sales website) and then place it in the data warehouse. These dimension and fact tables then underpin significant business decisions. The business logic is in a mixture of the "database" and SSIS packages.

I think being able to deploy many object types, including stored procs, into SQL Server makes a lot of sense, especially with the velocity of change demanded by business today. It provides a central repository for DBAs to assess a production system. Trawling through developer repositories can be quite tricky at 3am.

A temporary stored procedure will work perfectly for the dynamic SQL that I'm building up. The SQL will never be the same, and because I'm using a mixture of C# and VBScript, the temporary stored procedure offers me a solution. I can't create a permanent stored procedure, because the procedure will change each time it is called.
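A rough sketch of the pattern, assuming the client sends the freshly generated CREATE PROCEDURE text as its own batch on a connection it keeps open (the procedure name and query are hypothetical):

```sql
-- Batch 1: the client sends the procedure text it just built.
-- CREATE PROCEDURE must be the only statement in its batch.
CREATE PROCEDURE #RunReport
    @From date, @To date
AS
    SELECT SaleID, SaleDate, Amount   -- body generated at run time
    FROM   Sales
    WHERE  SaleDate BETWEEN @From AND @To;
GO

-- Batch 2, same connection: execute it (possibly many times),
-- then simply close the connection to clean up.
EXEC #RunReport @From = '2010-01-01', @To = '2010-12-31';
```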

A scenario in which I'm working is: I have a script I need to turn into a re-entrant script, and I have several objects whose existence I need to check... so creating a temporary function allows me to save lines of code checking their existence.
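T-SQL doesn't support temporary user-defined functions, so in practice this is usually done with a local temporary procedure instead; a sketch (all object names are illustrative):

```sql
-- A local temp proc that reports whether an object exists, so the
-- re-runnable script doesn't repeat the OBJECT_ID check inline.
CREATE PROCEDURE #ObjectExists
    @Name   sysname,
    @Exists bit OUTPUT
AS
    SET @Exists = CASE WHEN OBJECT_ID(@Name) IS NOT NULL
                       THEN 1 ELSE 0 END;
GO

DECLARE @Exists bit;
EXEC #ObjectExists @Name = N'dbo.Customers', @Exists = @Exists OUTPUT;
IF @Exists = 0
    PRINT 'dbo.Customers is missing - creating it...';
```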