Insert millions of records into a SQL Server table at once

In this post we will see how to insert a really large number of records into a SQL Server table at once. I decided to write about this because I recently found myself having difficulties while trying to deploy a specific table to an Azure SQL Database. There are several tools and options for SQL schema or SQL data comparison, with Red Gate’s SQL Compare and Visual Studio’s SQL Server Data Tools being the most significant in my opinion. Sometimes though, even those tools can disappoint you for unknown reasons, right when you urgently need to deploy your new data. In my case, I had a table with 2 million records in my local SQL Server and I wanted to deploy all of them to the respective Azure SQL Database table. I’d never had problems deploying data to the cloud, and even when I did due to certain circumstances (no comparison keys between tables, clustered indexes, etc.), there was always a workaround to fix the problem. This time though, nothing seemed to work, so I decided to write some simple code in a console application to deploy the 2 million records.
In the following code I read all the records from the local SQL Server table and, in a foreach loop, insert each record into the cloud table. Let’s see it and then discuss it.
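The original snippet isn’t reproduced here, so what follows is only a minimal sketch of that row-by-row approach. The connection strings, the dbo.MyTable name and the Id, Name, CreatedOn columns are placeholders, not the actual schema from my project.

```csharp
using System.Data;
using System.Data.SqlClient;

class RowByRowCopy
{
    static void Main()
    {
        const string localConnectionString = "<local connection string>";
        const string azureConnectionString = "<azure connection string>";

        var sourceTable = new DataTable();

        // Read every record from the local table into memory.
        using (var localConnection = new SqlConnection(localConnectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT Id, Name, CreatedOn FROM dbo.MyTable", localConnection))
        {
            adapter.Fill(sourceTable);
        }

        // Insert the rows one by one into the cloud table,
        // opening and closing a connection for every single record.
        foreach (DataRow row in sourceTable.Rows)
        {
            using (var cloudConnection = new SqlConnection(azureConnectionString))
            using (var command = new SqlCommand(
                "INSERT INTO dbo.MyTable (Id, Name, CreatedOn) VALUES (@Id, @Name, @CreatedOn)",
                cloudConnection))
            {
                command.Parameters.AddWithValue("@Id", row["Id"]);
                command.Parameters.AddWithValue("@Name", row["Name"]);
                command.Parameters.AddWithValue("@CreatedOn", row["CreatedOn"]);

                cloudConnection.Open();
                command.ExecuteNonQuery();
            }
        }
    }
}
```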

While the preceding code is self-explanatory, easy to understand and does the job, it has the huge drawback of slow execution time. Think for a little about what is actually happening: for each record we read, we open a connection to the cloud server, copy the record to the table, close the connection, and so on until all 2,000,000 records are successfully copied. Considering that, following this pattern, you need approximately half a second to copy a record to the cloud, copying the entire table would take… 277 hours! Yeah, I don’t have that much time, so I simply changed the code to use SqlBulkCopy instead of the foreach loop. Take a look at the change in the following code.
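Again, the original snippet isn’t shown here; this is a minimal sketch of the SqlBulkCopy version, with the same placeholder connection strings, table and column names as before. Note that in this sketch the whole copy is wrapped in a single transaction on the Azure connection, which matters for what follows.

```csharp
using System.Data;
using System.Data.SqlClient;

class BulkCopyToAzure
{
    static void Main()
    {
        const string localConnectionString = "<local connection string>";
        const string azureConnectionString = "<azure connection string>";

        var sourceTable = new DataTable();

        // Read all records from the local table.
        using (var localConnection = new SqlConnection(localConnectionString))
        using (var adapter = new SqlDataAdapter(
            "SELECT Id, Name, CreatedOn FROM dbo.MyTable", localConnection))
        {
            adapter.Fill(sourceTable);
        }

        using (var cloudConnection = new SqlConnection(azureConnectionString))
        {
            cloudConnection.Open();

            // One external transaction for the whole copy: nothing is visible
            // on the Azure side until it commits.
            using (var transaction = cloudConnection.BeginTransaction())
            using (var bulkCopy = new SqlBulkCopy(cloudConnection, SqlBulkCopyOptions.Default, transaction))
            {
                bulkCopy.DestinationTableName = "dbo.MyTable";
                bulkCopy.BatchSize = 10000;   // rows sent to the server per batch
                bulkCopy.BulkCopyTimeout = 0; // 0 = no time limit for the operation

                try
                {
                    bulkCopy.WriteToServer(sourceTable);
                    transaction.Commit();
                }
                catch
                {
                    transaction.Rollback();
                    throw;
                }
            }
        }
    }
}
```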

SqlBulkCopy is the ultimate solution when you need to copy a large amount of data into another table. You can find its documentation here, along with the respective options for using it. The main difference between our first attempt and the latter is that SqlBulkCopy can send a batch of records at once instead of copying them one by one each time. With this change, it took about half an hour to deploy all the records. The most significant properties are the following (taken from MSDN):

BulkCopyTimeout: the number of seconds allowed for the operation to complete before it times out

DestinationTableName: the name of the destination table on the server

BatchSize: the number of rows in each batch (at the end of each batch, the rows in the batch are sent to the server)

The previous code encapsulates the whole process in a single cloud transaction, which means that you won’t see any data in the cloud until the transaction is complete and committed. In my case, I was running the following commands against the Azure SQL Server to verify that the data was actually being deployed.
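A sketch of the kind of check this refers to (dbo.MyTable is a placeholder; the NOLOCK hint is what lets you read rows that have been written but not yet committed):

```sql
-- Watch the row count grow while the bulk copy is still running;
-- NOLOCK reads rows that are written but not yet committed.
SELECT COUNT(*) FROM dbo.MyTable WITH (NOLOCK);
```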

That’s it. I hope you found this post helpful, in which we saw how to commit a large amount of data to a SQL Server table.

P.S.: Having reached 100,000 views in less than a year, I would like to thank you all for visiting this blog; it really means a lot to me. I wish I had more time to write even more. Stay tuned till next time!

I had the opposite need in a project once: to get data from one database and pass it to another, and because of connectivity issues I could not get the entire thing at once. My solution was to fetch the result set in chunks and pass it along chunk by chunk. This is a small part of the code I used for getting the data; you might find it handy in similar cases:
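The original snippet isn’t preserved here, so the following is only a rough sketch of that chunk-by-chunk idea. The table, columns and chunk size are placeholders, and OFFSET/FETCH paging is just one way to do it (it assumes SQL Server 2012 or later and a stable ORDER BY key):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

static class ChunkedReader
{
    public static void ReadInChunks(string connectionString, int chunkSize, Action<DataTable> processChunk)
    {
        int offset = 0;

        while (true)
        {
            var chunk = new DataTable();

            // Fetch the next page of rows from the source table.
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT Id, Name, CreatedOn FROM dbo.MyTable ORDER BY Id " +
                "OFFSET @Offset ROWS FETCH NEXT @ChunkSize ROWS ONLY", connection))
            {
                command.Parameters.AddWithValue("@Offset", offset);
                command.Parameters.AddWithValue("@ChunkSize", chunkSize);

                using (var adapter = new SqlDataAdapter(command))
                {
                    adapter.Fill(chunk);
                }
            }

            if (chunk.Rows.Count == 0)
                break;                // no more data to read

            processChunk(chunk);      // e.g. bulk copy this chunk to the destination
            offset += chunk.Rows.Count;
        }
    }
}
```

You would then call ReadInChunks with the source connection string and, inside the callback, push each DataTable to the destination, for example with the SqlBulkCopy approach shown earlier.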
