SQL With Mangal


Saturday, July 27, 2013

Today I’ll show you a few different methods to run a DML operation such as an UPDATE in small batches against a large table. Although I’ll only show UPDATE statements, the methods apply to DELETE and INSERT as well. When you need to run an UPDATE (or another DML operation) over a very large table, one with hundreds of millions or even billions of rows, it is recommended that you do not update them all in one go. DML queries that touch a very large number of rows can cause performance issues, fill up the transaction log, block concurrent users, eat up a lot of server resources, and run for hours. A good practice is to break the work into small batches of a few thousand rows each and update them batch by batch. This way you use minimal server resources and, most importantly, you prevent the transaction log file from getting full.

A recursive CTE is not the most efficient way to generate random sample data, but I like it, so that’s what I’m using.
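The original sample-data script isn’t reproduced in this copy of the post; a sketch of the idea, with assumed table and column names (dbo.TestData, Id, Flag) standing in for the post’s originals, could look like this:

```sql
-- Hypothetical setup: generate 10,000 rows with a recursive CTE.
-- Table and column names (dbo.TestData, Id, Flag) are assumptions, not the post's originals.
;WITH Numbers (Id) AS
(
    SELECT 1
    UNION ALL
    SELECT Id + 1 FROM Numbers WHERE Id < 10000
)
SELECT Id,
       ABS(CHECKSUM(NEWID())) % 1000 AS RandomValue,
       0 AS Flag
INTO   dbo.TestData
FROM   Numbers
OPTION (MAXRECURSION 10000);  -- default recursion limit is 100, so raise it
```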

I. Using TOP and GO:

This is the simplest method to run a query in small batches. I frequently use it in development when I want to update some records quickly without thinking much. Note the GO 10 in the query: an integer after GO executes the preceding batch that many times. In this case the UPDATE statement executes 10 times (I know there are 10,000 rows in the table and I’m using TOP (1000), so it’s simple math).
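The query itself isn’t shown in this copy; a sketch of the pattern, using assumed names (dbo.TestData, Flag) rather than the post’s originals, would be:

```sql
-- Hypothetical names (dbo.TestData, Flag); updates 1,000 rows per batch, 10 batches.
UPDATE TOP (1000) dbo.TestData
SET    Flag = 1
WHERE  Flag = 0;
GO 10
```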

All the queries are quite simple and self-explanatory, so I’m not going to explain them in detail.

II. Using ROWCOUNT and GO:

Same query, but without the TOP operator. Here I’m using SET ROWCOUNT (a SET option, not a function) to limit the number of rows in the batch. SET ROWCOUNT causes SQL Server to stop processing the query after the specified number of rows is affected.
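Again the original query isn’t shown here; a sketch with the same assumed names could look like:

```sql
-- Hypothetical names; SET ROWCOUNT limits how many rows the UPDATE touches.
SET ROWCOUNT 1000;

UPDATE dbo.TestData
SET    Flag = 1
WHERE  Flag = 0;

SET ROWCOUNT 0;  -- reset so later statements are unaffected
GO 10
```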

IMPORTANT (from Books Online): Using SET ROWCOUNT will not affect DELETE, INSERT, and UPDATE statements in a future release of SQL Server. Avoid using SET ROWCOUNT with DELETE, INSERT, and UPDATE statements in new development work, and plan to modify applications that currently use it. For a similar behavior, use the TOP syntax.

III. Using TOP and GOTO:

Now instead of GO I’ll use GOTO to run the batch multiple times. The GOTO statement causes execution to jump to the label it names, skipping the statements in between, and processing continues from that label. Here I’ll use GOTO to keep re-running a labeled UPDATE until its @@ROWCOUNT becomes zero.
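A sketch of that loop, again with assumed names (dbo.TestData, Flag):

```sql
-- Hypothetical names; the label re-runs the batch until no rows are left to update.
UpdateBatch:
    UPDATE TOP (1000) dbo.TestData
    SET    Flag = 1
    WHERE  Flag = 0;

    IF @@ROWCOUNT > 0 GOTO UpdateBatch;
```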

VI. Using a Sequence column and WHILE:

Now an entirely different method, using no GO, GOTO, or BREAK. This method is more systematic: you have more control over how the query executes. Note that when you use TOP or ROWCOUNT you have no control over which rows get updated; the ordering depends entirely on the query plan chosen by the query engine.
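The original query isn’t shown in this copy; a sketch of the idea, assuming a sequential Id column on the hypothetical dbo.TestData table, could be:

```sql
-- Sketch only: walk the Id range in fixed-size windows with a WHILE loop.
-- All names (dbo.TestData, Id, Flag) are assumptions, not the post's originals.
DECLARE @BatchSize INT = 1000;
DECLARE @MinId INT, @MaxId INT;

SELECT @MinId = MIN(Id), @MaxId = MAX(Id)
FROM   dbo.TestData;

WHILE @MinId <= @MaxId
BEGIN
    UPDATE dbo.TestData
    SET    Flag = 1
    WHERE  Id >= @MinId
      AND  Id <  @MinId + @BatchSize;

    SET @MinId = @MinId + @BatchSize;
END
```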

Some notes: both TOP and ROWCOUNT can be parameterized; we can pass a variable instead of a hard-coded number. But of course, since SET ROWCOUNT is slated for deprecation in a future SQL Server version, one should avoid using it. I was lazy and didn’t use transactions (COMMIT and ROLLBACK) where possible in the examples above, but you should use them for more control, clearer code, and better understanding.

I haven’t really tested which query performs best; I leave that up to you. The idea was to share different methods to execute a query in small batches, though I would go for method 6 or 7 in a production environment. Do let me know your comments and suggestions, and what you think of all these methods.

Sunday, November 14, 2010

Today I am going to talk about a new feature introduced in SQL Server Denali: an interesting enhancement to the ORDER BY clause. With ORDER BY you can now also specify the OFFSET and FETCH options.

From the Denali Books Online: OFFSET: Specifies the number of rows to skip before starting to return rows from the query expression. The value can be an integer constant or expression that is greater than or equal to zero.

FETCH: Specifies the number of rows to return after the OFFSET clause has been processed. The value can be an integer constant or expression that is greater than or equal to one.

In the example I wrote 5 as the OFFSET. You can see in the output that the first five rows are skipped and we get VendorIDs starting from 6. SQL Server first orders the data on the column specified in the ORDER BY clause (i.e. VendorID); the OFFSET 5 ROWS clause then skips the first 5 rows and returns all remaining rows.
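The query from the post isn’t reproduced in this copy; against an AdventureWorks-style vendor table (table and column names assumed) it would look something like:

```sql
-- Names assumed; OFFSET 5 ROWS skips the first five rows of the ordered result.
SELECT VendorID, Name
FROM   Purchasing.Vendor
ORDER  BY VendorID
OFFSET 5 ROWS;
```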

As you can see, SQL Server has fetched only 3 rows, and only after skipping the first five. That is because I specified 5 as the OFFSET and 3 in FETCH NEXT: the FETCH NEXT 3 ROWS ONLY clause limits the result to 3 rows from the sorted result set.
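With the same assumed names, the combined OFFSET/FETCH query would be:

```sql
-- Names assumed; skips 5 rows, then returns the next 3 of the ordered result.
SELECT VendorID, Name
FROM   Purchasing.Vendor
ORDER  BY VendorID
OFFSET 5 ROWS
FETCH NEXT 3 ROWS ONLY;
```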

Also: 1. offset_row_count_expression can be a variable, parameter, or constant scalar subquery. When a subquery is used, it cannot reference any columns defined in the outer query scope. That is, it cannot be correlated with the outer query.

2. ROW and ROWS are synonyms and are provided for ANSI compatibility.

3. In query execution plans, the offset row count value is displayed in the Offset attribute of the TOP query operator.

Wednesday, September 1, 2010

Everybody has different views about VIEWS; that’s what makes them an interesting topic to discuss. Another thing: if you make assumptions about VIEWS, they can lead you into problems. As my target audience is SQL beginners, today I’ll talk about a few things about VIEWS so that some of the obvious mistakes can be avoided. There are actually many things we could discuss about VIEWS, but I’ll leave those for future posts.

Today I’ll show what happens when you create a view using “SELECT * FROM TableName” and then ALTER the underlying table used in the VIEW. The usual assumption most SQL beginners make is: if you create a VIEW using “SELECT * FROM TableName”, all changes made to the underlying table will automatically be reflected in the VIEW as well.

Case 1: You add a column to the table. Here is a question for you: you have a table named Employees and a VIEW created on top of it with the simple query “SELECT * FROM Employees”. If I now add a column to the Employees table, will that column appear in the result when I execute “SELECT * FROM View”?

I have asked this question many times in interviews; 80% of the time I heard a thumping “Yes”, and most of those candidates had well over 3 years of experience. The answer to the above question is a BIG NO. Let me explain this with an actual example; first let’s create some sample data.
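The setup script isn’t shown in this copy of the post; a sketch of it, using the Employees table and Emp view named in the post but with an assumed column list, could be:

```sql
-- Sketch of the setup; the Employees/Emp names come from the post,
-- but the exact column list is an assumption.
CREATE TABLE dbo.Employees
(
    EmpID     INT IDENTITY(1,1) PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName  VARCHAR(50)
);
GO

CREATE VIEW dbo.Emp
AS
SELECT * FROM dbo.Employees;
GO

-- Now alter the underlying table:
ALTER TABLE dbo.Employees ADD HireDate DATETIME;
GO
```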

-- Again verify the data in the table and the view
SELECT * FROM Employees
GO
SELECT * FROM Emp
GO

As you can see, the HireDate column didn’t appear in the VIEW. So the question arises: why doesn’t a VIEW get refreshed when you add a column to the table? The short answer: when you create a VIEW, its column information/definition (the VIEW’s metadata) is stored in system tables at creation time, and that metadata does not get refreshed when you alter the underlying table. You have to refresh the VIEW’s metadata explicitly.

So the next question is: how do you refresh the VIEW once you’ve modified the underlying table? There are 2 ways:

1. Using the system stored procedure sp_refreshview. From Books Online: sp_refreshview updates the metadata for the specified non-schema-bound view. Persistent metadata for a view can become outdated because of changes to the underlying objects upon which the view depends. Syntax: EXECUTE sp_refreshview ‘viewname’

2. Or by executing ALTER VIEW statement. When you ALTER the VIEW, SQL Server will pick the latest column definition from underlying table and will update the VIEW metadata.
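The refresh statements themselves aren’t shown in this copy; for the Emp view they would be:

```sql
-- Option 1: refresh the view's metadata via the system stored procedure.
EXECUTE sp_refreshview 'dbo.Emp';
GO

-- Option 2: re-issue the view definition so SQL Server re-reads the table's columns.
ALTER VIEW dbo.Emp
AS
SELECT * FROM dbo.Employees;
GO
```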

-- Verify the data in the view
SELECT * FROM Emp
GO

Note that you don’t need to execute both queries; either of the above will do the trick. Now you can see the HireDate column in the Emp view as well:

Case 2: You drop a column from the table. Similarly, when you drop a column from the table, the VIEW definition doesn’t get updated, even though you used the wildcard “*” in the VIEW definition. Now let’s drop the HireDate column from the table and see what happens. Note: in the previous step I refreshed the VIEW after adding HireDate, so HireDate is now part of the Emp view as well.

-- First verify the data before dropping the column
SELECT * FROM Employees
GO
SELECT * FROM Emp
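The drop itself isn’t shown in this copy; it would be:

```sql
-- Drop the column from the base table; the view's metadata is NOT updated.
ALTER TABLE dbo.Employees DROP COLUMN HireDate;
GO

SELECT * FROM dbo.Emp;  -- this now fails with Msg 4502
GO
```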

Now the HireDate column is removed from the table, but the metadata of the VIEW still has its information stored, so you will get the following error when selecting data from the VIEW:

Msg 4502, Level 16, State 1, Line 1
View or function 'Emp' has more column names specified than columns defined.

Case 3: You drop one or more columns and add an equal or greater number of columns to the table. This case is even more dangerous, because a user selecting data from the VIEW can get data under the wrong column names, creating confusion. I won’t explain this in detail; just execute the following queries and you will see what I mean.
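Those queries aren’t shown in this copy; a sketch using the LastName/DeptName columns mentioned in the post:

```sql
-- Drop one column and add another; counts match, so the view won't error out.
ALTER TABLE dbo.Employees DROP COLUMN LastName;
GO
ALTER TABLE dbo.Employees ADD DeptName VARCHAR(50);
GO

SELECT * FROM dbo.Employees;  -- DeptName data under the DeptName header
SELECT * FROM dbo.Emp;        -- same data, but shown under the stale LastName header
GO
```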

Now if you look, the LastName column is no longer present in the table, and you can see the new DeptName column. The interesting observation with the VIEW is that although the data exactly matches the table, the column names are not correct. We again need to refresh the VIEW to fix it.

What is the solution? The obvious precaution is: don’t use the wildcard “*” when creating VIEWS. But even listing out the columns is just a precaution, or I’d say a good practice, because even with the columns listed explicitly, if you drop a table column that is used in a VIEW you will still face problems.

The solution is to create the view using the “WITH SCHEMABINDING” option. From Books Online: Binds the view to the schema of the underlying table or tables. When SCHEMABINDING is specified, the base table or tables cannot be modified in a way that would affect the view definition. The view definition itself must first be modified or dropped to remove dependencies on the table that is to be modified. When you use SCHEMABINDING, the select_statement must include the two-part names (schema.object) of tables, views, or user-defined functions that are referenced. All referenced objects must be in the same database.

Views or tables that participate in a view created with the SCHEMABINDING clause cannot be dropped unless that view is dropped or changed so that it no longer has schema binding. Otherwise, the Database Engine raises an error. Also, executing ALTER TABLE statements on tables that participate in views that have schema binding fail when these statements affect the view definition.
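The schema-bound view and the failing ALTER aren’t shown in this copy; a sketch of them, using the NewEmp and EmpID names that appear in the error below:

```sql
-- Sketch; the NewEmp view and EmpID column names come from the error in the post.
CREATE VIEW dbo.NewEmp
WITH SCHEMABINDING
AS
SELECT EmpID, FirstName   -- columns listed explicitly; two-part table name required
FROM   dbo.Employees;
GO

-- This ALTER now fails because NewEmp is schema-bound to EmpID:
ALTER TABLE dbo.Employees DROP COLUMN EmpID;
```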

Now you will get the error:

Msg 5074, Level 16, State 1, Line 2
The object 'NewEmp' is dependent on column 'EmpID'.
Msg 4922, Level 16, State 9, Line 2
ALTER TABLE DROP COLUMN EmpID failed because one or more objects access this column.

Basically WITH SCHEMABINDING has prevented the change that would affect the view definition.

Additional Information: A] If you want to see all the objects dependent on a particular table, you can use the following script:
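The post’s original script isn’t shown in this copy; one possible version, using the sys.dm_sql_referencing_entities dynamic management function (available from SQL Server 2008 onwards), could be:

```sql
-- List objects (views, procedures, functions...) that reference the Employees table.
SELECT referencing_schema_name,
       referencing_entity_name
FROM   sys.dm_sql_referencing_entities('dbo.Employees', 'OBJECT');
```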

Tuesday, August 24, 2010

Today 2 of my colleagues from the reporting team had this requirement: they had a table where seconds were stored as INT, and in a report they wanted to convert the seconds to HH:MM:SS format. They already had one solution ready. It was something like:
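Their exact query isn’t shown in this copy; a sketch of an arithmetic approach of that kind, with an assumed @Seconds variable standing in for the table column, could be:

```sql
-- Sketch only; @Seconds is an assumed variable, not the colleagues' original code.
DECLARE @Seconds INT = 3661;

SELECT RIGHT('0' + CAST(@Seconds / 3600 AS VARCHAR(10)), 2) + ':'
     + RIGHT('0' + CAST((@Seconds % 3600) / 60 AS VARCHAR(2)), 2) + ':'
     + RIGHT('0' + CAST(@Seconds % 60 AS VARCHAR(2)), 2) AS [HH:MM:SS];
-- 3661 seconds -> '01:01:01'
```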

It was working fine, but my colleagues were looking for something different, something more elegant, and I jumped in to help. These days I hardly get any chance to write SQL, so I don’t let such opportunities go. I had a solution in mind using CONVERT and style 108:
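The query itself isn’t shown in this copy; my reconstruction of the CONVERT/style-108 idea (add the seconds to day zero, then format as hh:mi:ss):

```sql
-- Style 108 formats a datetime as hh:mi:ss; @Seconds is an assumed variable.
DECLARE @Seconds INT = 3661;

SELECT CONVERT(VARCHAR(8), DATEADD(SECOND, @Seconds, 0), 108);  -- '01:01:01'
```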

The problem with the above query is that it fails when the number of seconds exceeds 86399 (there are 86400 seconds in a day). So if the number of seconds is 86400, the query shows 00 hours instead of 24 hours. See the following example:
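A sketch of that failure case, under the same assumptions as above:

```sql
-- With 86400 seconds the date rolls over to the next day, so style 108 wraps to zero.
DECLARE @Seconds INT = 86400;

SELECT CONVERT(VARCHAR(8), DATEADD(SECOND, @Seconds, 0), 108);  -- '00:00:00', not '24:00:00'
```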

I don’t know which approach is better; you tell me which one you like. Looking at them, I think both queries will give nearly identical performance; the first just looks like a mathematical solution while my approach looks like a SQL solution. If you know any other approach, please feel free to share.

Friday, August 20, 2010

After remaining quiet for almost a year, I’m back with what I enjoy the most: talking about SQL and sharing whatever little knowledge I have. Many SQL developers have this misconception: “Primary key => Clustered Index: only a Clustered Index can exist on a primary key column.” On numerous occasions I have had a tough time explaining that this is not always the case: you can create a Non-Clustered Index on a primary key column. But when this hot discussion happens across a coffee table and I’m away from a computer, I’m helpless. So I finally decided to write about it.

You can create a Non-Clustered Index on a primary key column. Or, to put it in Myth Busters terms: “A primary key column can exist/survive without a Clustered Index.” Yes, it is a fact that PRIMARY KEY constraints default to a CLUSTERED index, but that doesn’t mean you CAN’T create a non-clustered index on a primary key column. You can also create a Clustered Index on a non-primary-key column. Let me show you this with some simple examples.

Case 1: First let’s see what happens when you specify only PRIMARY KEY and nothing else. In this case, YES, by default a Clustered Index is created on the primary key column.

USE tempdb
GO

CREATE TABLE MyTable1
(
    Id    INT PRIMARY KEY,
    Dates DATETIME
)
GO

You can see that, as expected, a clustered index got created on the Id column.

Case 2: But just by adding the NONCLUSTERED keyword after PRIMARY KEY, you can tell SQL Server to create a Non-Clustered Index instead of the default Clustered one.
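The CREATE statement isn’t shown in this copy; following the pattern of MyTable1, it would look something like this (MyTable2 is an assumed name):

```sql
-- MyTable2 is an assumed name following the post's pattern.
-- NONCLUSTERED overrides the default index type for the primary key.
CREATE TABLE MyTable2
(
    Id    INT PRIMARY KEY NONCLUSTERED,
    Dates DATETIME
)
GO
```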

As you can clearly see in the image, a non-clustered index got created on the Id column, which is also the primary key.

Case 3: Now here is a small trick: you can get SQL Server to create a non-clustered index on a primary key column even without writing NONCLUSTERED in front of it. Yes, there is an exception to the rule “PRIMARY KEY constraints default to a CLUSTERED index” even when you don’t specify NONCLUSTERED. How? Simply by creating a Clustered Index on another column while creating the table.
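The statement isn’t shown in this copy; one way to request the clustered index inline during CREATE TABLE is a UNIQUE CLUSTERED constraint on the other column (MyTable3 is an assumed name, and the post’s original may have used a different constraint):

```sql
-- MyTable3 is an assumed name; the clustered index goes to Dates,
-- so the primary key on Id falls back to a non-clustered index.
CREATE TABLE MyTable3
(
    Id    INT PRIMARY KEY,
    Dates DATETIME UNIQUE CLUSTERED
)
GO
```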

See the image: a Non-Clustered Index got created on the Id column and a Clustered Index on the Dates column.

The obvious question is: why didn’t SQL Server create the Clustered Index on the Id column this time? The answer is very simple if you know the basic rule: you can have only one Clustered Index per table. Since the CREATE statement forced SQL Server to create a clustered index on the Dates column, SQL Server had no choice but to create a non-clustered index on the Id column.

What did we learn today? Honestly speaking, I didn’t say anything new; experts and people with good knowledge of SQL Server already knew this. But there is an interesting lesson here: yes, there are some DEFAULTs set by SQL Server, but that doesn’t stop you from telling SQL Server, “Boss, enough of your DEFAULTs, now let me take control.” In the early days of learning SQL we all get into the habit of relying on SQL Server’s defaults, and we get so used to them that we start treating them as RULES that can’t be broken.

Actually, I think there is no harm in taking a little extra effort, writing a few extra keywords, and telling SQL Server exactly what you want and expect.

You also learned how to create a Clustered Index on a column of your choice. This can be very useful when you don’t want a Clustered Index on the primary key, especially in cases where you are using a GUID as the primary key (I hate them) and you want the Clustered Index created on some other column.