From a SQL Server perspective, this isn't very advanced code. We can trap for the text phrase "char(" in a trigger and effectively roll back the transaction. Other attacks of this sort have intentionally obfuscated the SQL injection code so that we didn't have such an easy time of it and couldn't write simple triggers to protect ourselves when the web application failed. And that's a key point that always needs saying: SQL injection is an application-layer weakness, not a database one. The application is permitting bad input to come in and then passing it on to the database server. But if you need to protect the web site and updating the application code will take too long, simple exploits like this one can be prevented with a relatively easy-to-write trigger.
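As a rough sketch of what such a stopgap trigger might look like (the table name dbo.Comments and column Body are assumptions for illustration, not from the original attack):

```sql
-- Hypothetical example: table and column names are assumptions.
-- Rolls back any INSERT or UPDATE whose text contains "char(",
-- a string commonly seen in the obfuscated payloads discussed above.
CREATE TRIGGER trg_BlockCharInjection
ON dbo.Comments
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (
        SELECT 1
        FROM inserted
        WHERE CHARINDEX('char(', Body) > 0
    )
    BEGIN
        ROLLBACK TRANSACTION;
        RAISERROR('Potential SQL injection payload rejected.', 16, 1);
    END
END;
```

Note this only catches the one literal string; as the article says, a determined attacker can obfuscate around it, so this buys time rather than fixing the application.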

Comments

Posted by Steve Jones on 6 April 2011

Absolutely true. SQL injection is the SQL Server executing legitimate commands that we might run from any query connection, but doing so in a way that we never intended the application to work.

It's why we need to make sure we have a well-thought-out, minimum level of permissions.
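A minimum-permission setup for a web application's login might look something like this (a sketch; the role, user, and dbo.Preferences table are assumptions, with setPreference borrowed from the example later in this thread):

```sql
-- Hypothetical least-privilege setup: the web application's login
-- gets EXECUTE on specific procedures only, never direct table access.
CREATE ROLE WebAppRole;
GRANT EXECUTE ON dbo.setPreference TO WebAppRole;
-- Explicitly deny direct DML against the underlying table (assumed name).
DENY SELECT, INSERT, UPDATE, DELETE ON dbo.Preferences TO WebAppRole;
-- Add the application's database user (assumed name) to the role.
EXEC sp_addrolemember 'WebAppRole', 'WebAppUser';
```

With this arrangement, even a successful injection through the application can only call the procedures it was already allowed to call.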

Posted by SQL Server Youngling on 12 April 2011

I don't understand why an "unparsed" qualification hasn't been implemented. Why can't it be? The type of a column, variable, parameter, etc. is a qualification. The length of a character string is a qualification. The number of digits after a decimal point is a qualification, and so on. Add the ability to qualify data so that it is treated as just data, and SQL injection largely goes away.

An example of the "unparsed" qualification:

create proc setPreference (
    @Preference nvarchar(100) unparsed = 'Unknown'
) ...

/*
Every time @Preference was encountered in the code of the stored procedure it would not be parsed, but treated as data (potentially hostile).
*/

I hedged with "largely" for situations where a SQL statement is built up and then run -- to the SQL parser it's just a text string. The exception would be if the string were built to include something like "cast(@Preference as unparsed)", in which case @Preference would keep its original datatype and remain unparsed.

If the application(s) should stop everything, then by extension why have database constraints at all? After all, applications only allow valid data through, right?

The database is the place, the one place, where SQL injection should be stopped. Implementing SQL injection defense in every application accessing a database results in a weakest-link situation -- the worst implementation sets your defense level.

Posted by K. Brian Kelley on 12 April 2011

Let's consider a text string, say NVARCHAR(2000). Users are able to enter content on the site. That's the business requirement. So that means formatting, etc. That could possibly mean HTML code. You want to build the checks for that code into SQL Server via a trigger?

Remember, the SQL injection attacks we started seeing worked because the information going back to the database was valid. It was text. The problem was that the text was being retrieved and displayed on the application web pages. And embedded in that text were JavaScript calls, HTML code elements, etc. When we got wise and started looking for specific text strings, the attackers used some of the obfuscation techniques available in JavaScript to make things unreadable to humans... basically you needed a JavaScript parser to interpret them. Want to build that into the database, too?

Those things should be in the application layer because ultimately, these SQL injection attacks fit the proper domain as far as the database server knows and understands. Some more stringent rules can be enforced by triggers, but anything more complex needs to be at the application layer.
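Within the database itself, the closest thing we have today to the "unparsed" qualification discussed above is parameterized dynamic SQL: sp_executesql keeps a parameter's value as typed data rather than concatenating it into the statement text. A sketch, with the dbo.Preferences table and its columns assumed for illustration:

```sql
-- Hypothetical example: dbo.Preferences and its columns are assumptions.
-- Hostile-looking input stays just data because it is passed as a
-- typed parameter, never parsed as part of the SQL statement itself.
DECLARE @Preference nvarchar(100) = N'char(0x31)';
DECLARE @UserId int = 42;
DECLARE @sql nvarchar(500);

SET @sql = N'UPDATE dbo.Preferences
             SET PreferenceValue = @Pref
             WHERE UserId = @Id;';

EXEC sp_executesql
    @sql,
    N'@Pref nvarchar(100), @Id int',
    @Pref = @Preference,
    @Id = @UserId;
```

This protects against injection into the statement, but as the reply above notes, it does nothing about content that is valid text yet dangerous when later rendered by the application -- that check still belongs at the application layer.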