Category: Databases

I think this is a subjective matter, as there are strong arguments for both sides. In my experience, I prefer using ORMs for simple applications built by a small team. For enterprise apps with many developers, nothing beats the simplicity and performance of pure TSQL (or SQL for that matter). Yes, there can be a lot of boilerplate code, but that part can be automated.

SQL Server objects can be used cross-platform. A TSQL UDF, view, or stored procedure can easily be used by any application, regardless of the platform: e.g., a PHP/Java/Ruby/CF app can use the same SQL Server stored procedure that a C# app is using.
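As a minimal sketch (the procedure, table, and column names here are made up for illustration), any client with a SQL Server driver calls it the same way:

-- A hypothetical shared procedure; any platform's SQL Server driver can EXEC it.
CREATE PROCEDURE dbo.GetActiveCustomers
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, FirstName, LastName
    FROM dbo.Customer
    WHERE IsAllowed = 1;
END

-- The call is identical from PHP, Java, Ruby, CF, or C#:
-- EXEC dbo.GetActiveCustomers;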

This is so cool and handy. You can hide and organize database objects (procs, tables, views, databases) in folders. This organization is saved either locally to your machine or at the server level, via extended properties. If you decide to save the info on the server, someone else who has the extension will see it organized the same way.

Sometimes it can be a pain. Why? Because SSMS (or should I say SSIS) uses the Windows Excel driver, which infers the columns’ data types incorrectly and will choke when you import. Sometimes it will work, sometimes not. So you have to import everything as text. However, first you have to re-save the Excel file as a CSV (which will by default use commas as delimiters and quotes as text qualifiers).

So…

Step 1:
In Excel, save the file as a CSV (in this case, file.csv).

Step 2:
In SSMS, under the Import wizard, choose “Flat File Source”.

Step 3:
Leave everything default, except enter a quote (") in the Text qualifier textbox.

Step 4:
In the Advanced section, select all the columns and make sure the OutputColumnWidth is big enough: larger than the longest value in any column. In this case, we’ll make it 150. Make sure the DataType is string [DT_STR].

Step 5:
Leave the rest of the settings as they are, choose the target database/table you’re importing into, and finish the import.

If you need to restore an MDF/LDF from one machine (source) onto another machine (target), copy the source’s MDF/LDF files to the target machine. Then start the target server in single-user mode. Then run SSMS as Administrator on the target machine and attach the files. If you don’t run it as Admin, you’ll get an access-denied error when attaching.
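If you’d rather script it than click through SSMS, a minimal attach sketch (database name and file paths are hypothetical) looks like this:

-- Attach the copied MDF/LDF files on the target machine.
CREATE DATABASE MyDb
ON (FILENAME = 'C:\Data\MyDb.mdf'),
   (FILENAME = 'C:\Data\MyDb_log.ldf')
FOR ATTACH;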

I’ve recently gotten a chance to try out Dapper, a micro ORM library by Sam Saffron that originally spawned from StackOverflow’s codebase. If you’re looking for an extremely lightweight library to access your SQL Server, I highly recommend it. It favors convention over configuration, does not have the overhead that Entity Framework or NHibernate have, and emphasizes speed and ease of use.

Dapper is a collection of extension methods that enhance the Connection object (or rather, extend the IDbConnection interface). It also embraces the use of inline SQL in your code, which is useful when retrieving sets that are less trivial. With its buddy, Dapper-Extensions by Thad Smith, it makes DB interaction even easier.

Both of these libraries can be downloaded from NuGet, and you’ll need them to run the following code.

There’s really just one thing that I find irritating about the Dapper project: the documentation. While there is documentation on its Google Code home page, I find it very lacking.

Anywhoot, to get started, first we need to create our table. Once we have that and add some data, we’ll use Dapper to extract the data and map it to objects.
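Something along these lines; a minimal sketch where the Customer table, its columns, and the connection string are assumptions for illustration:

CREATE TABLE Customer
(
    CustomerId INT IDENTITY(1,1) PRIMARY KEY,
    FirstName  VARCHAR(50) NULL,
    LastName   VARCHAR(50) NULL,
    GlobalId   UNIQUEIDENTIFIER NULL,
    IsAllowed  BIT NULL,
    CreatedOn  DATETIME NULL
);

INSERT INTO Customer (FirstName, LastName, GlobalId, IsAllowed, CreatedOn)
VALUES ('John', 'Doe', NEWID(), 1, GETDATE()),
       ('Jane', 'Smith', NEWID(), 0, GETDATE());

And the matching class and connection plumbing on the C# side:

using System;
using System.Data.SqlClient;
using Dapper;

public class Customer
{
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public Guid GlobalId { get; set; }
    public bool IsAllowed { get; set; }
    public DateTime CreatedOn { get; set; }
}

class Program
{
    static void Main()
    {
        // Connection string is an assumption; adjust for your environment.
        using (var conn = new SqlConnection(@"Data Source=.;Initial Catalog=Test;Integrated Security=True"))
        {
            conn.Open();
            // Dapper's extension methods hang off the open connection.
        } // using disposes (and closes) the connection
    }
}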

OK, so there’s nothing out of the ordinary here. We’re creating the class we want to map our data to, creating a Connection object, opening, closing, and then disposing (via using). Now let’s suck up some data within our Main() method:
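A sketch of the count query; it goes inside the using block above, and .Single() needs a using System.Linq; directive:

// Query<int> runs the SQL and maps the single scalar column to an int.
int count = conn.Query<int>("SELECT COUNT(*) FROM Customer").Single();
Console.WriteLine("Customer count: " + count);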

That should give us the count. To see how the data types are mapped, we can take a look at the Dapper source code, SqlMapper.cs, inside the static SqlMapper() constructor. For reference, take a peek at the documentation on SQL Server and CLR/.NET Framework data types to understand the mappings.

Mismatches and Nulls

In the query that we ran, we were making the assumption that all columns in the db match all the properties in the class. What if we have a mismatch? What if we have this?

IEnumerable<Customer> customers = conn.Query<Customer>("SELECT LastName FROM Customer");

Notice that we’re only retrieving the LastName, while we have other properties in our Customer class. Well, this is what you would see:
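A sketch of inspecting the first mapped object (property names from the Customer class assumed above):

// Everything not in the SELECT list is left at its .NET default.
Customer first = customers.First();
Console.WriteLine(first.LastName);   // "Doe" (mapped from the query)
Console.WriteLine(first.FirstName);  // null
Console.WriteLine(first.GlobalId);   // 00000000-0000-0000-0000-000000000000
Console.WriteLine(first.CustomerId); // 0
Console.WriteLine(first.IsAllowed);  // False
Console.WriteLine(first.CreatedOn);  // 1/1/0001 12:00:00 AM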

So from the above, we notice that Dapper will handle Nulls by:

Setting strings to null

Setting a Guid to 00000000-0000-0000-0000-000000000000

Setting an int to 0

Setting bool to false

Setting a DateTime to 1/1/0001 12:00:00 AM

All of which happens when Dapper cannot find a column value to map to a class property. Also, if your SQL references a column name that does NOT exist, it will throw a SqlException of “Invalid column name”.

Parameterization

What if we want to pass a parameter into our SQL? Let’s pass a parameter:

// The second @ token does not have to be named "@IsAllowed" and can be named anything,
// as long as the C# property in the anonymous object is named the same thing.
IEnumerable<Customer> customers = conn.Query<Customer>("GetCustomers @IsAllowed = @IsAllowed", new { IsAllowed = 33 });
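GetCustomers is a stored procedure here; a hypothetical definition that fits the call might be:

-- A made-up definition of GetCustomers for illustration.
CREATE PROCEDURE dbo.GetCustomers
    @IsAllowed BIT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerId, FirstName, LastName, GlobalId, IsAllowed, CreatedOn
    FROM dbo.Customer
    WHERE IsAllowed = @IsAllowed;
END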

First, have a look at the Microsoft whitepaper that gives you an overview of the process. Make sure you get the SQL Server 2008 Upgrade Advisor. It will analyze databases and check for incompatibilities, and it does not write any data to the database you’re analyzing. I highly recommend it if you’re making as big a jump as we did. Also, apply SP1 (currently the newest) and Windows updates.

The machine I’m working on is a 64-bit machine with 16 GB of RAM. Make sure you adjust the max memory taken up by SQL Server. By default, it will try to consume the max you have (the setting is set to roughly 2 petabytes). To prevent starving the OS of memory on a machine used just for SQL Server, allot 10-15% of memory to the OS; the rest will be for SQL Server. With 16 GB, I set it to 13 GB (13312 MB) of memory.
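You can set this from the server properties in SSMS, or with a couple of lines of T-SQL; a sketch for the 13 GB figure above:

-- 'max server memory' is specified in MB; 13312 MB = 13 GB.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 13312;
RECONFIGURE;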

Also, I urge you to change the compatibility level to SQL Server 2008 (100), unless of course the Upgrade Advisor throws a lot of issues and you’re on a tight deadline. Certain features have been deprecated and removed. For example, this will no longer work on 2008 (ORDER BY [table alias].[column alias]):
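A sketch of the pattern that breaks (table and aliases made up):

-- Legal under compatibility level 80; under 100 this fails with
-- "The multi-part identifier 'c.Name' could not be bound."
SELECT c.LastName AS Name
FROM dbo.Customer c
ORDER BY c.Name;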

To avoid problems with future upgrades, it’s best to switch the compatibility level to 2008.
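The switch itself is a one-liner (database name assumed):

-- 100 = SQL Server 2008; 80 = SQL Server 2000 behavior.
ALTER DATABASE MyDb SET COMPATIBILITY_LEVEL = 100;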

Also, as an obvious reminder, keep a backup, especially if you’re going to do a detach/attach migration. Once you attach the 2000 db files to 2008, you can’t reattach them to 2000. I prefer a traditional .bak restore.

I sifted through the data, only a few hundred rows, and I could not see anything wrong with it. I made the columns the right type and increased the column size to more than the largest number of characters in the fields. I searched Google and the Microsoft forums, trying a handful of solutions, and nothing worked.

So what did I do? I’d had enough, so I just opened the CSV file in Excel, re-saved it as an Excel 2007 workbook, re-ran the SSIS Import wizard pointing to the Excel file instead of the CSV, and voila. I’m still baffled, though, as to what exactly the problem was.

If you’re still stuck working with a SQL Server 2000 database, there’s no reason why you shouldn’t be using SSMS 2008 R2. There are lots of great add-ons for it, and it provides a rich set of features, aside from merging Query Analyzer and Enterprise Manager.

One thing to watch out for: if you script out your database objects while connected to SQL Server 2000, by default it will script out TSQL that is compatible only with SQL Server 2008. To switch this, there are two ways:

Sometimes while testing on your local db, one gets into the habit (I admit to having done this in the past) of renaming a table before replacing it, as a quick backup. The problem with this strategy is that renaming a table doesn’t rename its dependencies. For example, if you had a table formsClients, renamed it to formsClients_old, and then restored the formsClients table again (let’s say from an SSIS import), the dependencies will have followed the renamed table. If we inspect the dependencies of formsClients_old, we’ll see that, in this case, it’s a foreign key constraint:
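A sketch of the scenario, and of checking what still points at the renamed table (names from the example above):

-- The quick-backup rename:
EXEC sp_rename 'dbo.formsClients', 'formsClients_old';

-- Foreign keys still reference the renamed table, not the restored one:
SELECT fk.name AS ForeignKey,
       OBJECT_NAME(fk.parent_object_id) AS ReferencingTable,
       OBJECT_NAME(fk.referenced_object_id) AS ReferencedTable
FROM sys.foreign_keys fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.formsClients_old');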

By default, the C# driver’s serializer takes a null value and assigns a default, rather than just writing the word NULL in the collection when inserting.

So if you had an object that had 50 properties, only set one value in it, and passed it to MongoDB, it would insert all 50 properties, which would be wasteful (depending on how you look at it).

One can decorate a property with the [BsonIgnoreIfNull] attribute, and the driver will not write that element to the db if it’s null. However, if you don’t want to repeat the attribute on every property, one can set it once (and forget it) by creating a MongoDB “ConventionProfile” across the entire application. We have this setup in the Global.asax.cs so it gets created when the application is loaded:
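With the 1.x C# driver’s convention API, a sketch could look like this:

using MongoDB.Bson.Serialization;
using MongoDB.Bson.Serialization.Conventions;

// In Application_Start: skip null members for every serialized type.
var profile = new ConventionProfile();
profile.SetIgnoreIfNullConvention(new AlwaysIgnoreIfNullConvention());
BsonClassMap.RegisterConventions(profile, type => true);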

However, even with that, it appears it only does this for strings that are null. If you have a property that’s a Boolean, integer, or date, it will still write those elements to the database, setting Booleans to false, integers to 0, and dates to an ISO timestamp. It does this partly because these are value types, which can’t be null. To “make” them nullable, you use the question mark after the data type, like so:
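A sketch with hypothetical property names:

// Nullable value types (note the ?); combined with the IgnoreIfNull
// convention, these are skipped when unset.
public bool? IsActive { get; set; }
public int? LoginCount { get; set; }
public DateTime? LastSeen { get; set; }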

Some time ago, I published some tips on upgrading to 2008. Here are some additional notes to keep in mind.

For upgrading, my plan of attack in the past has been as follows:

1. Run the Upgrade Advisor on all the databases (this doesn’t require much effort: just download, install, point it at the db, and run)

2. On production, do a custom install and only install what you need. You can do a side-by-side install and have SQL Server 2008 installed while SQL Server 2000 exists on the same server.

3. Analyze the Advisor’s reports and identify the SQL objects that need to be refactored

4. Refactor TSQL as needed

5. Now, TSQL isn’t just in the DBs; it may also exist in the app code and other web files (.aspx, etc.). This is where it gets more challenging. Check whether the Upgrade Advisor can open .trc files. If so, run SQL Server Profiler while the site is being used and those queries are being run, then open that .trc file in the Advisor so it can check the validity of the captured queries.

If this doesn’t work, log the Profiler results to a table, put the TSQL from the Profiler output into a sproc, and point the Advisor at that sproc to check the syntax.

You can automate this task by using Selenium, or by writing a script using WGET, to fetch pages and look for HTTP 500 error codes.

6. Refactor TSQL as needed.

7. Test the websites against 2008. Ideally you want to start testing with the latest compatibility level. A lower compatibility level causes more overhead (since the engine has to support older behaviors) and can allow the use of deprecated features that MS will abandon in the next release.

8. Before pushing it live, make sure you back up all dbs, including the system dbs.

9. As soon as the switch is flipped, keep an eye on perfmon and the SSMS Activity Monitor, and have the appropriate DMVs handy to monitor with (a sketch follows this list).

10. Even if the application(s) did not throw errors while testing on dev, there’s a chance that once you go to production, they’ll start throwing errors due to compatibility (maybe the more obscure pages were not tested). Be ready to flip the compatibility mode back to 2000; this takes effect instantly.
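For step 9, a DMV sketch for watching what’s currently executing (the session_id filter skips system sessions):

-- Currently executing requests and their SQL text.
SELECT r.session_id, r.status, r.wait_type, r.cpu_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50;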