Introduction

DRY Programming

"DRY" programming can be nicely summed up as "Don't Repeat Yourself". Another, more precise, short description comes from The Pragmatic Programmer as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."

Here I'm going to "DRY up" some common code and share a simple, low cost way of making simple database interactions cleaner, easier, and less error prone.

Update - 16 Oct 2009

Astute readers noted that the solution in this article was similar to early versions of Microsoft's Database Access Application Block. Having never investigated it before, I checked it out. I was quite surprised to find that I had come to the same conclusions and solutions they provided. This was both good and bad news for me. I was happy to see that I had independently come to the same solutions as the "best practices" project provided. But I was disappointed to learn that my work was a duplication of effort.

For applications on the .NET 3.5 Framework and later, I recommend using the Patterns & Practices - Enterprise Library's Data Access Application Block. The solution in this article was created for an application stuck on the .NET 2.0 Framework. It is essentially a "lite" version of the Microsoft DAAB v2.

Problem Background - Constraints and Goals

It is worth mentioning that solutions rarely exist without constraints. This solution was designed to work in a C# Web Service or web application. While flexible, it assumes that the connection string is stored in either an App.config or Web.config. It also assumes that the connection string will define connection pooling so there is little cost to creating and closing database connections. This means when a connection is closed, it is not actually closed; it is returned to the .NET managed connection pool and is available for the next connection request.

I also want to mention that, generally, I prefer using a good ORM (Object-Relational Mapping) framework for interacting with my data. However, this code was created for an existing small project where introducing an ORM would be overkill.

With all that said, let's get started.

Standard - Boilerplate Database Code

You find code like the following everywhere. It is correct and works perfectly. The issue is that only about 4 or 5 lines of it actually deal with solving our business need; the rest is pure boilerplate. Take a look:
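The original listing did not survive editing, so here is a representative sketch of the kind of boilerplate being described. The query, the "MyDb" connection string key, and the cityName variable are hypothetical stand-ins:

```csharp
using System.Configuration;
using System.Data.SqlClient;

// Representative boilerplate: open a connection, run a parameterized query,
// read the results, and guarantee cleanup in a finally block.
SqlConnection connection = null;
SqlDataReader reader = null;
try
{
    connection = new SqlConnection(
        ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString);
    connection.Open();

    SqlCommand command = new SqlCommand(
        "select * from Customers where city = @City", connection);
    command.Parameters.AddWithValue("@City", cityName);

    reader = command.ExecuteReader();
    while (reader.Read())
    {
        // The 4 or 5 lines that actually solve the business need go here.
    }
}
finally
{
    // Miss either of these and you leak a pooled connection.
    if (reader != null) reader.Close();
    if (connection != null) connection.Close();
}
```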

If this were a one-time solution, it would be fine. However, when a project repeats this type of code many times, it becomes wasteful and a hazard. How is it a hazard? What happens if someone misses one of those Close() statements? What if they forget to check whether the variable is assigned? What if they don't use a try..finally block? The point is, all the "boilerplate code" is still important and must be done correctly to prevent problems.

What if we could remove most of the boilerplate code and still ensure our cleanup couldn't be forgotten or done incorrectly? What if it also meant we don't have to write so much code to solve a business problem?
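The article's "after" listing was also lost; based on the ExecuteReaderSelect example shown later in the article, it likely looked something like this (cityName is a hypothetical variable):

```csharp
using System.Data.SqlClient;

// The same work as the boilerplate version, reduced to the essential lines.
using (SqlDataReader reader = SqlClientHelper.ExecuteReaderSelect(
    "select * from Customers where city = @City",
    new CommandParams("@City", cityName)))
{
    while (reader.Read())
    {
        // The 4 or 5 lines that actually solve the business need go here.
    }
} // reader and its connection are closed and disposed here
```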

We've removed the boilerplate code and we're left with the important code that solves our business problem. All the important cleanup is still being done, only now it can't get messed up!

Note: CommandParams is a generic list of SqlParameter objects. We want it because it lets us cleanly separate the parameters from both the SqlConnection and SqlCommand objects. It also has some helpful overloaded constructors to make using it easier.
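The attached source was not reproduced here, but from the description above, a minimal CommandParams could look like the following sketch. It assumes the class subclasses List<SqlParameter>; the exact overloads in the attached file may differ:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of CommandParams: a list of SqlParameter objects with convenience
// constructors and an AddWithValue-style Add overload.
public class CommandParams : List<SqlParameter>
{
    public CommandParams() { }

    // Convenience constructor for the common single-parameter case, e.g.
    // new CommandParams("@City", cityName).
    public CommandParams(string name, object value)
    {
        Add(name, value);
    }

    // Mirrors SqlParameterCollection.AddWithValue for cleaner call sites.
    public void Add(string name, object value)
    {
        Add(new SqlParameter(name, value));
    }
}
```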

Stored Procedures

Calling a Stored Procedure is equally simple:

CommandParams paramValues = new CommandParams();
paramValues.Add("@CityName", city);
paramValues.Add("@State", state);
using (SqlDataReader reader = SqlClientHelper.ExecuteReader(
    "My_Stored_Procedure", paramValues, CommandType.StoredProcedure))
{
    // if there is anything to read...
    if (reader.Read())
    {
        // Take some action on the stored procedure results
    }
} // connection will close and dispose here

In the Stored Procedure example, we call ExecuteReader directly and provide the CommandType. Also, in this example, we aren't expecting a set of rows, just one row. So, we can take some action if the Read() succeeds.

Where the Real Work Happens

All the real work, and the main savings, happen in the ExecuteReader method:
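The original listing is missing here, so the following is a hedged reconstruction based on the call sites and the CommandBehavior.CloseConnection discussion below. GetConnectionString() is the helper the article mentions later; the exact error handling in the attached file may differ:

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// Sketch of the helper's core method: it owns the boilerplate of creating,
// opening, and (on failure) closing the connection.
public static SqlDataReader ExecuteReader(
    string commandText, CommandParams paramValues, CommandType commandType)
{
    SqlConnection connection = new SqlConnection(GetConnectionString());
    try
    {
        connection.Open();
        using (SqlCommand command = new SqlCommand(commandText, connection))
        {
            command.CommandType = commandType;
            if (paramValues != null)
            {
                command.Parameters.AddRange(paramValues.ToArray());
            }
            // CloseConnection: when the returned reader is closed, the
            // connection is closed (returned to the pool) with it.
            return command.ExecuteReader(CommandBehavior.CloseConnection);
        }
    }
    catch
    {
        // If anything fails before the reader takes ownership, make sure
        // the connection still goes back to the pool.
        connection.Close();
        throw;
    }
}
```

The ExecuteReaderSelect method used in the plain-SQL examples would presumably just forward to this with CommandType.Text.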

Do you recognize a lot of the boilerplate code? One other special note is the CommandBehavior.CloseConnection usage. This means when our returned reader is closed, it will automatically close the associated database connection. That alone is handy. Now, when coupled with a using() block, it happens automatically for us!

using (SqlDataReader reader = SqlClientHelper.ExecuteReaderSelect(
    "select * from Customers where city = @City",
    new CommandParams("@City", cityName)))
{
    // some work
} // returned reader is cleaned up.

When execution reaches the closing brace of the using block, the reader's Dispose method is automatically called. This will close both the reader's connection and the associated SqlConnection too because of the CommandBehavior. We just made it simple for ourselves, and others who will maintain our code, to "do the right thing". This also means we just made it much harder to make a mistake with the code.

You may also have noticed the GetConnectionString() call used when creating the SqlConnection object. This was created assuming the connection string could be pulled from either an App.config or Web.config file. If you have other needs, it can be set explicitly using the AssignConnectionString() method.
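A minimal sketch of that pair of methods, assuming a cached static field and a hypothetical "Default" connection string key (the key name in the attached file may differ):

```csharp
using System.Configuration;

// Sketch of connection string handling: load once from config, cache it,
// and allow an explicit override.
private static string _connectionString;

public static void AssignConnectionString(string connectionString)
{
    _connectionString = connectionString;
}

private static string GetConnectionString()
{
    if (string.IsNullOrEmpty(_connectionString))
    {
        // ConfigurationManager reads connection strings from either an
        // App.config or a Web.config transparently.
        _connectionString = ConfigurationManager
            .ConnectionStrings["Default"].ConnectionString;
    }
    return _connectionString;
}
```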

Closing Thoughts

The code in the article and the attached file helped me DRY up common database code. It made it easier to always do the right thing and harder to mess things up. That benefit extends to the junior developers who come after me and work on my code.

Attached Source File

The attached source file is designed to be easily portable and adaptable to other projects and other developers' needs. Feel free to modify it as you see fit.

Also note that there are other database helping routines included in the source file. Check them out and see if they might help you.

Thanks for asking how to do what is outlined in the article with an OleDbConnection and an associated OleDbDataReader object. It would be trivial to adapt the class to use OleDb objects throughout if a project required it.

I have implemented the same kind of class too. The "while (reader.Read())" loop is also something that recurs constantly, so I took it just one step further by using a Func<SqlDataReader, T> that is executed in the while loop. It could be an Action<SqlDataReader> too, but in my case the Func works best.
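The commenter's idea can be sketched as a generic wrapper around the article's ExecuteReader. The method name ExecuteList and the row-mapping delegate are illustrative, not from the original code:

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

// Sketch: move the while(reader.Read()) loop into the helper and let the
// caller supply a Func that maps each row to a result object.
public static List<T> ExecuteList<T>(
    string commandText, CommandParams paramValues, CommandType commandType,
    Func<SqlDataReader, T> map)
{
    List<T> results = new List<T>();
    using (SqlDataReader reader =
        SqlClientHelper.ExecuteReader(commandText, paramValues, commandType))
    {
        while (reader.Read())
        {
            results.Add(map(reader));
        }
    }
    return results;
}
```

A call site then shrinks to a single expression, e.g. mapping the first column of each row: ExecuteList("select Name from Customers", null, CommandType.Text, r => r.GetString(0)).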

Thanks for your question. No, this was setup for simple interactions. If you would like something more advanced, I recommend you check out the Microsoft Database Access Application Block. Check out the "Update" section towards the top of the article for links.

This looks similar in functionality to the "SQLHelper" class provided in the first couple of generations of the Microsoft Data Access Application Block. Ever heard of that? It would be interesting if you could provide a comparison of your class and the MS "boilerplate replacement" class.

Thank you for your comment. I have heard of the Microsoft Data Access Application Block but hadn't taken the time to investigate it until now. Thanks for prompting me.

After looking at the SQLHelper class, I certainly do see some similarities. As a short comparison: my class is simpler and doesn't take on as much as the Microsoft one, but it is also more opinionated and does more for you, such as automatically loading the connection string, caching it so it doesn't need to be provided again, and assuming more about your DB connection management so it can clean up for you.

The two solutions have more in common than they are dissimilar. The most direct comparison is to the ExecuteReader functions. The DAAB provides more overloaded options and the differences seem few at that point. Mostly syntax and convenience things it would seem.

Yes, it would seem that my article and the Microsoft Data Access Application Block approach similar problems similarly. Thanks for kindly pointing that out to me. I appreciate you taking the time to point me to the DAAB.

I would tend to agree with you: I have been using a solution similar to yours for a number of years in cases where an ORM would add an unnecessary layer of complexity given the scope of the solution. I also found that, as good as it was, the Microsoft Data Access Application Block was a little too 'fat' for what it actually delivers.

I would also say that, for smaller solutions, I now use Linq-to-Sql and have abandoned what was a very practical and light-weight method of interacting with data.

To take this even further, you should consider abstracting it into an ORM solution such as NHibernate, Entity Framework, etc. This would cause far fewer problems than straight SQL execution.

Thanks for the comment. I agree that using an ORM would be preferred. I believe I may have even mentioned that in the article.

I wrote this for a situation where introducing an ORM to an existing project would be overkill. Almost all of the business logic was being performed in stored procedures. My goal was to make those stored procedure calls as painless and safe as possible. I thought this solution might be worth sharing as others may find themselves in a similar situation.

But I completely agree that an ORM is a better place for business logic and database abstraction when given the opportunity.