Deprecation Note

The code presented in this article is deprecated and no longer maintained. Use the new version of this library instead. The new library is not compatible with the code presented in this article: it is a major rewrite of the whole thing, and should be much friendlier and easier to use. It is available here.

Preface

This version of the code contains many breaking changes and is not backward compatible with previous versions. The article text has been updated to reflect the changes, and the code snippets provided here will work only with the latest version of Light.

Introduction

This article is about a small and simple ORM library. There are many good ORM solutions out there, so why did I decide to write another one? Well, the main reason is simple: I like to know exactly which code runs in my applications and what is going on inside it. Moreover, if I get an exception, I'd like to be able to pinpoint the location in the code where it could have originated without turning on the debugger. Other obvious reasons include wanting to know how to write one of these and not wanting to code simple CRUD ADO.NET commands for every domain object.

Purpose and Goal

The purpose of this library is to allow client code (user) to run basic database commands for domain objects. The assumption is that an object would represent a record in the database table. I think it is safe to say that most of us who write object-oriented code that deals with the database have these objects in some shape or form. So the goal was to create a small library that would allow me to reuse those objects and not constrain me to any inheritance or interface implementations.

Also, I wanted to remain in control: I definitely did not want something to be generating the tables or classes for me. By the same token, I wanted to stay away from XML files for mapping information because this adds another place to maintain the code. I understand that it adds flexibility, but in my case, it is not required.

Design

One of the things I wanted to accomplish was to leave the user in control of the database connection. The connection is the only resource that the user has to provide for this library to work. This ORM library (Light) allows users to run simple INSERT, UPDATE, DELETE, and SELECT statements against a provided database connection. It does not even attempt to manage foreign keys or operate on multiple related objects at the same time. Instead, Light provides the so-called triggers (see the section about triggers below) that allow you to achieve similar results. So, the scope of the library is: single table/view maps to a single object type.

Using the Code

Light uses attributes and Reflection to figure out which statements it needs to execute to get the job done. There are two very straightforward attributes that are used to describe a table that an object maps to:

TableAttribute - This attribute can be used on a class, interface, or struct. It defines the name of the table and the schema to which objects of this type map. It also lets you specify the name of a database sequence that provides auto-generated numbers for this table (of course, the target database has to support sequences).

ColumnAttribute - This attribute can be used on a property or a field. It defines the column name, its database data type, size (optional for non-string types), and other settings such as precision and scale for decimal numbers.

There are two more attributes that aid with inheritance and interface implementation:

TableRefAttribute - This attribute can be used on a class, interface, or struct. It is useful if you need to delegate table definition to another type.

MappingAttribute - This attribute can be used on a class, interface, or struct. It extends the ColumnAttribute (therefore inheriting all its properties), and adds a property for a member name. This attribute should be used to map inherited members to columns. More on this later, in the code example.

There is another attribute that helps with such things as object validation and management of related objects:

TriggerAttribute - This attribute can only be used on methods with a certain signature. In short, it marks a method as a trigger. These trigger methods are executed either before or after one of the four CRUD operations. More on this later, in the code example.

The most useful class of the Light library is the Dao class. Dao here stands for Data Access Object. Instances of this class provide methods to perform inserts, updates, deletes, and selects of given objects, assuming that objects have been properly decorated with attributes. If a given object is not properly decorated or is null, an exception will be thrown.

A word about exceptions is in order. There are a couple of exceptions that can be thrown by Light. The most important one is System.Data.Common.DbException, which is thrown if there was a database error while executing a database statement. If your underlying database is SQL Server, then it is safe to cast the caught DbException to SqlException. The other exceptions are: DeclarationException, which is thrown if a class is not properly decorated with attributes; TriggerException, which is thrown if a trigger method threw an exception; and LightException, which is used for general errors and to wrap any other exceptions that may occur.

Please note that both DeclarationException and TriggerException are subclasses of LightException, so the catch statement catch(LightException e) will catch all three exception types. If you want to specifically handle a DeclarationException or a TriggerException, their catch statements must come before the catch statement that catches the LightException. Here is an example:
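The example is missing from the original text; here is a minimal sketch of that catch ordering. The exception types are Light's own, while the surrounding dao.Insert call is illustrative:

```csharp
try
{
    dao.Insert(person);
}
catch (DeclarationException e)
{
    // Attribute decoration problem - must come before LightException.
    Console.WriteLine("Bad mapping: " + e.Message);
}
catch (TriggerException e)
{
    // A trigger method threw - must also come before LightException.
    Console.WriteLine("Trigger failed: " + e.Message);
}
catch (LightException e)
{
    // Catches any remaining Light errors, including wrapped exceptions.
    Console.WriteLine("Light error: " + e.Message);
}
```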

You cannot create an instance of a Dao class directly using its constructor, because Dao is an abstract class. Instead, you should create instances of Dao subclasses targeted for your database. So far, without any modifications, Light can work with SQL Server (SqlServerDao) and SQLite .NET provider (SQLiteDao) databases. If you need to target another database engine or would like to override the default implementations for SQL Server or SQLite, all you have to do is create a class that extends the Dao class and implement all its abstract methods.

All operations (except Select) are performed within an implicit transaction unless an explicit one already exists and was started by the same Dao instance. In that case, the existing transaction is used. The user must either commit or roll back an explicit transaction. If the Dispose method is called on the Dao object while it is in the middle of a transaction, the transaction will be rolled back. An explicit transaction is one started by the user by calling the Dao.Begin method. Implicit transactions are handled by Dao objects internally, and are automatically committed upon successful execution of a command, or rolled back if an exception was thrown during command execution.
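To make the explicit-transaction rules concrete, here is a short sketch. Dao.Begin is named in the text above; the Commit and Rollback method names are assumptions made for illustration:

```csharp
dao.Begin(); // start an explicit transaction on this Dao instance
try
{
    dao.Insert(mother);   // runs inside the explicit transaction
    dao.Update(father);   // same transaction - no implicit one is started
    dao.Commit();         // assumed name; the user must end the transaction
}
catch
{
    dao.Rollback();       // assumed name; undo both operations
    throw;
}
```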

Note that for all of this to work, the Dao object must be associated with an open database connection. This can be done via the Dao.Connection property. SqlServerDao and SQLiteDao also provide constructors that accept a connection as a parameter. Remember that it is your responsibility to manage database connections used by Light. This means that you are responsible for opening and closing all database connections. A connection must be open before calling any methods of the Dao object. The Dao object will never call the Open or Close methods on any connection, not even if an exception occurs. Here is some sample code to demonstrate the concept. Let's assume that we will be connecting to a SQL Server database that has the following table defined:
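The table definition is missing from the original text; the following T-SQL is a reconstruction inferred from the column attributes used in the code below:

```sql
CREATE TABLE dbo.person
(
    id   INT IDENTITY(1, 1) NOT NULL PRIMARY KEY, -- maps to Id (auto-increment)
    name VARCHAR(30) NULL,                        -- maps to Name (AnsiString, 30)
    dob  DATETIME NULL                            -- maps to Dob
);
```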

Now, let's write some code. Note that this code has not been tested to compile; please use the demo project as a working sample:

using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections;
using System.Collections.Generic;
using Light; // Light library namespace - this is all you need to use it.

// Defines a mapping of this interface type to the dbo.person table.
[Table("person", "dbo")]
public interface IPerson
{
    [Column("id", DbType.Int32, PrimaryKey = true, AutoIncrement = true)]
    int Id { get; set; }

    [Column("name", DbType.AnsiString, 30)]
    string Name { get; set; }

    [Column("dob", DbType.DateTime)]
    DateTime Dob { get; set; }
}

// Says that when operating on type Mother the table definition from
// type IPerson should be used.
[TableRef(typeof(IPerson))]
public class Mother : IPerson
{
    private int id;
    private string name;
    private DateTime dob;

    public Mother() {}

    public Mother(int id, string name, DateTime dob)
    {
        this.id = id;
        this.name = name;
        this.dob = dob;
    }

    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public DateTime Dob
    {
        get { return dob; }
        set { dob = value; }
    }
}

// Notice that this class is identical to Mother but does not
// implement the IPerson interface, so it has to define its
// own mapping.
[Table("person", "dbo")]
public class Father
{
    private int id;
    private string name;
    private DateTime dob;

    public Father() {}

    public Father(int id, string name, DateTime dob)
    {
        this.id = id;
        this.name = name;
        this.dob = dob;
    }

    [Column("id", DbType.Int32, PrimaryKey = true, AutoIncrement = true)]
    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    [Column("name", DbType.AnsiString, 30)]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    [Column("dob", DbType.DateTime)]
    public DateTime Dob
    {
        get { return dob; }
        set { dob = value; }
    }
}

// Same thing but using a struct.
[Table("person", "dbo")]
public struct Son
{
    [Column("id", DbType.Int32, PrimaryKey = true, AutoIncrement = true)]
    public int Id;

    [Column("name", DbType.AnsiString, 30)]
    public string Name;

    [Column("dob", DbType.DateTime)]
    public DateTime Dob;
}

// Delegating with a struct.
[TableRef(typeof(IPerson))]
public struct Daughter : IPerson
{
    private int id;
    private string name;
    private DateTime dob;

    public int Id
    {
        get { return id; }
        set { id = value; }
    }

    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    public DateTime Dob
    {
        get { return dob; }
        set { dob = value; }
    }
}

// Main.
public class Program
{
    public static void Main(string[] args)
    {
        string s = "Server=.;Database=test;Uid=sa;Pwd=";

        // We use a SqlConnection, but any IDbConnection should do the trick
        // as long as you are using the correct Dao implementation to
        // generate SQL statements.
        SqlConnection cn = new SqlConnection(s);

        // Here is the Data Access Object.
        Dao dao = new SqlServerDao(cn);
        // This would also work:
        // Dao dao = new SqlServerDao();
        // dao.Connection = cn;

        try
        {
            // The connection must be opened before using the Dao object.
            cn.Open();

            Mother mother = new Mother(0, "Jane", DateTime.Today);
            int x = dao.Insert(mother);
            Console.WriteLine("Records affected: " + x.ToString());
            Console.WriteLine("Mother ID: " + mother.Id.ToString());

            Father father = new Father(0, "John", DateTime.Today);
            x = dao.Insert(father);
            Console.WriteLine("Father ID: " + father.Id.ToString());

            // We can also force father to be treated as
            // another type by the Dao.
            // This is not limited to Insert, but the object and type
            // MUST be compatible.
            dao.Insert<IPerson>(father);

            // This will also work.
            dao.Insert(typeof(IPerson), father);

            // We now have 3 fathers. Let's get rid of the last one.
            // The 'father' variable has the last Father inserted because
            // its Id was set to the last inserted identity.
            x = dao.Delete(father);

            // Now we have 2 fathers. Let's get them from the database.
            IList<Father> fathers = dao.Select<Father>();
            Console.WriteLine(fathers.Count);

            // NOTICE: The Dao.Select and Dao.Find methods instantiate
            // objects internally, so you cannot use an interface type
            // as the type of objects to return. In other words,
            // the runtime must be able to create an instance of a given
            // type using reflection (the Activator.CreateInstance method).
            // The safest approach you can take is to make
            // sure that every entity type has a default constructor
            // (it could be private).
            Son son = new Son();
            son.Name = "Jimmy";
            son.Dob = DateTime.Today;
            dao.Insert(son);

            // Daughter is a struct, so it cannot be null.
            // If a record with the given id is not found and the type is
            // a struct, then an empty struct of the given type is returned.
            // This, obviously, only works for the generic version
            // of the Find method. The other version returns an object,
            // so null will be returned.
            // The following is usually not a good idea,
            // but the types are compatible by table definitions.
            Daughter daughter = dao.Find<Daughter>(son.Id);
            Console.WriteLine(daughter.Name); // should print "Jimmy"
            daughter.Name = "Mary";
            dao.Update(daughter);

            // Refresh the son.
            // Generics not used, so the return type is object,
            // which could be null if not found.
            object obj = dao.Find(typeof(Son), son.Id);
            if (obj != null)
            {
                son = (Son) obj;
                Console.WriteLine(son.Name); // should print "Mary"
            }
        }
        catch (LightException e)
        {
            Console.WriteLine(e.Message);
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
        finally
        {
            dao.Dispose();
            try { cn.Close(); }
            catch {}
        }
    }
}

Delegating table definition to another type was fairly easy, in my opinion. You simply apply TableRefAttribute to a type. This feature was geared towards being able to use patterns similar to the Strategy pattern. You can define an interface or an abstract class with all the required data elements. You can also have implementing classes delegate their table definition to this interface or abstract class, but have their business logic in methods differ. Here is some code that shows the use of MappingAttribute, which should help with inheritance. Assume that we are using the same connection and that the same dbo.person table exists in the database.
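The MappingAttribute example code is missing from the original text. The following is a reconstruction based on the surrounding discussion: the base class Person, its member names, and the exact MappingAttribute constructor signature are all assumptions inferred from the text, not the confirmed Light API:

```csharp
public abstract class Person
{
    protected int personId;  // inherited field

    protected int PersonId   // inherited property over the same field
    {
        get { return personId; }
        set { personId = value; }
    }
}

// "PersonId" starts with a capital letter, hinting Light to
// search through properties first.
[Table("person", "dbo")]
[Mapping("id", DbType.Int32, "PersonId", PrimaryKey = true, AutoIncrement = true)]
public class Father : Person
{
    [Column("name", DbType.AnsiString, 30)]
    public string Name;
}

// "personId" starts with a lower-case letter, hinting Light to
// search through fields first.
[Table("person", "dbo")]
[Mapping("id", DbType.Int32, "personId", PrimaryKey = true, AutoIncrement = true)]
public class Mother : Person
{
    [Column("name", DbType.AnsiString, 30)]
    public string Name;
}
```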

MappingAttribute allows you to map an inherited member to a column. It doesn't really have to be an inherited member; you can also use variables and properties defined in the same type. However, I like to see meta information along with the actual information, that is, attributes applied to class members. This makes it easier to change the attribute if you are changing class members, for example, the data type.

Notice that Father uses the inherited property, while Mother uses the inherited field. Also, notice the case of the member name parameter in MappingAttribute. Father starts the string PersonId with a capital letter, which hints to Light that it should search through properties first. If a property with that name is not found, the fields are searched. If a field with that name is not found either, an exception is thrown. Similarly, Mother uses personId, starting with a lower-case letter, so fields are searched first. Admittedly, the order in which members are searched does not buy you much and is not a huge performance gain, but I always wanted to implement something that could "take a hint" and actually use it.

Querying

Light provides a way to query the database. This comes in handy if you don't want Light to load all objects of any given type and then filter them yourself. I don't think you ever want to do that. The Light.Query object allows you to specify a custom WHERE clause so that the operation is performed only on a subset of records. This object can be used with the Dao.Select and Dao.Delete methods. When used with the Dao.Delete method, the WHERE clause of the Light.Query object will be used to limit the records that will be deleted.

The concept is identical to using a WHERE clause in a SQL DELETE statement. Using the Light.Query object with the Dao.Select method allows you to specify records that will be returned as objects of a given type. In addition, the Dao.Select method takes into account the ORDER BY clause (the Dao.Delete method ignores it), which can also be specified in the Light.Query object. Again, the concept is identical to using WHERE and ORDER BY in SQL SELECT statements.

The Light.Query object is a very simple object, and it does not parse the WHERE and ORDER BY statements you give it. This means two things. First, you must use the real names of table columns as they are defined in the database; you cannot use the name of a class property to query the database. Second, you must supply valid SQL for both the WHERE and ORDER BY clauses. If you use a plain (not parameterized) WHERE clause, then it is also your responsibility to protect yourself from SQL injection attacks. This is not a problem when using parameterized statements.

Parameterized SQL statements are the recommended way of querying the database. They allow the database to cache the execution plan for later reuse, which means the database does not have to parse your SQL statements each time they are executed - and that definitely helps performance. The Light.Query object allows you to create a parameterized WHERE clause. To achieve this, you simply use parameter syntax as you would when writing a Stored Procedure, and then set the values of those parameters by name or order. The following example should make this clear (code not tested).

// We will use the Son struct defined previously in the article.
// Assume we have a number of records in the dbo.person
// table to which Son maps.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Collections.Generic;
using Light;

IDbConnection cn = new SqlConnection(connectionString);
Dao dao = new SqlServerDao(cn);
cn.Open();

// This will return all Sons born in the last year,
// sorted from youngest to oldest.
// We will use the chaining abilities of the Query and Parameter objects.
IList<Son> bornLastYear = dao.Select<Son>(
    new Query("dob BETWEEN @a AND @b", "dob DESC")
        .Add(
            new Parameter()
                .SetName("@a")
                .SetDbType(DbType.DateTime)
                .SetValue(DateTime.Today.AddYears(-1)),
            new Parameter()
                .SetName("@b")
                .SetDbType(DbType.DateTime)
                .SetValue(DateTime.Today)
        )
);

// This will return all Sons named John - non-parameterized version.
IList<Son> johnsNoParam = dao.Select<Son>(new Query("name='John'"));

// This will do the same thing, but using parameters.
IList<Son> johnsParam = dao.Select<Son>(
    new Query("name=@name", "dob ASC").Add(
        new Parameter("@name", DbType.AnsiString, 30, "John")
    ));

// This will return all Sons whose name starts with the letter J.
Query query = new Query("name like @name").Add(
    new Parameter("@name", DbType.AnsiString, 30, "J%")
);
IList<Son> startsWithJ = dao.Select<Son>(query);

// We can use the same, previously defined, queries to delete records.
// This will delete all Sons whose name starts with the letter J.
int affectedRecords = dao.Delete<Son>(query);

dao.Dispose();
cn.Close();
cn.Dispose();

The creation of the Query and Parameter objects (in the first query) may look a bit awkward. Both the Query and Parameter classes follow the Builder pattern that allows for such code. Classes that implement the Builder pattern contain methods that, after performing required actions, return a reference to the object on which the method was called. This allows you to chain method calls on the same object. The Query and Parameter classes also have regular properties that you can set in a well-known manner. Both approaches work equally well. I just thought it would be easier to use these classes with such methods and the code would be more compact.

Default Table and Column Names

You can omit the name of the table in TableAttribute and the name of a column in ColumnAttribute. Light will provide default names for tables and columns based on the class and field names to which the attributes are applied. The rules for figuring out the default name are very simple. In fact, there are no rules: if the name is not provided in the attribute, the name of the class or field is used as is. It is best to see an example:
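The example is missing from the original text; here is a minimal sketch. The attribute overloads that omit the name parameter are assumed to exist (per the text above); their exact form may differ:

```csharp
// No table name given: the table name defaults to the class name,
// so this maps to dbo.Person. (The Schema-only overload is assumed.)
[Table(Schema = "dbo")]
public class Person
{
    // No column name given: the column name defaults to the field name "id".
    [Column(DbType.Int32, PrimaryKey = true, AutoIncrement = true)]
    public int id;

    // Column name defaults to "name".
    [Column(DbType.AnsiString, 30)]
    public string name;
}
```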

Triggers

The concept of triggers comes from the database. A database trigger is a piece of code that is executed when a certain action occurs on a table on which the trigger is defined. Light uses triggers in a similar fashion. Triggers are methods marked with Light.TriggerAttribute, have a void return type, and take a single parameter of type Light.TriggerContext. TriggerAttribute allows you to specify when the method is going to be called by the Dao object. The same method can be marked to be called for more than one action. To do this, simply use the bitwise OR operator on the Light.Actions passed to TriggerAttribute.

Trigger methods can be called before and/or after insert, update, and delete operations. However, they can only be called after a select operation (denoted by Actions.AfterConstruct) because, before the select operation, there are simply no objects to call triggers on: they are created inside the Dao.Select or Dao.Find methods.

So, the point here is that triggers are only called on existing objects. Hence, another caveat. When calling Dao.Delete and passing it a Query object, no triggers will be called on objects representing the records to be deleted simply because there are no objects for Light to work with. Internally, Light will not instantiate an instance just so it can call its triggers. If such behavior is required, you should first Dao.Select objects that are to be deleted and then pass them to the Dao.Delete method.
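A short sketch of that select-then-delete pattern, reusing the Query API shown earlier in the article:

```csharp
// Select the objects first so that delete triggers can fire on them,
// then delete them one by one instead of passing the Query to Delete.
Query doomed = new Query("name like @n").Add(
    new Parameter("@n", DbType.AnsiString, 30, "J%"));
foreach (Son son in dao.Select<Son>(doomed))
{
    dao.Delete(son); // before/after delete triggers run for each object
}
```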

Here is some code demonstrating the use of triggers. The code has not been tested to compile or run. Assume we have the following table in our SQL Server database:
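The table definition and trigger code are missing from the original text; the following sketch reconstructs the idea. The dbo.child table, the Actions member names other than AfterConstruct, and the TriggerContext usage are assumptions:

```csharp
// Assumed backing table:
//   CREATE TABLE dbo.child (
//       id        INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
//       name      VARCHAR(30) NULL,
//       parent_id INT NULL
//   );
[Table("child", "dbo")]
public class Child
{
    [Column("id", DbType.Int32, PrimaryKey = true, AutoIncrement = true)]
    public int Id;

    [Column("name", DbType.AnsiString, 30)]
    public string Name;

    [Column("parent_id", DbType.Int32)]
    public int ParentId;

    // Validation trigger: the same method is registered for two
    // actions using the bitwise OR operator (member names assumed).
    [Trigger(Actions.BeforeInsert | Actions.BeforeUpdate)]
    private void Validate(TriggerContext context)
    {
        if (Name == null || Name.Length == 0)
            throw new ApplicationException("Child must have a name.");
    }

    // Runs after the object is constructed by Dao.Select or Dao.Find -
    // a natural place to load dependent objects.
    [Trigger(Actions.AfterConstruct)]
    private void AfterLoad(TriggerContext context)
    {
        // Load related objects here, but beware of circular loading.
    }
}
```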

Be careful not to create triggers that load objects in a circle. For example, say we add a trigger to the Child class that loads its parent object on Actions.AfterConstruct. This trigger would load the parent, which would start loading its children, which in turn would start loading the parent again, and so on and so forth, until you run out of memory and your program crashes.

So, in a one-to-many relationship or cases where one object fully depends on another, triggers are very helpful. However, they will rarely be able to handle many-to-many relationships unless your code is disciplined enough to only access related objects from one side all the time. Of course, triggers don't solve all the issues of related objects, but in some cases, they might help.

Stored Procedures

Light allows you to call Stored Procedures to select objects. This is useful to call procedures that perform searches based on multiple tables. Alternatively, you can create a view to deal with this, but in most cases, it is easier to deal with a Stored Procedure. However, an even better use for it is to bypass an intermediate table in a many-to-many relationship defined in the database. An example should make this clear.
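The example is missing from the original text. The following sketch shows the general idea of selecting objects through a procedure that bypasses an intermediate many-to-many table; the procedure, the Course type, and the Dao method shape used to invoke it are assumptions, not the confirmed Light API:

```csharp
// Assumed procedure in the database:
//   CREATE PROCEDURE dbo.get_courses_for_student @student_id INT AS
//   SELECT c.* FROM dbo.course c
//   JOIN dbo.student_course sc ON sc.course_id = c.id
//   WHERE sc.student_id = @student_id;

// Hypothetical call shape - consult the Light source for the real API.
// Each returned row is materialized as a Course object, exactly as if
// it had been selected from the table Course maps to.
IList<Course> courses = dao.Select<Course>(
    "dbo.get_courses_for_student",
    new Parameter("@student_id", DbType.Int32, studentId));
```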

Performance

Light is a wrapper around ADO.NET, so it is slower than ADO.NET by definition. On top of that, Light uses Reflection to generate table models and create objects to be returned from the Dao.Select and Dao.Find methods. That is also slower than the creation of objects using the new operator. However, Light does attempt to compensate for these slowdowns.

Light generates only parameterized SQL statements. Every command that runs is prepared in the database (IDbCommand.Prepare is called before a command is executed). This forces the database to generate an execution plan for the command and cache it. Later calls to the same type of command (INSERT, SELECT, etc.) with the same type of object should be able to reuse the previously created execution plan from the database, unless the database removed it from its cache.

Light has a caching mechanism for generated table models, so it doesn't have to use Reflection to search through a type of any given object every time. By default, it stores up to 50 table models, but this number is configurable (see Dao.CacheSize). Light uses the Least Recently Used algorithm to choose table models to be evicted from the cache when it becomes full.

Conclusion

The demo project provided is not really a demo project; it is just a bunch of NUnit tests that I ran against a SQL Server 2005 database. So, if you want to run it, you will need to reference (or re-reference) the NUnit DLL on your system. You will also need to compile the source code and reference it from the demo project. No binaries are provided in the downloads, only source code. You don't need Visual Studio to use these projects; you can use the freely available SharpDevelop IDE (which was used to develop Light) or the good old command line.

Also included is an extension project by Jordan Marr. His code adds support for concurrency, and introduces a useful business framework structure. It keeps track of object properties that were changed, and only updates objects if anything was changed. This reduces the load on the database. The business framework also allows you to add validation rules to your objects.

The code is fully commented, so you may find some more useful information there. I hope this was, is, or will be useful to somebody in some way...

Credits

Many thanks to Jordan Marr for his contribution, feedback, ideas, and the extension project.

History

2007-10-18: First submission of this article (code version 2.0.0.0).

2007-11-01: Added the "Querying" article section; new version of the code (version 2.1.0.0).

Bug fixes and improvements:

Fixed a bug with private properties; now, mapping a private property to a column works as expected.

This library is really meant for those who do not have access to LINQ to SQL or EF, such as those who are still working with .NET 2.0. Until very recently, I was working only with .NET 2.0 and simply wanted an easier way of doing simple database operations.

Another reason is that I don't like to maintain XML mapping files or regenerate .NET objects from database schema every time it changes. With this code, I just add a property.

Your changes look good and may be useful in many situations. I do, however, want to make some remarks about them.

1) You have an old version of the code. There is a newer, completely rewritten, version that makes Light much easier to use and extend. I have been using the new code for some time now, but just don't have the time to update the article on CodeProject (I hope I can get to it soon).

2) Read-only columns: You should be able to create a read-only column using the following code:
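The code is missing from the original text. Based on the surrounding discussion (map a private property, expose a public getter-only property), the idea was presumably something like this sketch; the Order class and member names are hypothetical, taken from the OrderTime property mentioned below:

```csharp
public class Order
{
    private DateTime orderTime;

    // Private property mapped to the column; Light supports mapping
    // private properties (see the fix noted in the History section).
    [Column("order_time", DbType.DateTime)]
    private DateTime OrderTimeValue
    {
        get { return orderTime; }
        set { orderTime = value; }
    }

    // Public read-only view exposed to client code.
    public DateTime OrderTime
    {
        get { return orderTime; }
    }
}
```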

In your example, the OrderTime property is not really read-only in your class. It has both get and set methods.

Properties that have no getter method are not used in the Insert or Update methods because Light cannot get a valid value from the object to be persisted. Similarly, properties that have no setter method are not used in the Select method because values returned by the database cannot be assigned to anything.

Considering that Light is not meant to be a full-featured ORM, read-only columns are not very common. So I left them out, thinking that it's not too much trouble to add a private property to your class to make a read-only column, as in the example above.

3) ExecuteReader and ExecuteCommand methods: There are a couple of reasons why I would not add these methods.

a) Light was not meant to be a replacement for ADO.NET. If you are creating a Command object in your code, then why not create the ADO.NET IDbCommand object and use it directly? These methods don't return your domain objects, so why add the overhead of calling Light methods? Plus, using the IDbCommand directly should be even faster than getting an IDataReader from the Dao object.

b) I have seen a lot of bad code that did not close data readers and relied on the .NET Framework to close them when the connection was closed. This led to many, many problems and weird coding patterns (I had to fix a lot of such code). Therefore, I try to protect the data readers used by Light. If you are very careful about closing data readers, everything should be fine, but if you forget to close one somewhere, you will most likely blame Light for the error (at least initially).

4) Event support is a good idea. Personally, I could not find a use for events, but if you can, you should definitely use them.

If you would like to send me your code, I would be glad to look at it. It is always useful for me to get feedback about my code.

And your English is very good. English is not my first language either - I am Russian.

Thanks, Sergey

P.S. Now, these are just my opinions. You should use and extend Light as you see fit.

You're right. In terms of object-oriented programming concepts, read-only columns must be defined that way.

3) About DataReaders. I do not want to create an IDbCommand directly for several reasons. I agree with the idea that SQL query code should not appear in C# code. SQL queries in the service layer (for example) of a project significantly complicate refactoring, because I have to change my database structure more often than I would like. Therefore, I am sure SQL queries must be generated automatically, and Light does this successfully. But you are right again - data readers must be encapsulated. I deleted the ExecuteReader methods from the Light source and added the following methods:

Now the code is outside of Light, and the GetDBSelectCommand method is declared as public. But Light could easily be modified to create and use AbstractLoaderService objects if they exist for a given type (I mean searching for classes in the project assembly).

1. About the new version of the code: I wrote a quick article and submitted it as an unedited article. It can be found here (http://www.codeproject.com/KB/database/lite.aspx).

2. Yes, I do like these changes much better. I think that if you decide to make your code multithreaded or thread-safe, it should be easier to do with this code than with the previous version that allowed an IDataReader to escape into client code.

There is a newer, completely rewritten, version that makes Light much easier to use and extend.

Didn't your mommy tell you it's not nice to tease other children! (grin)

I've been using a heavily tailored version of your code for some time now, and would also be very interested in a peek at the newer version. If nothing else, just a link to the sources would be terrific!

Hi, did you get the source code from my laptop? I've written a very, very similar library. It has slightly fewer features, but generally it's the same. So, congratulations on a great project! I also didn't want class generators, big XML definitions, etc. Just pure code...

Hi, cool project (I'm planning to use it in my next projects), but... Microsoft released a library called Enterprise Library that is database independent (based on the provider) and keeps supporting multiple databases simple (you only need to change the SQL strings).

You probably already know about it, but is it possible to implement this in the next release? This would also be good for database configuration (by using app.config).

Yes, I heard about the Enterprise Library from Microsoft. It is database independent only for those databases for which you have a provider, which can either be supplied by Microsoft (SQL Server, Oracle, OleDb, etc.) or be a third-party provider referenced in your project.

To allow Light to work with a different database, you would need to write a class that extends Light.Dao and overrides its abstract methods, which should produce SQL statements using the correct SQL syntax for the target database. See Light.SqlServerDao for an example. I would not mind writing a Dao object for Oracle, but I simply don't have an Oracle installation to test against.

As for configuration, I try to stay away from it, at least for now. You can configure Light by writing a simple factory class that returns the correct implementation of the Dao object to the rest of your code. The factory, in turn, can base its decision on any external configuration (such as web.config). Such a setup also allows you to work with more than one database at a time:
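The factory code is missing from the original text; here is a minimal sketch of the idea. The DaoFactory class and the "DatabaseType" setting name are illustrative, not part of Light:

```csharp
using System.Configuration; // requires a reference to System.Configuration

// Hypothetical factory that picks a Dao implementation based on
// an app.config / web.config setting.
public static class DaoFactory
{
    public static Dao Create()
    {
        // e.g. <appSettings><add key="DatabaseType" value="SQLite"/></appSettings>
        string dbType = ConfigurationManager.AppSettings["DatabaseType"];
        if (dbType == "SQLite")
            return new SQLiteDao();
        return new SqlServerDao(); // default to SQL Server
    }
}
```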

I'm having a problem with a nullable DateTime field in SQL Server 2005. If I read a row via Dao.Select where the DateTime field value is NULL, the property in the class is set to DateTime.MinValue (by default?).

In my code (VB 2005) I don't set any value for the field, but when I perform an update, there is a SQL Server overflow exception in the Dao.Update method at this line:

result += dbcmd.ExecuteNonQuery();

The exception text is like "SqlDateTime Overflow. Date must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM."

I think the problem is here: in the case where val is of the DateTime type, the test...

Yes, the DateTime datatype is troublesome. Light assumes that you as a programmer are aware of the fact that by default a DateTime field gets the value of DateTime.MinValue. The same problem exists for any other struct (int, long, etc.) that cannot have a value of .NET null. Types int and long have a default value of 0 (zero), which SQL Server accepts without throwing exceptions. But this could quickly introduce data inconsistency if you allow an integer column in the database to be NULL and map a simple .NET Int32 struct to it. That column will never be NULL, but will have a value of zero instead. Here, Light cannot check for zero and assign DbNull.Value - what if you really want to put a zero into that column?
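The point about default struct values can be seen in a few lines (a standalone sketch; the `Person` class here is illustrative, not from the library):

```csharp
using System;

// An unassigned DateTime field defaults to DateTime.MinValue (0001-01-01)
// and an unassigned int defaults to 0, so an ORM cannot tell "never set"
// apart from a deliberately stored value.
public class Person
{
    public DateTime NextAppt;
    public int VisitCount;
}

class Demo
{
    static void Main()
    {
        Person p = new Person();
        Console.WriteLine(p.NextAppt == DateTime.MinValue); // True
        Console.WriteLine(p.VisitCount == 0);               // True
        // SQL Server's datetime range starts at 1753-01-01, which is why
        // inserting DateTime.MinValue raises the "SqlDateTime overflow" error.
    }
}
```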

The reason I stayed away from what you proposed is simple: I did not want to impose any rules about using any datatypes. What if DateTime.MinValue actually means something else in your code? What if you want to store DateTime.MaxValue in the DateTime field if the database column is NULL?

Suppose there is a column in the database that holds the date of the next doctor appointment which you did not schedule yet. Suppose you also have an object field mapped to this column (Person.NextAppt). Personally, I would use Person.NextAppt = DateTime.MaxValue to represent this. Of course, neither MinValue nor MaxValue are realistic, but at least Person.NextAppt > DateTime.Today will be true and you will not get a message saying that you missed your appointment that was scheduled over 2000 years ago.

There are a couple of workarounds you can use.

First, you can use Nullable(Of DateTime) instead of just DateTime. Same goes for other built-in structs (Int32, etc.). (And yes, I know how weird it is to use them.)

Second, you can create private properties that are aliases for your real columns:
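The original snippet is not reproduced here, but the idea can be sketched as follows. The property names and the MaxValue sentinel are my assumptions; the point is that Light would map the private property while the rest of your code uses the nullable member:

```csharp
using System;

// Sketch of the "private alias property" workaround: the public member is a
// Nullable(Of DateTime), and the private alias (which you would map to the
// database column) converts between null and an in-range sentinel value.
public class Person
{
    public DateTime? NextAppt;          // null means "no appointment scheduled"

    private DateTime NextApptColumn     // map this one to the database column
    {
        get { return NextAppt.HasValue ? NextAppt.Value : DateTime.MaxValue; }
        set { NextAppt = (value == DateTime.MaxValue) ? (DateTime?)null : value; }
    }
}
```

Note that this relies on the library being able to see private getters and setters, which (as mentioned elsewhere in this thread) only works in the newer version of the code.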

If you decide to keep the changes you have suggested, don't forget to make the same modification to the Dao.Insert method as it will throw the same exception if you attempt to insert a record with a default value for a DateTime field.

Hope this helps. Let me know if you have any other questions.

P.S. I have a newer version of the code that fixes some subtle bugs and adds support for calling stored procedures for select statements. I will try to update the article soon.

There are small changes to this version already. There were improvements to exceptions - thanks to Jordan Marr.

Please let me know what other, more powerful features you would like to see in a future version. The current version gets the job done for me, so I don't know how many more updates I will do...

Yes, I am definitely interested in your changes. I decided against loading the parent class automatically and went with the MappingAttribute because I thought that there might be situations where you may not want to inherit all fields from the parent class. I thought this would be more flexible (although more typing of MappingAttributes), but if this scenario is never used...

This is great feedback, thank you! You are the first one to provide any feedback on the usage of this code. Until now, I only knew that the code was convenient for my own needs.

First, thanks for making your excellent code available for everyone to use. I am really enjoying using it.

Now I have a small suggestion regarding the use of the LightException: you should not wrap preexisting database exceptions (for example, SqlException) inside your LightException. This makes it harder to specifically handle SqlExceptions, because instead of using the prescribed way, like this:
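The comparison the commenter has in mind looks roughly like this (a compilable sketch; `FakeSqlException` and the `LightException` shown here are stand-ins, since the real `System.Data.SqlClient.SqlException` has no public constructor and Light's actual exception type is not reproduced here):

```csharp
using System;

// Stand-ins so the sketch compiles on its own.
public class FakeSqlException : Exception { }
public class LightException : Exception
{
    public LightException(string message, Exception inner) : base(message, inner) { }
}

public static class Demo
{
    // The prescribed way: the provider exception reaches the caller as-is
    // and can be caught by its own type.
    public static string HandleDirect()
    {
        try { throw new FakeSqlException(); }
        catch (FakeSqlException) { return "handled directly"; }
    }

    // What wrapping forces on the caller: catch the wrapper, dig into
    // InnerException, and rethrow anything you cannot actually handle.
    public static string HandleWrapped()
    {
        try { throw new LightException("update failed", new FakeSqlException()); }
        catch (LightException ex)
        {
            if (ex.InnerException is FakeSqlException) return "unwrapped and handled";
            throw;
        }
    }
}
```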

Thank you for your suggestion. You know, when I started writing this code I was debating whether to wrap everything inside a LightException or not. I decided to wrap all exceptions for the time being so it would be easier to write tests - fewer catch statements - since I could examine the exception and inner exception in the debugger. I thought that I would go back and fix this, but never got around to it (being lazy).

And yes, you are right, it is not very friendly in terms of exception handling. I will modify the code to allow SqlExceptions to be thrown to client code.

To be honest, I have not tried LINQ yet - I only read about it and glanced over some examples. But here are some of my thoughts:

1. LINQ needs .NET 3.0 or 3.5 compilers, while Light can run on 2.0 and can easily be modified to run on .NET 1.1. (Believe it or not, there are companies out there that are still coding for .NET 1.1.) This is a plus in my case because the company I work for is still on .NET 2.0 and some applications and servers are on .NET 1.1.

2. LINQ is obviously much more powerful: joins, calling stored procedures, generates classes from database schema (or the other way around), dynamically generates object types for intermediate results, and so on. (Personally, I don't like when something generates classes I use.) Light, on the other hand, does not do anything to your objects! Notice the phrase "your objects" - these are the objects that you define, not the compiler. This is a huge plus for me.

In general though, both LINQ and Light generate SQL statements to be executed against a database. Light has a rather limited scope, while LINQ is meant to cover the whole ADO.NET.

I sent article and code updates to the CodeProject team, so the article should be updated shortly. I also know of some more improvements to the code that I can make. I will update the article when I do them.

Great article and code, thanks. Is there any best practice on creating associations between objects? It seems that this can be done via custom attributes; it would be great to get your opinion on this. Thanks.

Yes, associations can definitely be done via custom attributes, but I explicitly tried to stay away from those. Associations between objects are beyond the scope of this library. When defining the scope, I limited it to working with one object mapped against one database table or view.

The short answer to your question is: yes, it can be done, but this library does not provide a way to do it.

The purpose of this library was to be small, simple, and to leave you in control of your objects and of when they are loaded from and persisted to a database. It does not aim to replace the use of ADO.NET classes, but to help with basic repetitive tasks (insert, update, delete, select) for one table or view. Many times it is easier to execute dynamic SQL in a stored procedure that accepts some parameters. These cases are not meant to be covered by this library either.

Association between objects would involve dealing with eager or lazy loading of associated objects. Eager loading may potentially load the whole database, which is rarely desired. Lazy loading would either involve some aspect-oriented programming or the library would have to provide proxies (usually subclasses) for your objects. Proxies would change the runtime type of your objects, which I definitely did not want to do, and aspect-oriented attributes would make this library much harder to use. (If there is a better way to do lazy loading, please let me know - I would be glad to learn something new.)

Actually, I have a newer version of the code but have not updated the article yet. (I will try to do it soon.) It adds support for triggers. Basically, you can tell the Light library to call certain methods of your object before and after each database operation (excluding the before-select operation). In these methods you can load and save any associated objects. This will be described in greater detail in the updated version of the article, but here is a preview:
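The preview code is not reproduced here, but the mechanism can be sketched as follows. The trigger method name (`AfterSelect`) and the reflection-based dispatcher are my guesses at how such a callback could work, not the actual Light API:

```csharp
using System;
using System.Reflection;

public class Address { public string City; }

// Illustrative domain object: the private "trigger" method loads an
// associated object that the ORM itself does not map.
public class Person
{
    public int Id;
    public Address HomeAddress;

    private void AfterSelect()
    {
        // In real code you would load the address by this.Id; hard-coded here.
        Address a = new Address();
        a.City = "loaded";
        HomeAddress = a;
    }
}

// How a library could invoke trigger methods by name via reflection.
public static class TriggerDispatcher
{
    public static void Fire(object target, string trigger)
    {
        MethodInfo m = target.GetType().GetMethod(
            trigger, BindingFlags.Instance | BindingFlags.NonPublic);
        if (m != null) m.Invoke(target, null);  // objects without the trigger are skipped
    }
}
```

The dispatcher looks the method up by name and silently skips objects that do not define the trigger, so opting in is just a matter of adding the method to your class.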

This looks remarkably like my own lightweight ORM library, which I posted on 23 Oct 2003: http://www.codeproject.com/dotnet/CsEntityPool.asp Although my implementation does not support generics (we didn't have them at that time), it does do some stuff with queries and optimizations. You won't even have to provide the SqlConnection object, for example.

I had never read your article (but I will in a minute, when I am done writing this post), although I wish I had read it before; maybe it would have saved me some time and I wouldn't have had to write this library. Actually, this library was based on Java's persistence API. The first version was more like Java's API; later I removed the unsupported API. Also, the DLL was named differently and most class names followed the Java API's names. I later changed them to make the library simpler and easier to understand, and to reflect the lack of many features.

Thank you for this excellent article. I downloaded the project code and I am impressed with the quality of your code and the great example it sets in terms of OOP. I will be using this library as a starting point for my own library.

Thank you, I am glad to be of some use.

I just wanted to let you know that I will update the article and the code soon (hopefully tomorrow). The first version of the code does not work correctly with private properties: when it attempts to get the getter and the setter of a property, it only looks for a public getter and setter. This is fixed in the next version of the code that I will post shortly.

There are also some other small improvements. For example, when extracting values from a data reader, the code no longer uses column names to get the values; it now uses integer indexes directly.

I don't know if this is of any interest, but since you will be using this library, I thought you might be interested in the latest version of the code. Hope this helps, and thank you for your compliments about the code.