While working on my current project I came up with this simple fluent interface for data access with SQL Server. My current project doesn't use any ORMs, for reasons that would wage war, so I won't go into those details. I came up with this fluent interface to reduce the mundane ADO.NET code and to cut down on repetition.

As with an ADO.NET Command object, this code can execute a non-query, return a scalar or return a data reader. My goal was to reduce as much code as possible, especially around the data reader. I wanted to avoid strings and ordinals and keep the calling code simple, so with a bit of experimenting I managed it with a class called "DynamicDbDataReader", which is a DynamicObject.
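As a minimal sketch of the idea (an illustrative reconstruction, not the full class), a DynamicObject can resolve member access against the columns of the current row, so calling code uses neither strings nor ordinals:

```csharp
using System;
using System.Data;
using System.Dynamic;

// Sketch of a "DynamicDbDataReader": member access like row.FirstName
// is looked up as a column in the current row of the wrapped reader.
public class DynamicDbDataReader : DynamicObject
{
    private readonly IDataReader reader;

    public DynamicDbDataReader(IDataReader reader)
    {
        this.reader = reader;
    }

    // Advance to the next row, as with IDataReader.Read().
    public bool Read()
    {
        return reader.Read();
    }

    // Called for e.g. row.FirstName; the member name is treated as a
    // column name. DBNull comes back as null.
    public override bool TryGetMember(GetMemberBinder binder, out object result)
    {
        int ordinal = reader.GetOrdinal(binder.Name);
        result = reader.IsDBNull(ordinal) ? null : reader.GetValue(ordinal);
        return true;
    }
}
```

Calling code then reads naturally: `dynamic row = new DynamicDbDataReader(command.ExecuteReader()); while (row.Read()) { string name = row.FirstName; }`.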

As a generic library, it had to allow connections to be supplied or configured in several ways: a connection instance, a connection string, or my preferred method, the name of a connection string that exists in the config file, which I keep as a string constant and pass into the method.

Parameters are supplied using a string for the parameter name and an object containing the parameter value. Method chaining is used to add more than one parameter, and the data type is inferred automatically. All parameters are input only.

Example: Running a stored procedure with many parameters that does not return a result set (ExecuteNonQuery)
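A call in this style might look like the following. The `DatabaseCommand` entry point, the method names and the `ConnectionNames` constant class are my illustration of the API described above, not the actual library:

```csharp
// Hypothetical fluent call: connection configured by the name of a
// connection string held as a constant, parameters chained with their
// types inferred from the supplied values, then ExecuteNonQuery.
DatabaseCommand
    .UsingConfiguredConnection(ConnectionNames.Main)
    .ForStoredProcedure("dbo.ArchiveOrders")
    .WithParameter("CustomerId", customerId)      // int inferred
    .WithParameter("CutOffDate", cutOffDate)      // DateTime inferred
    .WithParameter("IncludeCancelled", true)      // bit inferred
    .ExecuteNonQuery();
```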

Introduction

The purpose of this post is to document my recent journey moving from a typical layered web architecture to an architecture based around CQRS and Event Sourcing. I am not going to describe CQRS and Event Sourcing, as there are great blog posts already on the web that do the job perfectly well. Instead, this post is about the difficulties I faced in planning the transition and keeping the project on track during it.

Background

I hope to give you an understanding of my project, the existing architecture and the reasons why we moved to CQRS.

The product is a platform for driving networks of digital signage players: broadcasting video and generating playlists from content that is also managed by the platform. Many distributed components make up the platform, controlled by workflows. We have developed a UI, plus a REST/SOAP-based API to allow for integration (SaaS). The product has been designed as an enterprise-level product and requires many servers, so it needs to tick all the non-functional requirement boxes (scalability, availability, reliability, etc.).

Technology

Being a C# guy, it's written in .NET 4.0. The UI has been developed using Silverlight 4 (for our sins) and we plan to move to HTML5 in the very near future (that's another story). On the server side, as mentioned above, we provide a REST API and host our web services on IIS 7.5 through WCF. The database is SQL Server 2008 R2. We have a number of Windows services and lots of MSMQ.

Original architecture

When I started on this project, a prototype which had served its purpose had already been developed using Silverlight 4, RIA Services, Entity Framework and SQL Server. When we started on the production version, I chose to start from scratch. Mainly because we are agile and wanted to drive out the code through TDD, we wanted to drive out our build and deployment processes as well. At the time it was believed that the biggest networks we would support would be around 10,000 digital signage players. Taking other things into consideration, like the skills of the team and cost of ownership, it seemed best to "keep it simple" and go for a typical layered web architecture (CRUD), which suited the technology stack. Our layers were:

Stateless services that interact with the domain and map to WCF data contracts using AutoMapper.

Domain (POCOs with logic)

Persistence layer using NHibernate 3 and Fluent NHibernate.

SQL Server 2008 R2.

The Windows services communicated with the web services and listened on MSMQ queues.

All in all, this was a typical architecture. It was nicely decoupled, the ORM was abstracted from the domain, and we had a high level of code coverage from over 600-odd unit tests and many integration tests. It was a code base to be proud of.

Why Change?

Why change? We had a valid, well-understood architecture in place. But our non-functional requirements changed: we were originally talking about our biggest network being 10,000 players, which grew to 200,000+ and up to 1 million. We needed to scale, be available 99.99% of the time and be auditable. We could be dealing with thousands of web requests per second. Amongst these changes we were also starting to feel the pain with the current architecture. Now, I am not going to speak ill of this architecture, but we did find parts of the code base that started to smell.

Fat view models. It's common to implement CRUD behaviours for the entities in your model and expose them through your web services. This is fine, but what we were finding was that our application was being created in our view models. A view model would be injected with many proxies to get data from the various web services, which was then mashed up to form the UI.

Fat services. With all the will in the world, logic would find its way out of the domain and into a service, even with peer code reviews.

Multiple data mappings. This issue is where we read data from the database using an ORM into the domain (transform 1), then a requesting web service maps the domain object into a message object (DTO) (transform 2), and the UI takes the message object and transforms it into a presentation object (transform 3). Although this might not seem a lot, and AutoMapper made it easier, it's mundane code that needs testing because it can go wrong.

Interacting with the database. I have been pro-ORM, and most of the projects I have worked on over the last 4 years used ORMs; I have been happiest when it's NHibernate or Castle ActiveRecord. But dealing with our volumes of data, with performance being paramount, we needed finer control over the SQL. So it was back to crafting our own sprocs, mainly to respect the database and have the ability to tune it accordingly. Also, we have a complex domain model; we could unknowingly fetch half the database back through one query. Yes, we could lazy load, but then we could end up with a "select n+1" issue, and getting this balance right is a problem we could do without.

Why CQRS

I first came across CQRS about 2 years ago. At first, I was sceptical. About 6 months later it just clicked, and now I am completely sold on the concept.

More importantly, implementing an architecture with the CQRS concept and Event Sourcing rectified our current issues and satisfied all the non-functional requirements. It also gave us a lot of benefits.

Solving the current issues:

Fat view models: We now return the data needed for a view in one query, straight from a stored procedure. This means our view models need only know one proxy to get their data, and we no longer need to shape the data in the view model.

Fat services: Our services are just facades over our command handlers. Each command handler deals with a single behaviour.

Multiple data mappings: These are reduced. The results from our query store are mapped straight into a message object (DTO). We still need to map in the UI, though.

The queries are specific: we bring back only the data we need. Even before changing the query store to denormalised tables, we were seeing the benefits while still querying our relational database.

Beyond solving our current issues, the concept gave us much more.
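The "thin facade over command handlers" shape mentioned under fat services can be sketched roughly like this. All of the type names (RenamePlayerCommand and so on) are my own illustration, not the project's actual code:

```csharp
using System;

// One command per behaviour; the command is just data.
public class RenamePlayerCommand
{
    public Guid PlayerId { get; set; }
    public string NewName { get; set; }
}

// Each handler deals with exactly one behaviour.
public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

public class RenamePlayerCommandHandler : ICommandHandler<RenamePlayerCommand>
{
    public void Handle(RenamePlayerCommand command)
    {
        // Load the aggregate, apply the behaviour, persist the resulting events.
    }
}

// The service stays thin: it only forwards to the handler.
public class PlayerService
{
    private readonly ICommandHandler<RenamePlayerCommand> renameHandler;

    public PlayerService(ICommandHandler<RenamePlayerCommand> renameHandler)
    {
        this.renameHandler = renameHandler;
    }

    public void RenamePlayer(Guid playerId, string newName)
    {
        renameHandler.Handle(new RenamePlayerCommand { PlayerId = playerId, NewName = newName });
    }
}
```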

Planning the change

Before starting the move, I had to plan out many things. First I had to present this to my team and get them to buy into the idea. I also needed the nod from management, so I had to plan how the change was going to happen. I had to present the architecture and how we would move to it while still delivering new features, without knocking timescales into another year.

Most of the refactoring would be on the server side, so we could change the implementation service call by service call. The calls you start with are important. We chose to tackle the lookup data and administration service calls first, knowing that those parts of the domain and database would give us a good foundation moving forward.

Intercepting service calls one by one seemed the best approach because the risk was small and easy to revert. Doing the intercepting would be easy, but driving out the CQRS-based architecture was not, so we allowed a block of time to create the new databases, domain and so on. We had to logically group the features and try to do a feature per sprint.

Although we were planning to move to a new architecture, we still had to ensure that the old database was updated and that the existing features still worked. This was not a big issue, as we could listen to events coming through the event bus and update the old database accordingly. This felt good; we were using a clear benefit of the new architecture to keep the old stuff in sync. Once we had the whole system moved over, we would remove this bridge and the old database.

Hurdles that got in the way

Although I planned this, things happened that I didn't see coming.

In the beginning, I worked in isolation to craft out the implementation on a separate branch while the rest of the team carried on with normal daily development. At various points I tried to walk the team through the new architecture to bring up their level of understanding. I say I tried: CQRS was new to them and they needed more exposure to really get it. My mistake was not getting the team involved more frequently.

I got to a point where I had to refactor the core domain and could not pussyfoot around it any longer, so I just went for it. I was blessed that the team had got a fairly stable build out to relieve the pressure, but then I had to merge my branch back into the main trunk. This was painful: I had been working on the branch for 3 months, and although I had planned out which parts of the code base to work on, changes and conflicts still happened.

Once the code was merged, we tackled the core domain, which took another 3 months. Progress was slow because I had to educate the team, and the team had to make mistakes as part of that learning. Our deadlines slipped, the board needed to be updated, and this affected the marketing and product launch. All of this was caused by not educating the team at the beginning and not doing enough stakeholder management.

Standing in the light at the end of the tunnel

After 6 months of hard work, the team and I managed to complete the refactoring. If I had to do it again (which I hope I do), I would produce the software architecture description document and other documentation sooner, and do better stakeholder management.

Our product has massively benefitted from CQRS. Was it worth it? Absolutely.

It's been a very long time since I last blogged. Many things have happened over the last year or two that stopped me from keeping my blog active, and it has decayed as a result ;-(. I started freelancing just over a year ago, and the long days and lots of travel got me out of the routine. Bundled up with family time, martial arts and my Xbox, my blogging time became non-existent. But now I have a bit more time in my life, and I hope to get a few fresh posts out. I have had requests for source code for some of my older posts, and I can only apologise: I haven't kept the code, so I can't email it.

Introduction

I haven't blogged over the last 6 months. The main reason is Silverlight 3: I have been spending most of my time consuming myself with Silverlight 3 and related technologies, plus working on too many projects and holding down a full-time job. My focus is on developing line-of-business applications that communicate with distributed services, rather than very pretty UIs. The obvious technology for communicating with backend services from Silverlight (or any UI in the .NET world) is WCF.

One of the golden rules of UI development is not to lock up the UI while running long-running processes, and this includes communicating with backend services. Not locking the UI is nothing new; multi-threaded applications have been around for years and years. So what is this post about? It's about using WCF asynchronously: specifically with Silverlight, but the UI could be WPF (or WinForms, with some subtle differences).

The approach

Add service reference

The easiest way to achieve asynchronous communication with WCF and Silverlight is to use "Add Service Reference" (ASR) in Visual Studio. This works and takes about a minute to do. The wizard generates a load of code for you, based on your service and data contracts, and you would be mad to modify it. If you are in control of the services then you might want to reuse the message types, which is possible with ASR. When you are prototyping/spiking, ASR is a good choice. So what's my issue?

Control. While building an application with WCF and Silverlight, I frequently came to the point where I needed to manipulate the data fetched from a service before displaying it in the UI.

TDD and IoC. Both are very much part of me, and while using TDD with IoC I have to go out of my way to support them with the generated code from ASR.

The code started to smell. I was wrapping the calls to the xxxClient class generated for a service by ASR. It was not a pleasant smell.

When you look at the generated code and know WCF well enough, you realise that you have much more code than is required. It's not a big thing, but it increases the size of the XAP file downloaded to the client.

Channel Factory

A simple object that is core to the client communication process. When you are in control of the services and the UI, it's the best choice because you can reuse the same types. If you have never used the channel factory: it's a generic class that you use with your service contract (the interface), and you call a method called "CreateChannel" that returns an instance of your service contract as a proxy. You can then call operations on the proxy like a regular object instance.
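In its simplest synchronous form it looks like this. The contract, the operation and the "GreetingEndpoint" config name are invented for illustration:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract shared between client and service.
[ServiceContract]
public interface IGreetingService
{
    [OperationContract]
    string Greet(string name);
}

public static class GreetingClient
{
    public static string CallService()
    {
        // "GreetingEndpoint" is assumed to be an endpoint name in the
        // client config; you can also pass a Binding and EndpointAddress.
        var factory = new ChannelFactory<IGreetingService>("GreetingEndpoint");
        IGreetingService proxy = factory.CreateChannel();
        try
        {
            // Call operations on the proxy like a regular object instance.
            return proxy.Greet("world");
        }
        finally
        {
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}
```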

The Design goal

While working on a project with Silverlight that communicates with backend services, I wanted to apply a pattern that I could use consistently for all communication with the services. All of my services followed the request/response pattern and worked synchronously, and I am perfectly happy to keep them that way.

From the Silverlight end, I want to make a service request and receive the response without locking up the UI thread. Once I have the response back from the service, I might place the results in the UI, so I need to marshal the call back onto the UI thread. In the past I have used the "AsyncCallback" delegate to accomplish this, and I needed the same approach here. To add to my problem, I wanted a pattern that does not involve writing loads of code, especially framework code.

The Async Pattern

In WCF, the operation contract attribute contains a boolean property called "AsyncPattern". While looking into how to use this, I discovered that you have to change your service contract to:

Return “IAsyncResult”.

Prefix the Operation with “Begin”

Add two additional parameters to the operation: the callback (AsyncCallback) and a state object.

Add another method to the interface that complies with the "AsyncCallback" delegate, with the same name as the operation but prefixed with "End" instead of "Begin", and without any WCF attributes.
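Following those steps, a request/response contract and its async twin might look like this. The names are illustrative; note the `Name` property on the async contract, which keeps the two contracts describing the same operation (WCF strips the Begin/End prefixes):

```csharp
using System;
using System.ServiceModel;

public class GetCompaniesRequest { }
public class GetCompaniesResponse { }

// The original synchronous contract stays on the service side.
[ServiceContract]
public interface ICompanyService
{
    [OperationContract]
    GetCompaniesResponse GetCompanies(GetCompaniesRequest request);
}

// The async twin lives only in the Silverlight project; the endpoint
// still uses the original contract, so the contract name must match.
[ServiceContract(Name = "ICompanyService")]
public interface ICompanyServiceAsync
{
    [OperationContract(AsyncPattern = true)]
    IAsyncResult BeginGetCompanies(GetCompaniesRequest request,
                                   AsyncCallback callback, object state);

    // No WCF attributes on the End method.
    GetCompaniesResponse EndGetCompanies(IAsyncResult result);
}
```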

My first thought was: how would my service implementation have to change to suit this interface? I was put off by this; I liked my standard service contract that returned my specific response object and took a specific request object.

What was not clear to me, and what I never found on the web, was that I could keep my original service contract and service implementation as they were and create another interface, local to Silverlight, that uses the async pattern. My WCF configuration (endpoint) is still configured to use my original contract.

Communicating with WCF from within Silverlight

So, to interact with your service(s) from within Silverlight, you use the channel factory with the async version of your service contract. The following code shows how to interact with the service by creating a request object and constructing a callback method that handles the response and marshals execution back to the UI thread.

It's a little lambda-tastic, but you don't have to use lambdas and could use methods instead (actually, if my code grows, I might make the callback a class with defined methods; maybe a future post to follow up on). The Company, Contact, message types (request/response) and mapper objects aren't relevant to this subject, so there's no need to show that code.

The pattern is the key issue. I get a proxy from the "GetCompanyService" method using the channel factory. For the "Get" methods, I create the return types (observable collections) and return them at the end of the method. At runtime, when the main method is called, the return type (an observable collection) is newed up and returned to the calling code.

In my project I am using the MVVM pattern, so the return types of these methods are exposed as public properties on my view model. My view is bound to the view model via XAML. So the view loads straight away, and when the async call completes the return type is updated and my view automatically shows the results.

The code between the return type declaration and the return statement is the callback, which will be executed when the service responds. The first thing that happens in the callback is getting the response object from the service call, by calling the "End..." method using the AsyncState property on the IAsyncResult interface, which is what the variable "result" happens to be. The AsyncState property returns the reference to the proxy that was passed in to the "Begin..." call. This is all part of the standard pattern when using asynchronous callbacks.

Once I have the response from the service (although it's not relevant to this), I map the results into types that are local to the Silverlight application. Where I have a collection as the return type, I add my newly mapped objects to the return type reference inside another method. This little method is invoked by the "Dispatcher" object, which executes it on the UI thread. My class inherits from "DependencyObject" so that I can use the Dispatcher. My class also implements an interface, which again isn't relevant to this subject.
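Put together, the flow described above looks roughly like this. It is a sketch under assumed names (the `ICompanyServiceAsync` contract, `GetCompanyService`, the message types and the mapper are all illustrative), written as a method on a class deriving from DependencyObject:

```csharp
public ObservableCollection<Company> GetCompanies()
{
    // Created up front and returned immediately; the view binds to it
    // and the items arrive later, when the service responds.
    var companies = new ObservableCollection<Company>();
    ICompanyServiceAsync proxy = GetCompanyService();

    proxy.BeginGetCompanies(new GetCompaniesRequest(),
        result =>
        {
            // AsyncState carries the proxy we passed in below.
            var service = (ICompanyServiceAsync)result.AsyncState;
            GetCompaniesResponse response = service.EndGetCompanies(result);

            // Marshal back onto the UI thread before touching the
            // bound collection.
            Dispatcher.BeginInvoke(() =>
            {
                foreach (var companyInfo in response.Companies)
                    companies.Add(mapper.MapToCompany(companyInfo));
            });
        },
        proxy);

    return companies;
}
```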

This works very well for me. I have many service calls happening at once in some of my rich/complex UIs, and (although in a dev environment) the performance is very good.

Continuing on from part 3, the business layer, this post focuses on data access. In part 3, I created an interface called "ICustomerRepository" and a class "CustomerRepository", which will now change to actually do something. Using the repository interface, our business code can interact with it without being coupled to the technology used to communicate with the data store.

This post is part of a series about creating an architecture for a line-of-business application with ASP.NET MVC. The business and persistence layers have nothing to do with ASP.NET MVC or any other UI technology, so this approach is relevant to any .NET application. The next and last post of the series will place the business and persistence layers behind a WCF endpoint.

What technology?

We have many choices in the data access area. You could use either an ODBMS or an RDBMS. If you are using an RDBMS, then it is a pretty good assumption that you are using MS SQL Server. So what choices do you have?

ADO.NET – Simple to understand, but as your application grows you will end up writing the same code over and over unless you create your own abstraction over the top. You have to write the SQL yourself, which means you will end up with many stored procedures (sprocs). On a positive note, you have control of the SQL, and you can use sprocs as an API to your database. You can shape the result set in your sproc so it is easier to handle in the application. This is the traditional approach; personally I had been using it since SQL 7.0, before .NET, and also back in the .NET 1.0 and 1.1 days. From experience, you write more code mapping the results of the sproc to your classes. Other ways of using ADO.NET are to put the SQL inline with the .NET code or to use typed DataSets (a poor man's attempt at an ORM), but these are just wrong on many counts.

Object Relational Mappers (ORMs)

ORMs have been around for years and have become more popular over the last 2-3 years in the .NET world. There are two types of ORM: the first is based on the Active Record pattern, the second on the Data Mapper pattern. The Active Record pattern, in short, is where your database table and domain class have a 1-to-1 mapping, so your domain classes mirror the database; it works well when you have a good database schema. The Data Mapper pattern is where your domain classes are different from the database tables. There are lots of ORMs available; here is a summary of the ones I know well.

NHibernate – IMHO the most powerful ORM to date. It supports the Data Mapper pattern, has been around for a while and has a big community around it, plus a number of tools, like NHProf and Fluent NHibernate, that improve the experience. Being as powerful as it is, it has a bigger learning curve. It has its own methods for querying data using HQL and detached criteria, as well as LINQ (not fully implemented). NHibernate also supports many different RDBMSs. Negatives for me are session management and the initialisation required when you start your application. By default it uses XML mapping files (I hate writing XML but enjoy writing XAML; work that one out), and there are a handful of DLLs that you need to ship with your application.

Castle ActiveRecord – My personal favourite. Built on top of NHibernate, it supports the Active Record pattern. You add attributes to your classes and properties to set up the mapping. It's simple, but you have the power of NHibernate under the hood if you need it. Session management is simplified, though it still requires initialisation when your application starts, and the ActiveRecordMediator is very simple to use. The negatives are that it requires shipping the same DLLs that NHibernate needs, plus the Castle ones. It borrows its querying functionality from NHibernate.

LINQ to SQL (DLINQ) – Very simple to use; supports the Active Record pattern. You can use attributes or XML to define the mapping, and it comes with a designer (SQL Metal). Personally, I hand-craft my domain classes, as the designer gives you a lot of boilerplate code that in most cases is not needed; plus I model the domain first, not the database. It's built into the .NET Framework, so there are no external DLLs to manage. There is no session management; the data context only needs a connection string, and it uses transactions by default. The only querying mechanism is LINQ, which is fully implemented. Negatives: it is limited to SQL Server and SQL CE, and you have to include properties in your classes that relate to foreign keys in the database.

LINQ to Entities (ADO.NET Entity Framework) – I am waiting for the next version of this, as the current version is data driven instead of domain driven, so for me it is not an option at the moment.

Object Database Management Systems (ODBMS)

I only have experience of one ODBMS: db4o. Using an object database is a change in mindset and is not that common across the .NET developer community, although products like db4o do have a massive community of .NET and Java developers. I think the slow uptake is down to the fact that RDBMSs have been around for 30-odd years, have got better, and their vendors keep changing their products to support current market trends like XML.

db4o – Really easy to use and requires a tiny amount of code. It uses a file, either locally or on a remote server. No mapping is required: you use OO in your domain, so why not store it as OO in the database? It supports LINQ and has other ways to query the database too, and the documentation and support are great. Negatives... I wish it was more mainstream so I wouldn't have to use RDBMSs any more.

While developing a project with db4o, you realise that you don't think in rows of tables or association tables. Even better: when you already have a database in place that contains data and you make changes to your classes, like adding properties, do you need a database migration script? No. Some smart stuff inside db4o knows that the type has changed and handles it. You don't lose data; it just works. This is a big positive for me. When using an RDBMS, tracking database changes during development is a pain, and you need a process in place, not only for development but for deployment to other environments. If you have a bug in a script, it stops you from deploying your release, as you need to keep your scripts in sync with your code. Needless to say, when managing SQL scripts, you need to make changes within a transaction so they can roll back in the event of failure, and you must also write your scripts so they can be run more than once. With db4o this is a non-issue: no scripts, no process, no problems.

So what technology?

OK, enough rambling. With my agile head on, I will choose the simplest option, which I believe is db4o.

Implementing the Repository

At the end of part 3, I created a domain class called “Customer”, the ICustomerRepository interface and a concrete implementation called CustomerRepository.

Using db4o

If you haven't got db4o, you can download the MSI from here. I am using version 7.4 for .NET 3.5.

In the Web.Business project, add references to:

Db4objects.Db4o.dll

Db4objects.Db4o.Linq.dll

System.Configuration

One difference between SQL Server and db4o is the connection lifetime. In SQL Server, connections are opened and closed as quickly as possible, and the connection is returned to a pool (if configured). In db4o, this works differently: when you start your application you open the connection, and you keep it open until the application ends. There are a few ways to do this. One of the common approaches I see in web applications that need to initialise a persistence component like NHibernate or Castle ActiveRecord is to configure it in the Application_Start() method in Global.asax. I think this stinks: why does the UI application need to know about the persistence? My preferred way is to make it happen where it's needed. Because I need to ensure the lifetime is the same as the application's, I use a singleton to hold the reference. Here is that class.
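The original listing is missing from this copy of the post, so here is a sketch of what such a singleton looks like; the class name and details follow the text, but the implementation is my reconstruction:

```csharp
using System.Configuration;
using Db4objects.Db4o;

// Holds the single db4o connection for the lifetime of the application.
public sealed class DatabaseContext
{
    private static readonly object padlock = new object();
    private static IObjectContainer database;

    private DatabaseContext() { }

    public static IObjectContainer Database
    {
        get
        {
            lock (padlock)
            {
                if (database == null)
                {
                    // "DatabaseFileName" is the appSetting shown below.
                    string fileName = ConfigurationManager.AppSettings["DatabaseFileName"];
                    database = Db4oFactory.OpenFile(fileName);
                }
                return database;
            }
        }
    }
}
```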

I have added an application setting into the web.config called "DatabaseFileName" which, as you might have guessed, is the path to the db4o database file.

<appSettings>
  <add key="DatabaseFileName" value="C:\Web\WebDb.yap"/>
</appSettings>

Now to make the CustomerRepository use the DatabaseContext to fetch the data. The finished code looks like this.
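Since the finished listing has not survived in this copy, here is a sketch of the shape it takes, assuming a DatabaseContext singleton exposing the db4o IObjectContainer and a `FetchAllCustomers` method on the repository interface:

```csharp
using System.Collections.Generic;
using System.Linq;
using Db4objects.Db4o.Linq; // db4o's LINQ provider over IObjectContainer

internal class CustomerRepository : ICustomerRepository
{
    public IList<Customer> FetchAllCustomers()
    {
        // db4o LINQ: the container is queried directly by type, no
        // SQL and no mapping layer.
        return (from Customer customer in DatabaseContext.Database
                select customer).ToList();
    }
}
```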

That's it, done. Except the database is empty. Ideally, in the real world, you would have screens in your application that you can use to populate the database. As we don't, I have created a test that can be run to insert data into the database.

If you run the application, the data will be pulled from the database and displayed in the view. If this were a real application, I would create a generic abstract EntityRepository class that takes a domain class as its generic type. I would make this base class use the DatabaseContext, and that way I would not be repeating code.
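That generic base class could be sketched like this (an illustration of the suggestion, not code from the series; it assumes the same DatabaseContext singleton):

```csharp
using System.Collections.Generic;
using System.Linq;
using Db4objects.Db4o.Linq;

// Generic repository: the common db4o plumbing lives here once.
internal abstract class EntityRepository<TEntity>
{
    public IList<TEntity> FetchAll()
    {
        return (from TEntity entity in DatabaseContext.Database
                select entity).ToList();
    }

    public void Save(TEntity entity)
    {
        DatabaseContext.Database.Store(entity);
        DatabaseContext.Database.Commit();
    }
}

// Concrete repositories now only add entity-specific queries.
internal class CustomerRepository : EntityRepository<Customer>, ICustomerRepository
{
}
```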

Introduction

The focus of this post is to describe how I go about developing the business layer. It follows on from my previous post, ASP.NET MVC – Creating an application with a defined architecture. In that post, I was fulfilling a requirement to fetch a list of customers and display them on a page with ASP.NET MVC, so I will continue with that as the example.

The Plan

At the end of the previous post, I had an object called "CustomerAgent" that just created two instances of the "Customer" object. This is going to be replaced with a call to the business layer to fetch a list of customers. The business layer will return the customers as a message type. The CustomerAgent will map the message type to the Customer object already defined in the "PresentationProcesses" assembly. We will drive this out with a test.

In the business layer, we will need to respond to the call to fetch a list of customers. Our business layer will ask a "repository" to fetch customers from a data store, then take the list of customers and map them into a message that is returned to the caller.

In the next post to continue on from this one, the repository will need to get the customers from somewhere and map them into instances of objects that represent a customer in the domain model. An ORM tool will simplify this process.

While implementing this plan, we will be testing the interactions between layers. We will also be registering more types in our IoC container.

Putting the plan into action

The CustomerAgent is not under test and currently returns fake instances, so we will create a new test assembly, place the "CustomerAgent" under test and start driving out the interaction with the business layer.

Create a new class library in your solution called “Web.PresentationProcesses.Specs”.

Add a new class (test fixture) to this new assembly called “Fetching_a_list_of_customers” and decorate it with the “[TestFixture]” attribute.

Add a test called "Should_fetch_and_return_customers". This test will fetch a list of "customers" and assert that the result is not null. At this point the test will pass, as the "CustomerAgent" is still just returning two made-up instances. Here is the test (it's not the final test; it's going to change).
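The test listing is missing from this copy; roughly, it looks like this, assuming NUnit and the `GetCustomerList` method on the agent:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class Fetching_a_list_of_customers
{
    [Test]
    public void Should_fetch_and_return_customers()
    {
        var customerAgent = new CustomerAgent();

        IList<Customer> customers = customerAgent.GetCustomerList();

        // The agent still returns two made-up instances, so this passes.
        Assert.IsNotNull(customers);
    }
}
```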

Currently the "CustomerAgent" is a public class, and I don't want my implementations to be public: the interface will be the only way the layer above can communicate with it. But we still want our tests to be able to work with the concrete implementation, and so will our mocking tool. In the "Web.PresentationProcesses" assembly, open up "AssemblyInfo.cs", add the following lines, then save and close the file.
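The lines in question are InternalsVisibleTo attributes. The first name follows the solution layout in this series; "DynamicProxyGenAssembly2" is the assembly that Castle-based mocking tools (such as Rhino Mocks) generate their proxies into, and the bare name shown here works for unsigned assemblies:

```csharp
using System.Runtime.CompilerServices;

[assembly: InternalsVisibleTo("Web.PresentationProcesses.Specs")]
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
```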

Next, change the "CustomerAgent" class to be "internal" instead of "public", and run the tests to verify that everything still works.

Driving out the business layer

As mentioned earlier, the service is going to return data as a message type, using the very common "request/response" message pattern, also known as "request/reply".

The business layer is going to be in its own assembly and will run in-process with the MVC application. Without too much effort, we could in the future place the business layer behind a WCF endpoint and host it in another process. I personally would not do this by default; the reasons for making the business layer remote must be that you either have more than one application interacting with the business layer tier, or that you need scalability. Choosing scalability over performance comes down to your application's needs, availability and the size of the user base. Moving the business layer out of process is another blog post, which I will write as the final, optional post of this series, although it's not that different.

At the moment we have no business layer, so from the test above we start defining the interface (contract) in the business layer.

I am a big fan of ReSharper; it makes my world a better place. It saddens me to think that there are developers out there coding without the fruits that ReSharper gives.

Back in the unit test, to cut a long story short: I am going to set up an expectation on an interface and return a response object. I have also driven out the properties of the response object. The test also asserts that the "Customer" UI object is populated with the values from the response. Here is the test fixture. I have created the new types within the same code file as the fixture; I do this sometimes while I am cutting new code and creating new types, then (with the help of ReSharper) I move the classes into their own files and into the correct assemblies, which I do as the next step.

At the moment, the code will compile, but the test will fail. As mentioned in the last step, I am going to move the “FetchCustomerResponse”, “CustomerInfo” and “ICustomerService” into another assembly.

Add a new class library to the solution called “Web.Business”

Create a folder called “Contracts” and move the “ICustomerService” into this folder, changing the namespace to match its new location.

In the Contracts folder, add a new folder called “Messages”. Move the CustomerInfo and FetchCustomerResponse types into this new folder and change the namespaces.

In both the “Web.PresentationProcesses” and “Web.PresentationProcesses.Specs”, add a project reference to “Web.Business”.

The CustomerAgent needs to be able to talk to the “ICustomerService”, so change the constructor of the “CustomerAgent” to accept an “ICustomerService” reference and hold it in a field. The “SetUp” in the unit test changes to pass the service into the constructor of the “CustomerAgent”.

Now, to make the test pass, the code in the “GetCustomerList” method of the “CustomerAgent” has been replaced. Here is the code for the modified “CustomerAgent”, as well as the changes to the SetUp method in the test.
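A hedged reconstruction of that change follows (the supporting type shapes and the FetchCustomers name are assumptions; a stub takes the mock's place so the sketch can be exercised on its own):

```csharp
using System.Collections.Generic;
using System.Linq;

// Supporting types repeated so the sketch compiles on its own (shapes are guesses).
public class CustomerInfo { public int Id { get; set; } public string Name { get; set; } }
public class FetchCustomerResponse { public IList<CustomerInfo> Customers { get; set; } }
public interface ICustomerService { FetchCustomerResponse FetchCustomers(); }
public class Customer { public int Id { get; set; } public string Name { get; set; } }

// The modified agent: it now takes the service through its constructor
// and maps the response messages onto UI "Customer" objects.
internal class CustomerAgent
{
    private readonly ICustomerService _service;

    public CustomerAgent(ICustomerService service)
    {
        _service = service;
    }

    public IList<Customer> GetCustomerList()
    {
        return _service.FetchCustomers().Customers
            .Select(c => new Customer { Id = c.Id, Name = c.Name })
            .ToList();
    }
}

// Stub used in place of the test's mock so this sketch can be run.
internal class StubCustomerService : ICustomerService
{
    public FetchCustomerResponse FetchCustomers()
    {
        return new FetchCustomerResponse
        {
            Customers = new List<CustomerInfo> { new CustomerInfo { Id = 1, Name = "Acme" } }
        };
    }
}
```

The SetUp method correspondingly becomes something along the lines of `_agent = new CustomerAgent(_serviceMock.Object);` when using Moq.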


In the “PresentationProcessesModule”, in the Configure method, create a new instance of the “BusinessModule” and call its “Configure” method, passing in the container.
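The wiring might look roughly like this. The container abstraction and the registration call shown here are hypothetical stand-ins; the post's actual IoC container API will differ, but the module-delegation shape is the point:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical container abstraction; the real container API may differ.
public interface IContainer
{
    void Register(Type contract, Type implementation);
}

// Minimal recording container so the sketch can be exercised.
public class SimpleContainer : IContainer
{
    public Dictionary<Type, Type> Registrations { get; } = new Dictionary<Type, Type>();

    public void Register(Type contract, Type implementation)
    {
        Registrations[contract] = implementation;
    }
}

public interface ICustomerService { }
public class CustomerService : ICustomerService { }

// The business module owns the registrations for its own assembly.
public class BusinessModule
{
    public void Configure(IContainer container)
    {
        container.Register(typeof(ICustomerService), typeof(CustomerService));
    }
}

// The presentation module hands the container on to the business module.
public class PresentationProcessesModule
{
    public void Configure(IContainer container)
    {
        new BusinessModule().Configure(container);
        // ...presentation-layer registrations would follow here...
    }
}
```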

The Customer Service

Now to implement the CustomerService. The service itself is just a facade that brings its internals together to provide a simple API. The service delegates to objects that have the responsibility to carry out the required actions. In the case of the CustomerService, it asks a repository to return a list of customers. The customers are instances of a domain entity called “Customer”. The service then maps the domain type to the message type.

I keep the domain isolated from the outside world; the only way to interact with the domain from the outside is through a service. The service does not contain much logic. If it did, that logic would not be in the domain and the domain would not be rich; a thin domain with rich services would be the “anemic domain model” anti-pattern. In this example I am just pulling data out of a repository, so there is no business logic.

Firstly, create a new class library assembly called “Web.Business.Specs”, which, as you may have guessed, is going to hold the tests for the business assembly. Add references to NUnit and Moq/Rhino Mocks or whatever your preferred mocking tool is.

We are going to be testing internal objects within the Web.Business project, as described earlier in this post. Add these two lines to the AssemblyInfo.cs in the “Web.Business” project.
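The two lines are `InternalsVisibleTo` attributes along these lines. “Web.Business.Specs” is the test assembly named above; “DynamicProxyGenAssembly2” is the well-known name of the Castle dynamic-proxy assembly that Moq and Rhino Mocks generate mocks into (if your assemblies are strong-named, a public key must be appended to each name):

```csharp
using System.Runtime.CompilerServices;

// Let the test assembly, and the mocking framework's generated proxies,
// see the internal types in Web.Business.
[assembly: InternalsVisibleTo("Web.Business.Specs")]
[assembly: InternalsVisibleTo("DynamicProxyGenAssembly2")]
```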

Create a new test fixture called “Fetching_customers”. Our test is going to ask the service to provide a list of customers. This information will be provided as a list of “CustomerInfo” objects contained in the “FetchCustomerResponse”. Here is the test.
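A sketch of what such a fixture exercises, with a hand-rolled stub repository standing in for the mocking-tool expectation so it is self-contained (the FetchCustomers and GetAll names and the type shapes are assumptions):

```csharp
using System.Collections.Generic;
using System.Linq;

// Domain and persistence types the test drives out (shapes are assumptions).
public class Customer { public int Id { get; set; } public string Name { get; set; } }

public interface ICustomerRepository
{
    IList<Customer> GetAll();
}

// Message types from the Contracts/Messages folder.
public class CustomerInfo { public int Id { get; set; } public string Name { get; set; } }
public class FetchCustomerResponse { public IList<CustomerInfo> Customers { get; set; } }

// The service under test: asks the repository for domain Customers
// and maps them onto CustomerInfo message objects.
internal class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public FetchCustomerResponse FetchCustomers()
    {
        return new FetchCustomerResponse
        {
            Customers = _repository.GetAll()
                .Select(c => new CustomerInfo { Id = c.Id, Name = c.Name })
                .ToList()
        };
    }
}

// Hand-rolled stub in place of a Moq/Rhino Mocks expectation.
internal class StubCustomerRepository : ICustomerRepository
{
    public IList<Customer> GetAll()
    {
        return new List<Customer> { new Customer { Id = 1, Name = "Acme" } };
    }
}
```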

The above test drove out the customer repository interface and a domain object called “Customer”. I have added two new folders, “Persistence” and “Domain”, to the “Web.Business” project, placed the Customer object in the Domain folder and the repository interface in the Persistence folder. Getting a step ahead, I have also created a concrete implementation of CustomerRepository. Here is the code for the new types:
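A hedged sketch of those new types follows. The folder placement mirrors the post; the property set, the GetAll name, and the hard-coded data in the concrete repository (which would eventually be replaced by real data access) are my assumptions:

```csharp
using System.Collections.Generic;

// Domain/Customer.cs -- the domain entity (property set is a guess).
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Persistence/ICustomerRepository.cs -- the contract driven out by the test.
public interface ICustomerRepository
{
    IList<Customer> GetAll();
}

// Persistence/CustomerRepository.cs -- concrete implementation; real data
// access would go here, hard-coded data stands in for it in this sketch.
internal class CustomerRepository : ICustomerRepository
{
    public IList<Customer> GetAll()
    {
        return new List<Customer>
        {
            new Customer { Id = 1, Name = "First customer" },
            new Customer { Id = 2, Name = "Second customer" }
        };
    }
}
```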