Introduction

Although developers have been unit testing their code for years, it was typically performed after the code was designed and written. As a great number of developers can attest, writing tests after the fact is difficult to do and often gets omitted when time runs out. Test-driven development (TDD) attempts to resolve this problem and produce higher quality, well-tested code by putting the cart before the horse and writing the tests before we write the code. One of the core practices of Extreme Programming (XP), TDD is acquiring a strong following in the Java community, but very little has been written about doing it in .NET.

What Are Unit Tests?

According to Ron Jeffries, Unit Tests are "programs written to run in batches and test classes. Each typically sends a class a fixed message and verifies it returns the predicted answer." In practical terms this means that you write programs that test the public interfaces of all of the classes in your application. This is not requirements testing or acceptance testing. Rather, it is testing to ensure the methods you write are doing what you expect them to do. This can be very challenging to do well. First of all, you have to decide what tools you will use to build your tests. In the past we had large testing engines with complicated scripting languages that were great for dedicated QA teams, but weren't very good for unit testing. What journeyman programmers need is a toolkit that lets them develop tests using the same language and IDE that they are using to develop the application.

Most modern Unit Testing frameworks are derived from the framework created by Kent Beck for the first XP project, the Chrysler C3 Project. It was written in Smalltalk and still exists today, although it has gone through many revisions. Later, Kent and Erich Gamma (of Patterns fame) ported it to Java and called it JUnit. Since then, it has been ported to many different languages, including C++, VB, Python, Perl and more.

The NUnit Testing Framework

NUnit 2.0 is a radical departure from its ancestors. Those systems provided base classes from which you derived your test classes. There simply was no other way to do it. Unfortunately, they also imposed certain restrictions on the development of test code, because many languages (like Java and C#) only allow single inheritance. This meant that refactoring test code was difficult without introducing complicated inheritance hierarchies.

.NET introduced a new concept to programming that solves this problem: attributes. Attributes allow you to add metadata to your code. They typically don't affect the running code itself, but instead provide extra information about the code you write. Attributes are most often used to document your code, but they can also be used to provide information about a .NET assembly to a program that has never seen the assembly before. This is exactly how NUnit 2.0 works. The Test Runner application scans your compiled code looking for attributes that tell it which classes and methods are tests. It then uses reflection to execute those methods. You don't have to derive your test classes from a common base class. You just have to use the right attributes.

NUnit provides a variety of attributes that you use when creating unit tests. They are used to define test fixtures, test methods, and setup and teardown methods. There are also attributes for indicating expected exceptions or for causing a test to be skipped.

TestFixture Attribute

The TestFixture attribute is used to indicate that a class contains test methods. When you attach this attribute to a class in your project, the Test Runner application will scan it for test methods. The following code illustrates the usage of this attribute. (All of the code in this article is in C#, but NUnit will work with any .NET language, including VB.NET. See the NUnit documentation for additional information.)
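A minimal example might look like this (the namespace and class names here are illustrative, not from the original article):

```csharp
using NUnit.Framework;

namespace BankTests
{
    // The TestFixture attribute tells the Test Runner
    // to scan this class for test methods.
    [TestFixture]
    public class AccountTests
    {
        // [Test] methods go here.
    }
}
```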

The only restriction on a class that uses the TestFixture attribute is that it must have a public default constructor (or no constructor at all, which amounts to the same thing).

Test Attribute

The Test attribute is used to indicate that a method within a test fixture should be run by the Test Runner application. The method must be public, return void, and take no parameters or it will not be shown in the Test Runner GUI and will not be run when the Test Fixture is run. The following code illustrates the use of this attribute:
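A sketch of a test method satisfying those requirements (the fixture and method names are illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
public class MathTests
{
    // Public, void return, no parameters --
    // otherwise the Test Runner will not pick it up.
    [Test]
    public void TestAddition()
    {
        Assertion.AssertEquals(4, 2 + 2);
    }
}
```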

SetUp & TearDown Attributes

Sometimes when you are putting together Unit Tests, you have to do a number of things before or after each test. You could create a private method and call it from each and every test method, or you could just use the SetUp and TearDown attributes. These attributes indicate that a method should be executed before (SetUp) or after (TearDown) every test method in the Test Fixture. The most common use for these attributes is when you need to create dependent objects (e.g., database connections, etc.). This example shows the usage of these attributes:
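A self-contained sketch using an ArrayList as the shared object (a real fixture would more likely set up a database connection):

```csharp
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class ListTests
{
    private ArrayList list;

    // Runs before every [Test] method in this fixture.
    [SetUp]
    public void Init()
    {
        list = new ArrayList();
        list.Add("first");
    }

    // Runs after every [Test] method, even when the test fails.
    [TearDown]
    public void Cleanup()
    {
        list.Clear();
    }

    [Test]
    public void TestCount()
    {
        Assertion.AssertEquals(1, list.Count);
    }

    [Test]
    public void TestAdd()
    {
        list.Add("second");
        Assertion.AssertEquals(2, list.Count);
    }
}
```

Each test gets a freshly initialized list, so the two tests cannot interfere with each other.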

ExpectedException Attribute

It is also not uncommon to have a situation where you actually want to ensure that an exception occurs. You could, of course, create a big try..catch statement to set a boolean, but that is a bit hack-ish. Instead, you should use the ExpectedException attribute, as shown in the following example:
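A minimal example using a misused enumerator, which the .NET class library documents as throwing InvalidOperationException (the fixture name is illustrative):

```csharp
using System;
using System.Collections;
using NUnit.Framework;

[TestFixture]
public class ExceptionTests
{
    // This test passes only if an InvalidOperationException is thrown.
    [Test]
    [ExpectedException(typeof(InvalidOperationException))]
    public void TestEnumeratorMisuse()
    {
        IEnumerator e = new ArrayList().GetEnumerator();
        object current = e.Current;   // reading Current before MoveNext throws
    }
}
```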

When this code runs, the test will pass only if an exception of type InvalidOperationException is thrown. You can stack these attributes if you need to expect more than one kind of exception, but you should probably avoid that when possible. A test should test only one thing. Also, be aware that this attribute is not aware of inheritance. In other words, if in the example above the code had thrown an exception that derived from InvalidOperationException, the test would have failed. You must be very explicit when you use this attribute.

Ignore Attribute

You probably won't use this attribute very often, but when you need it, you'll be glad it's there. If you need to indicate that a test should not be run, use the Ignore attribute as follows:
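A sketch of its usage (the reason string and method name are illustrative):

```csharp
using NUnit.Framework;

[TestFixture]
public class AccountTests
{
    // Reported as ignored in the runner output rather than silently skipped.
    [Test]
    [Ignore("Transfers are not implemented yet.")]
    public void TestTransfer()
    {
    }
}
```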

If you feel the need to temporarily comment out a test, use this attribute instead. It lets you keep the test in your arsenal, and the test runner output will continually remind you that it's there.

The NUnit Assertion Class

In addition to the attributes used to identify the tests in your code, NUnit also provides a very important class you need to know about. The Assertion class provides a variety of static methods you can use in your test methods to actually test that what has happened is what you wanted to happen. The following sample shows what I mean:
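A trivial sketch exercising a few of the Assertion methods (the fixture name is illustrative; see the NUnit 2.0 documentation for the full list):

```csharp
using NUnit.Framework;

[TestFixture]
public class AssertionSamples
{
    [Test]
    public void TestConcatenation()
    {
        string s = "Hello, " + "world";

        Assertion.AssertEquals("Hello, world", s);        // compare values
        Assertion.Assert("wrong length", s.Length == 12); // boolean check
        Assertion.AssertNotNull(s);                       // null check
    }
}
```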

(I know that isn't the most relevant bit of code, but it shows what I mean.)

Running Your Tests

Now that we have covered the basics of the code, let's talk about how to run your tests. It is really quite simple. NUnit comes with two different Test Runner applications: a Windows GUI app and a console app that produces XML output. Which one you use is a matter of personal preference. To use the GUI app, just run the application and tell it where your test assembly resides. The test assembly is the class library (or executable) that contains the Test Fixtures. The app will then show you a graphical view of each class and test that is in that assembly. To run the entire suite of tests, simply click the Run button. If you want to run only one Test Fixture or even just a single test, you can double-click it in the tree. The following screenshot shows the GUI app:

There are situations, particularly when you want to have an automated build script run your tests, when the GUI app isn't appropriate. In these automated builds, you typically want to have the output posted to a website or another place where it can be publicly reviewed by the development team, management or the customer. The NUnit 2.0 console application takes the assembly as a command-line argument and produces XML output. You can then use XSLT or CSS to convert the XML into HTML or any other format. For more information about the console application, check out the NUnit documentation.

Doing Test-Driven Development

So now you know how to write unit tests, right? Unfortunately just like programming, knowing the syntax isn't enough. You need to have a toolbox of skills and techniques before you can build professional software systems. Here are a few techniques that you can use to get you started. Remember, however, that these tools are just a start. To really improve your unit testing skills, you must practice, practice, practice. If you are unfamiliar with TDD, what I'm about to say may sound a little strange to you. A lot of people have spent a lot of time telling us that we should carefully design our classes, code them up and then test them. What I'm going to suggest is a completely different approach. Instead of designing a module, then coding it and then testing it, you turn the process around and do the testing first. To put it another way, you don't write a single line of production code until you have a test that fails. The typical programming sequence is something like this:

1. Write a test.

2. Run the test. It fails to compile because the code you're trying to test doesn't even exist yet! (This is the same thing as failing.)

3. Write a bare-bones stub to make the test compile.

4. Run the test. It should fail. (If it doesn't, then the test wasn't very good.)

5. Implement the code to make the test pass.

6. Run the test. It should pass. (If it doesn't, back up one step and try again.)

7. Start over with a new test!

While you are doing step #5, you create your code using a process called Coding by Intention. When you practice Coding by Intention, you write your code top-down instead of bottom-up. Instead of thinking, "I'm going to need this class with these methods," you just write the code that you want... before the class you need actually exists. If you try to compile your code, it fails because the compiler can't find the missing class. This is a good thing, because as I said above, failing to compile counts as a failing test. What you are doing here is expressing the intention of the code that you are writing. Not only does this help produce well-tested code, it also results in code that is easier to read, easier to debug and has a better design.

In traditional software development, tests were thought to verify that an existing bit of code was written correctly. When you do TDD, however, your tests are used to define the behavior of a class before you write it. I won't suggest that this is easier than the old ways, but in my experience it is vastly better. If you have read about Extreme Programming, then this is primarily a review. However, if this is new to you, here is a sample. Suppose the application that I'm writing has to allow the user to make a deposit in a bank account. Before creating a BankAccount class, I create a class in my testing library called BankAccountTests. The first thing I need my bank account class to do is be able to take a deposit and show the correct balance. So I write the following code:
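The original listing isn't shown; a test consistent with the surrounding description (a TestDeposit method expecting a balance of 150) might look like this:

```csharp
using NUnit.Framework;

[TestFixture]
public class BankAccountTests
{
    // Two deposits should add up to the expected balance.
    [Test]
    public void TestDeposit()
    {
        BankAccount account = new BankAccount();
        account.Deposit(50m);
        account.Deposit(100m);
        Assertion.AssertEquals(150m, account.Balance);
    }
}
```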

Once this is written, I compile my code. It fails, of course, because the BankAccount class doesn't exist. This illustrates the primary principle of Test-Driven Development: don't write any code unless you have a test that fails. Remember, when your test code won't compile, that counts as a failing test. Now I create my BankAccount class and I write just enough code to make the tests compile:
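A bare-bones stub along these lines would be just enough to compile while still failing the test:

```csharp
public class BankAccount
{
    // Just enough to compile -- the test should still fail.
    public void Deposit(decimal amount)
    {
    }

    public decimal Balance
    {
        get { return 0m; }
    }
}
```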

This time everything compiles just fine, so I go ahead and run the test. My test fails with the message "TestDeposit: expected: <150> but was <0>". So the next thing we do is write just enough code to make this test pass:
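A minimal implementation that makes the test pass might be:

```csharp
public class BankAccount
{
    private decimal balance = 0m;

    // Accumulate deposits into the running balance.
    public void Deposit(decimal amount)
    {
        balance += amount;
    }

    public decimal Balance
    {
        get { return balance; }
    }
}
```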

Using Mock Objects - DotNetMock

One of the biggest challenges you will face when writing unit tests is to make sure that each test is only testing one thing. It is very common to have a situation where the method you are testing uses other objects to do its work. If you write a test for this method, you end up testing not only the code in that method, but also code in the other classes. This is a problem. Instead, we use mock objects to ensure that we are only testing the code we intend to test. A mock object emulates a real class and helps test expectations about how that class is used. Most importantly, mock objects are:

Easy to make

Easy to set up

Fast

Deterministic (produce predictable results)

Allow easy verification that the correct calls were made, perhaps in the right order

The following example shows a typical mock object usage scenario. Notice that the test code is clean, easy to understand and not dependent on anything except the code being tested.
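The original listing is not shown; the scenario described below might be sketched as follows. ModelClass, IDatabase and the expectation members are illustrative, and the expectation API follows the general DotNetMock pattern (consult its documentation for exact class and member names):

```csharp
using NUnit.Framework;
using DotNetMock;

// Hypothetical collaborator interface and class under test.
public interface IDatabase
{
    void Update(string row);
}

public class ModelClass
{
    private IDatabase db;

    public ModelClass(IDatabase db) { this.db = db; }

    // Saving writes two rows, so Update should be called twice.
    public void Save()
    {
        db.Update("header");
        db.Update("detail");
    }
}

// The mock counts calls to Update instead of touching a real database.
public class MockDatabase : MockObject, IDatabase
{
    private ExpectationCounter updateCalls =
        new ExpectationCounter("Update calls");

    public void SetExpectedUpdateCalls(int count)
    {
        updateCalls.Expected = count;
    }

    public void Update(string row)
    {
        updateCalls.Inc();
    }
}

[TestFixture]
public class ModelClassTests
{
    [Test]
    public void TestSave()
    {
        MockDatabase db = new MockDatabase();
        db.SetExpectedUpdateCalls(2);

        new ModelClass(db).Save();

        db.Verify();   // fails unless Update was called exactly twice
    }
}
```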

As you can see, the MockDatabase was easy to set up and allowed us to confirm that the Save method made certain calls on it. Also notice that the mock object prevents us from having to worry about real databases. We know that when the ModelClass saves itself, it should call the database's Update method twice. So we tell the MockDatabase to expect two Update calls, call Save, and then confirm that what we expected really happened. Because the MockDatabase doesn't really connect to a database, we don't have to worry about keeping "test data" around. We only test that the Save code causes two updates.

Testing the Business Layer

Testing the business layer is where most developers feel comfortable talking about unit testing. Well-designed business layer classes are loosely coupled and highly cohesive. In practical terms, coupling describes the level of dependence that classes have on one another. If a class is loosely coupled, then making a change to one class should not have an impact on another class. On the other hand, a highly cohesive class is a class that does the one thing it was designed to do and nothing else. If you create your business layer class library so that the classes are loosely coupled and highly cohesive, then creating useful unit tests is easy. You should be able to create a single test class for each business class. You should be able to test each of its public methods with a minimal amount of effort. By the way, if you are having a difficult time creating unit tests for your business layer classes, then you probably need to do some significant refactoring. Of course, if you have been writing your tests first, you shouldn't have this problem.

Testing the User Interface

When you start to write the user interface for your application, a number of different problems arise. Although you can create user interface classes that are loosely coupled with respect to other classes, a user interface class is by definition highly coupled to the user! So how can we create an automated unit test to test this? The answer is that we separate the logic of our user interface from the actual presentation of the view. Various patterns exist in the literature under a variety of different names: Model-View-Controller, Model-View-Presenter, Doc-View, etc. The creators of these patterns recognized that decoupling the logic of what the view does (i.e., the controller) from the view itself is a Good Thing.

So how do we use this? The technique I use comes from Michael Feathers' paper The Humble Dialog Box. The idea is to make the view class support a simple interface used for getting and setting the values displayed by the view. There is basically no code in the view except for code related to the painting of the screen. The event handlers in the view for each interactive user interface element (e.g., a button) contain nothing more than a pass-thru to a method in the controller. The best way to illustrate this concept is with an example. Assume our application needs a screen that asks the user for their name and social security number. Both fields are required, so we need to make sure that a name is entered and the SSN has the correct format. Since we are writing our unit tests first, we write the following test:
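The original test listing is not reproduced; a version consistent with the discussion that follows (two tests, TestSuccessful and TestFailed, with the field values and error message being illustrative) might look like this:

```csharp
using NUnit.Framework;

[TestFixture]
public class VitalsControllerTests
{
    [Test]
    public void TestSuccessful()
    {
        MockVitalsView view = new MockVitalsView();
        view.Name = "John Doe";
        view.SSN = "123-45-6789";

        VitalsController controller = new VitalsController(view);
        Assertion.Assert("valid input should be accepted", controller.OnOk());
        view.Verify();
    }

    [Test]
    public void TestFailed()
    {
        MockVitalsView view = new MockVitalsView();
        view.Name = "";                  // missing name should be rejected
        view.SSN = "123-45-6789";
        view.SetExpectedErrorMessage("Name is required.");

        VitalsController controller = new VitalsController(view);
        Assertion.Assert("a missing name should be rejected",
                         !controller.OnOk());
        view.Verify();
    }
}
```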

When we build this we receive a lot of compiler errors because we don't have either a MockVitalsView or a VitalsController. So let's write skeletons of those classes. Remember, we only want to write enough to make this code compile.
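Skeletons along these lines would be just enough to compile (member names are illustrative):

```csharp
using System;

// Just enough to make the test assembly compile.
public class MockVitalsView
{
    public string Name;
    public string SSN;
    public string ErrorMessage;

    public void SetExpectedErrorMessage(string message)
    {
    }

    public void Verify()
    {
        // Not implemented yet -- any test that calls Verify fails here.
        throw new NotImplementedException("Verify is not implemented.");
    }
}

public class VitalsController
{
    public VitalsController(MockVitalsView view)
    {
    }

    public bool OnOk()
    {
        return false;   // stub -- makes TestSuccessful fail
    }
}
```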

Now our test assembly compiles and when we run the tests, the test runner reports two failures. The first occurs when TestSuccessful calls controller.OnOk, because the result is false rather than the expected true value. The second failure occurs when TestFailed calls view.Verify. Continuing on with our test-first paradigm, we now need to make these tests pass. It is relatively simple to make TestSuccessful pass, but to make TestFailed pass, we actually have to write some real code, such as:
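The real code might look like the following sketch. The validation messages and regular expression are illustrative, and the ExpectationString usage follows the general DotNetMock pattern (exact member names may differ between versions):

```csharp
using System.Text.RegularExpressions;
using DotNetMock;

// The mock now derives from DotNetMock.MockObject, which supplies Verify.
public class MockVitalsView : MockObject
{
    public string Name;
    public string SSN;

    // Expectation object backing the ErrorMessage property.
    private ExpectationString errorMessage =
        new ExpectationString("ErrorMessage");

    public void SetExpectedErrorMessage(string message)
    {
        errorMessage.Expected = message;
    }

    public string ErrorMessage
    {
        get { return errorMessage.Actual; }
        set { errorMessage.Actual = value; }
    }
}

public class VitalsController
{
    private MockVitalsView view;

    public VitalsController(MockVitalsView view)
    {
        this.view = view;
    }

    // Validate the view's input; report errors through the view.
    public bool OnOk()
    {
        if (view.Name == null || view.Name.Length == 0)
        {
            view.ErrorMessage = "Name is required.";
            return false;
        }
        if (view.SSN == null ||
            !Regex.IsMatch(view.SSN, @"^\d{3}-\d{2}-\d{4}$"))
        {
            view.ErrorMessage = "SSN must have the form 999-99-9999.";
            return false;
        }
        return true;
    }
}
```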

Let's briefly review this code before proceeding. The first thing to notice is that we haven't changed the tests at all (which is why I didn't even bother to show them). We did, however, make significant changes to both MockVitalsView and VitalsController. Let's begin with the MockVitalsView. In our previous example, MockVitalsView didn't derive from any base class. To make our lives easier, we changed it to derive from DotNetMock.MockObject. The MockObject class gives us a stock implementation of Verify that does all the work for us. It does this by using expectation classes through which we indicate what we expect to happen to our mock object. In this case our tests are expecting specific values for the ErrorMessage property. This property is a string, so we add an ExpectationString member to our mock object. Then we implement the SetExpectedErrorMessage method and the ErrorMessage property to use this object. When we call Verify in our test code, the MockObject base class will check this expectation and identify anything that doesn't happen as expected. Pretty cool, eh? The other class that changed was our VitalsController class. Because this is where all the working code resides, we expected there to be quite a few changes here. Basically, we implemented the core logic of the view in the OnOk method. We use the accessor methods defined in the view to read the input values, and if an error occurs, we use the ErrorMessage property to write out an appropriate message. So we're done, right? Not quite. At this point, all we have is a working test of a controller using a mock view. We don't have anything to show the customer! What we need to do is use this controller with a real implementation of a view. How do we do that? The first thing we need to do is extract an interface from MockVitalsView. A quick look at VitalsController and VitalsControllerTests shows us that the following interface will work.
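The extracted interface might look like this: getters for the input values the controller reads and a setter for the error message it writes (the mock's public fields would become properties to satisfy it):

```csharp
// Extracted from MockVitalsView: just the members VitalsController uses.
public interface IVitalsView
{
    string Name { get; }
    string SSN { get; }
    string ErrorMessage { set; }
}
```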

After creating the new interface, we change all references to MockVitalsView to IVitalsView in the controller and we add IVitalsView to the inheritance chain of MockVitalsView. And, of course, after performing this refactoring job we run our tests again. Assuming everything is fine, we can create our new view. For this example I will be creating an ASP.NET page to act as the view, but you could just as easily create a Windows Form. Here is the .ASPX file:
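The original .ASPX listing is not reproduced here; a code-behind sketch of such a page might look like this (the page class, control names and wiring are illustrative, and the markup is assumed to declare txtName, txtSSN, lblError and btnOk controls):

```csharp
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

// Code-behind for a hypothetical Vitals.aspx page.
public class VitalsPage : Page, IVitalsView
{
    protected TextBox txtName;
    protected TextBox txtSSN;
    protected Label lblError;
    protected Button btnOk;

    private VitalsController controller;

    private void Page_Load(object sender, EventArgs e)
    {
        controller = new VitalsController(this);
        btnOk.Click += new EventHandler(BtnOk_Click);
    }

    // IVitalsView implementation -- nothing but pass-throughs to controls.
    public string Name { get { return txtName.Text; } }
    public string SSN  { get { return txtSSN.Text; } }

    public string ErrorMessage
    {
        set { lblError.Text = value; }
    }

    // The event handler does nothing but delegate to the controller.
    private void BtnOk_Click(object sender, EventArgs e)
    {
        if (controller.OnOk())
        {
            // proceed: save, redirect to the next page, etc.
        }
    }
}
```

Note that the controller shown earlier takes a MockVitalsView; for this page to compile, its constructor would be changed to accept an IVitalsView, which is exactly the refactoring described above.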

As you can see, the only code in the view is code that ties the IVitalsView interface to the ASP.NET Web Controls and a couple of lines to create the controller and call its methods. Views like this are easy to implement. Also, because all of the real code is in the controller, we can feel confident that we are rigorously testing our code.

Conclusion

Test-driven development is a powerful technique that you can use today to improve the quality of your code. It forces you to think carefully about the design of your code, and it ensures that all of your code is tested. Without adequate unit tests, refactoring existing code is next to impossible. Very few people who take the plunge into TDD later decide to stop doing it. Try it and you will see that it is a Good Thing.

Comments and Discussions

The short answer is yes, you can. Our code is tested in exactly this way - the tests themselves are written in managed C++ linked against the code under test (DLL, exe or build into the harness - it doesn't matter).

However, you do need to pay very careful attention to build settings and dependencies - particularly the runtime library import and link order.

The cpp-sample sample project in the NUnit distribution should give you a starting point.

I found your article when I was looking for info on how to do unit testing on UI classes. (Sorry, I haven't read it all yet, but I am sure it is a good article.)

I think you are completely off-track with the discussion of MVC (model-view-controller), etc. The essence of MVC is that the view handles output, the controller handles input, and the model handles the internal representation. (The MFC "document-view" is similar but combines the controller into the view.) MVC is principally an example of the "Observer" pattern as described by Gamma et al. The main advantage is that the model need know nothing of the number or type of its view(s). Similarly the view need know nothing of its controller(s).

Contrary to what many people believe, MVC is not about making software more portable by decoupling the user interface from the internals (though that may be a worthwhile goal). For example, a view can have a great deal of "non-UI" code related to how it displays the model and there is nothing to say that the model cannot contain user interface code.

As an example, a spreadsheet program could use one model (document) class which may have several dependent view classes such as tables, graphs etc. In this case it is obvious that a graph view class would be a lot more involved than just handling UI stuff. On the other hand the model could include UI stuff in the form of a properties dialog, which should not be implemented as a separate type of view since there should only be one occurrence of it.

Anyway my main point is that your idea to separate the user interface is a good idea for unit testing. But if you are using the MVC analogy you should use the name "model" where you use "controller". But this is really nothing like MVC since if anything the "model" is dependent on the "view", whereas in MVC the view(s) are dependent on the model while the model is completely independent of any view.

Yes, you are right. This isn't really about MVC. In fact, the pattern I used here is more properly called MVP. As you point out, in MVC, the View is an observer on the model. But in MVP the presenter actively updates the model, which is what I did here.

The Methods & Tools newsletter has just released in its archive section the article "Improving Application Quality Using Test-Driven Development (TDD)". This article provides an introduction to Test-Driven Development with concrete examples using NUnit.

I need to mock a class that implements multiple interfaces. Does DotNetMock have direct support for this? The way I have been doing it is to create an abstract class that implements the necessary interfaces. I then mock this abstract class. It also seems to me that you cannot mock a class whose methods are final. In other words, you can mock only interfaces and abstract classes.

Do you have a good solution on how to persist the state of the "Controller" object in the ASP.NET context? Session is kind of tricky, it must be cleared at the correct moment. Serializing the controller to ViewState is insecure and serialization can be hard.

Any ideas? Best practices? Should the controller object be stateless, i.e. no variables except for the view object?

I found this article incredibly useful and insightful. I have noticed that even in the past I have done something akin to test driven programming by creating stub classes and filling them with code that calls other objects not defined yet. However, I find that a BOTTOM-UP programming methodology also serves me well at times. So I am looking for the perfect blend.

Alright, now that I dispensed with the intro, here are my questions:

1) If one were to follow the TDD methodology with the Mockup objects, it seems to me that you insert real code bit by bit. As such, you end up, in the end, with real code everywhere and no mockup code in sight. Correct? Now the question is as follows: How can you preserve this mockup "infrastructure" so that you can hence preserve the tests, so that if you later on introduce new functionality or change things around, you can run all the tests again? Perhaps the answer is obvious, but I don't really see it [yes, I'm new to TDD].

2) Why is it really bad to call a test on a public method that in turn invokes an object that itself has tests assigned to it?

3) Why only test public methods without parameters and that return void? There are plenty of public methods that don't have this signature. Is the answer therefore that we need to test at a higher-up level, or at least for the test, encompass methods with params and return values in a public method that takes no params and returns void? If that's the case then, we will be in fact creating test case classes that will never be replaced, and will stay in the project as just that, test classes. Is this how one proceeds? Create perhaps a separate .cs file and stick your top-level, test case classes in there?

4) Whatever happened to Design By Contract (DBC)? It seems extremely useful to me. In all honesty, it seems a blend of TDD and DBC is the way to go. But I may be wrong. I can see that DBC can be useful especially for the pre-conditions in methods that take parameters. I mean as coders, we already must do some sort of checking on the params to make sure they're within the range of acceptable values, so in reality, we are partially coding according to the DBC standards. What are your thoughts on this?

I want to get used to using unit tests as part of test driven development, but am a little confused on how to handle private methods. Since the private methods act internally and are not designed for use by the 'outside world' external to that class, should there be specific unit tests for these (how would I implement these)?
Or are the tests for private methods implicit in the tests for the public methods that call them?

There are those of us, me included, who think that testing private methods is counterproductive. One of the whole points of TDD is to provide support for refactoring. Remember that refactoring is the redesign of implementation without changing functionality. So if you have tests that are exercising the internal methods, they will actually get in the way of aggressive refactoring. I also have the opinion that good tests of the public methods should exercise the private methods as well.

Now there are also people who like the idea of testing private methods. In fact, the next version of Visual Studio .NET will include unit testing in some versions. That implementation (not NUnit) provides an interesting little class called PrivateObject that can be used to test private methods.

So there isn't really an answer that is "right". Just opinions. Give it some thought and make your own decision. For me testing internal methods doesn't provide enough value to outweigh the cost.

I think testing private methods is not really necessary.
Compare it to daily life: if I give my son a shopping-list, and he returns the right purchases, he did his job well. The test function only compares the shopping-list with the collection of purchases.
Maybe I was not satisfied with the time he spent to get the purchases. In that case I have to refactor some private methods. "Well done, boy! Next time you'd rather take your bike instead of walking."
Next time I repeat the test to see if he still is able to get the right things from the shop. The test suite remains unchanged!

Only if performance is part of the requirements (shopping must be ready within 30 minutes), I add a test to my test suite to measure the time, and the test will fail if it exceeds 30 minutes.

I agree that in most cases, testing private methods is not necessary, but there are times that it is necessary.

Here is one example:

We have a class that represents a perfect binary tree. The APIs of this class are Add(item, key) and Get(key). But it is impossible to test that the tree is actually 'perfect', i.e. that the two subtrees of each node have the same depth ±1. We have to check the private method that keeps the tree perfect or verify that the private implementation of the tree is perfect.

Another case is if we test a class that has a static field firstTime that changes the class's behavior. To have a few tests of the class running the first time, we have to reset the firstTime field back to its original state.

We are using the TypeMock framework that can mock concrete classes. This framework has an ObjectState class that does exactly this - resets all the class's fields back to their original state.

If you're a manager or developer of a mission-critical app then you probably want to guarantee 100% code coverage rather than "assume" that all of your tests on public methods will hit all of your private methods. So code coverage should be part of your "religion." I know that some in the Java TDD community have adopted Clover http://www.cenqua.com/clover for this purpose. I think it would help if anyone knows of products, open source or otherwise, that provide code coverage measurements on the .Net platform ... not to advocate any vendor, but to at least know what's out there, for which people are finding some success.

Obviously there are some high-end products that provide coverage testing (like Mercury, I believe) but it would be nice to know about some low/mid-range alternatives.

So far the debate on private vs. public testing has focused on the "public" being the entire outside universe and "private" being only within a class. But let me pose an alternative view. Suppose that the universe was reduced to that of an assembly.

I recently started working again on a pet project that I wrote years ago, and in it I have some internal classes. These are supporting classes for the rest of the assembly. For one reason or another I chose not to make them publicly accessible to the world.

Although they are not public to the world, they are public in the context of the assembly. Any class and method and property can access those internal classes, but are they testable?

I like to place the unit tests in a separate project. I would like to be able to test those internal support classes, but I cannot without some work with reflection.

When I compiled it just like that, I got an assembly reference error for Test and TestFixture. I added using NUnit.Framework; When I compiled then, I got assembly reference errors for db, ModelClass, etc...

Should my DotNetMock assembly reference include the TestFixture and Test capabilities? If not, do i need NUnit.Framework? And if so, why does that all of a sudden give me assembly reference errors?

I am struggling with the best way to inject mock objects into a test. Here is the situation in a contrived example (the class names have been changed to protect the irrelevant):

I want a class, say VehicleBuilder, to implement a CreateVehicle() method that will return instances from one of several polymorphic IVehicle-implementing classes. These vehicle objects aggregate several other objects such as instances of Wheel and Engine classes. I want to test the Vehicle class using mock-wheels (and mock-engines, ...) and I would like to do this in a way that keeps VehicleBuilder and IVehicle classes from knowing anything about mock-wheels or any other testing code. I am OK with having Wheel implement IWheel which the mock-wheel also implements and so on for Engine/IEngine and other parts.

My question is, what is the best way to have the VehicleBuilder create mock-parts instead of real parts. Normally, I would have the VehicleBuilder create and hide its own VehicleFactory.

One solution is to pass a class implementing IVehicleFactory into the VehicleBuilder() constructor. In a similar vein, I suppose that I could come up with a registration mechanism for adding prototype Vehicles to the VehicleBuilder one-by-one. Then, when testing, I would add mock objects in addition to or instead of real objects.

I welcome any thoughts on tradeoffs of these or other implementations.

Remember that in unit testing our goal is to test classes in isolation. So when you are testing the VehicleBuilder, your goal is to confirm that given a certain set of preconditions, it will create the appropriate IVehicle derived class. You should test all of those preconditions and postconditions as part of a unit test for VehicleBuilder.

When you want to test the Vehicle class itself, there is no reason to generate it with a VehicleBuilder. Especially if you want it built in a special way (e.g. with mock wheels and engines).

Granted we are discussing a contrived example, but my suggestion would be to have a VehicleBuilder which you test independently as I described above. Then have a MockVehicleBuilder in your test harness that produces the Vehicles that have mock components.

Thanks for the reply. I think that you have answered one of my questions about the benefits of separating factories from the builders using them.

Let me introduce some more information on our 'real' system which involves hardware that might be connected to a serial (or other) port on a computer. To complicate matters, a given hardware-connection ('vehicle') consists of up to 3 different linked devices ('parts'), any of which can be connected, working, not-working, timing out, etc. We need to make sure that we test as many of these normal and abnormal situations as possible. This is a dynamic system where device states change asynchronously and often.

I have in my head the use of mock-objects to test some of the basic states of the hardware such as on/off, plugged/unplugged, communicating badly/well, etc. For our automated unit and integration tests, we really need to be able to turn on and off some of these different hardware states (often in the middle of an operation) even when we are eventually testing much higher up business logic. If we are using mock objects, it would seem at first glance to be efficient to inject the appropriate hardware mock objects into the test system, even if several layers down. On the other hand, it may be best to generate mock objects for each layer whose state depends on any hardware state or behaviour.

As you can see, I'm still trying to get my head around several aspects of TDD and mock-objects.

Remember that a mock object isn't really anything special. Sure you can use the DotNetMock framework or any other to get you some supporting framework, but the basic idea is to pass a control object to another object in order to test it.

Think of it like the scientific method. In science, you would try to set all of the variables in your experiment to control values and allow one of them to change. Then you can see what happens when you adjust that one variable.

The same principle is at play here.

So for your example, I would consider that the business objects expect to see an IDevice interface passed to them. That interface would define methods/properties like "IsOn" and "IsPlugged" so that the business logic can query them for state.

Then to test the business object, you pass a mock object (either derived from MockObject or not) to it in order to see that it (the business object) behaves correctly. The only reason to use MockObject as your base class is if you want to check whether the methods/properties on IDevice are actually called from the business object. If you don't need to test that, then just use a stub object instead of a full-blown mock.