187: Testing Exceptions

One thing you really don’t want to see in a production Rails application is a 500 error. These usually appear when your application’s code has raised an exception.

If your application’s users are seeing errors like this you should be notified as soon as possible so that the errors can be fixed. This episode will cover the process to follow when fixing them.

Notification Options

There are several solutions that will notify us when our Rails applications raise a 500 error, and we’ll briefly discuss four of them here.

The first one on our list is the classic exception_notification plugin. This is a fairly basic solution but it works well. Once installed and configured it will send an email whenever an error is raised by our application. If the app throws errors often then the emails will pile up in our inbox, but that’s an extra incentive to get those errors fixed.

The next option is exception_logger, which was covered back in episode 104. This differs from exception_notification in that instead of sending out emails it records exceptions in a database table and displays them through the user interface.

The two other options are commercial solutions. The first is Hoptoad, which stores exceptions on its own servers and lets us browse them through a slick user interface.

The final option is Exceptional, which is another nice solution that you might want to try.

Debugging Our Application

So, we have exception_notification installed in our application and have started getting notifications from it. The first one we’ll look at is this:

Looking at this notification it appears that we have mis-named a method on line 6 of our Product model. This looks like a fairly simple error to fix so we could dive straight in and correct it, but before we do we should write a failing test. If you’re not currently testing your applications you might be thinking that this is going to be another testing episode and be considering not reading any further, but bear with it a little longer. Even if you’re not testing your applications you should still be writing integration tests to cover these errors. As we’ll show shortly, it’s not difficult to write integration tests, and integration testing is the type of testing that definitely gives you the most bang for your buck.

Even if you are using Test-Driven Development you should be writing integration tests to fix these issues. This exception has managed to slip through the cracks of the testing we have done to get this application ready for production so we have obviously found an area of the code that isn’t tested well enough.

The first thing we need to do is write a failing integration test that covers the area of the exception. We’re going to use Rails’ built-in integration testing, but we could use any type of integration test, Cucumber for example, to do this.

We can generate the file for our integration test, which we’ll call exceptions, by running the following command.

script/generate integration_test exceptions

This gives us a single place in which we can put all of the tests that we’ll write to cover the exceptions that are being raised. We’ll replace the default test in the file with a test that covers our exception.

We’re using a convention for our tests’ names that uses a combination of the request’s HTTP method and the URL that we’re requesting. As this exception happens when we create a new product we’ll be posting to /products. If we look at the request details in the notification above we’ll see that there were a number of parameters passed and we’ll need to pass some of these too in order to simulate the request in our test, although we can remove the controller, action and authenticity_token parameters.

When a product is created the response will either be a 302 (redirect) or a 200 (success). In this case the attempt to create a product will throw a validation error so we should get a 200 response. Therefore our test asserts that the response was :success.
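Following that naming convention, the test might look something like this. The product parameters below are illustrative assumptions; the real values would be copied from the request details in the exception notification, minus the controller, action and authenticity_token parameters.

```ruby
# test/integration/exceptions_test.rb (sketch; parameter values are assumed)
test "POST /products" do
  # Post the same parameters the failing production request sent,
  # here stubbed with made-up values that fail validation.
  post "/products", :product => { :name => "Rubik's Cube", :price => "oops" }
  # A validation failure should re-render the form with a 200 response.
  assert_response :success
end
```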

This shows that we’re getting the same error that we saw in the exception notification: a 500 error about an undefined add_to method. We now have a test that successfully duplicates our exception.

By default there is no stack trace shown when an integration test fails. In the comments for the Railscast on which this is based, however, is a nice piece of code that enables this. With this in place we get more information about the error and where it happens.
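One common Rails 2-era approach, sketched below, is to override the controller’s rescue_action in the test helper so that exceptions propagate to the test runner instead of being rendered as a 500 page. This is an assumption about the technique, not necessarily the exact snippet from the Railscast comments.

```ruby
# test/test_helper.rb (sketch, assuming Rails 2.x)
class ActionController::Base
  # Re-raise instead of rendering an error page, so failing
  # integration tests print the original exception and backtrace.
  def rescue_action(exception)
    raise exception
  end
end
```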

This should, of course, be add instead of add_to so we could make that change and run our test again. If we were unit testing our application, and of course we should be, then we would write a failing unit test to cover this specific functionality before making the fix itself.
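Since the notification only tells us that add_to is undefined on line 6 of the Product model, the fix would look something like the following sketch. The validation itself is hypothetical; only the add_to → add rename comes from the error.

```ruby
# app/models/product.rb (hypothetical validation; only the
# add_to -> add rename is taken from the exception notification)
class Product < ActiveRecord::Base
  validate :price_must_be_positive

  private

  def price_must_be_positive
    # Was: errors.add_to :price, "..."  -- add_to is not a method
    # on the errors collection; errors.add is.
    errors.add :price, "must be greater than zero" if price.nil? || price <= 0
  end
end
```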

As before we’ll start by writing an integration test that covers this exception. As the failing request this time is a GET rather than a POST we don’t need to pass any parameters; we just make a request to the URL and check that the response is a 200.

test "GET /products/8/edit" do
  get "/products/8/edit"
  assert_response :success
end

When we run the integration tests now the test fails, but not with the error we were expecting: instead of a 500 error we get a 404.

1) Failure:
test_GET_/products/8/edit(ExceptionsTest) [/test/integration/exceptions_test.rb:26]:
Expected response to be a <:success>, but was <404>

The reason for this is that we don’t have a product with an id of 8 in our test database. This is one of the tricky parts of writing integration tests: sometimes the test depends on the application being in a certain state. In these cases the test will have to recreate that state to be a true reflection of the code we’re testing. This can mean that session variables or database records might need to be created to get the application in the correct state.

For this particular test we need to have an existing product. We could use Factories (as we demonstrated back in episode 158) or make use of fixtures. We have fixtures set up in our app so for simplicity’s sake we’ll use them and change the test so that it creates a product before calling the URL for that product’s edit page.
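With fixtures in place, the updated test might look like the sketch below. The fixture name :one is an assumption; use whatever label your products fixtures file actually defines.

```ruby
# test/integration/exceptions_test.rb (sketch)
test "GET /products/:id/edit" do
  product = products(:one) # assumes a product fixture labelled :one
  get "/products/#{product.id}/edit"
  assert_response :success
end
```

Building the record through a fixture (or a factory) means the test no longer depends on a hard-coded id existing in the test database.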

If we fix this and run the tests again they both pass, so we have fixed both of the exceptions in our application by writing integration tests. Once you get into the habit of working this way it becomes an effective way of dealing with exceptions that are raised by your production application. This is definitely not a replacement for full Test-Driven Development, but it is a useful addition to an application’s test arsenal. Finally, if you want to improve your applications’ integration tests then it’s worth taking a look at Webrat, which was covered in episode 156.