The SitePoint Forums have moved.

You can now find them here.
This forum is now closed to new posts, but you can browse existing content.
You can find out more information about the move and how to open a new account (if necessary) here.
If you get stuck you can get support by emailing forums@sitepoint.com


For smaller teams, I find unit tests useful for more complicated algorithms and large libraries, but never for the actual web portion of applications. I find that errors in the web portion (where most PHP programming resides) are few and far between, and all the work put into writing tests is not worth it. But testing whether an algorithm finds every possible permutation of a set of numbers, or whether a parser generates correct and expected output... that I understand.
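For illustration, the kind of algorithmic test described above might look like this. This is a sketch in Python (the function and test names are hypothetical), checking a permutation generator against every expected ordering:

```python
import itertools
import unittest

def permutations(items):
    # Hypothetical algorithm under test: recursively build all orderings.
    if len(items) <= 1:
        return [list(items)]
    result = []
    for i, item in enumerate(items):
        for rest in permutations(items[:i] + items[i + 1:]):
            result.append([item] + rest)
    return result

class PermutationsTest(unittest.TestCase):
    def test_finds_every_permutation_of_three_numbers(self):
        found = permutations([1, 2, 3])
        expected = [list(p) for p in itertools.permutations([1, 2, 3])]
        self.assertEqual(sorted(found), sorted(expected))

    def test_count_is_n_factorial(self):
        self.assertEqual(len(permutations([1, 2, 3, 4])), 24)

if __name__ == "__main__":
    unittest.main(exit=False)
```

This is exactly the case where a test earns its keep: the expected output is precisely enumerable, so the test proves something concrete.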

However, it is kind of useful if someone else writes the unit tests, to check another person's code. It's really the same as a code review, but done in a different (and possibly better) way. Kind of like working a mathematical problem backwards (whereas a code review checks the problem forwards).

The importance of unit testing tends to depend on the size of the team and the skill (and meticulousness) of the people working on the project, IMO. I don't think unit testing is worth the effort in all situations, which is why picking it up sometimes feels so pointless. It's not a be-all and end-all methodology.

I've found that since I've started using MVC, testing is easier: the SQL in the model I can test against MySQL via phpMyAdmin, HTML and CSS I can test via validation, and the majority of PHP errors I get are then confined to the controllers.

To continue this discussion, though: what nobody has proven to me at any point is exactly what unit testing proves, what the evidence is, and how to apply it to fixing a problem.

The problem which test-first coding is intended to solve is the problem of design. Every app breaks down into component parts (classes): what should these components be? Test-first is a formal method for finding your way through unknown terrain in little steps which let you learn about the new domain as you go. You don't need a map: just dive right in and take your best guess. It's not primarily about testing at all, although you get that too.
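Those "little steps" can be sketched concretely. Below is a minimal red-green cycle in Python (class and test names are hypothetical): the test is written first and describes behaviour that doesn't exist yet, then just enough implementation is written to make it pass:

```python
import unittest

# Step 1 (red): the test comes first and describes the design we want.
class CartTest(unittest.TestCase):
    def test_new_cart_is_empty(self):
        self.assertEqual(Cart().total(), 0)

    def test_total_sums_item_prices(self):
        cart = Cart()
        cart.add(price=3)
        cart.add(price=4)
        self.assertEqual(cart.total(), 7)

# Step 2 (green): write just enough implementation to make the test pass.
class Cart:
    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)

if __name__ == "__main__":
    unittest.main(exit=False)
```

The point is that the shape of `Cart` (its methods and their signatures) fell out of writing the test, not out of up-front design.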

And it works. It really does. There's a reason why people talk about being "test-infected". It's the most important thing I ever learned in programming.

Originally Posted by Serenarules

The thing is, when you're done, you aren't really done! Now you have to actually copy your mocks to a real class file and alter the code so that it uses live data. OK, so now we have a different code base, one whose stability and accuracy isn't reflected in the prior tests.

I'm not sure exactly what you have in mind but this sounds like the normal TDD process: a test throws up new responsibilities which don't fit in the tested class and these are assigned to new objects which are mocked in the current test case. When you're done you move on to testing & implementing the mocks. In this way, as you work your way through the app, everything is always verified by the test harness and may not deviate from any specified behaviour. Mock expectations in one test are part of the blueprint for the mocked class itself. When you write unit tests of the mocked class you'd specify these behaviours, and maybe some others too.
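A rough sketch of that workflow in Python (all names hypothetical): the first test mocks a collaborator that doesn't exist yet, and the expectations recorded against the mock become part of the specification for the collaborator's own test case later:

```python
import unittest
from unittest.mock import Mock

# The class under test delegates a new responsibility to a collaborator
# (a "Mailer") which does not exist yet.
class OrderProcessor:
    def __init__(self, mailer):
        self._mailer = mailer

    def process(self, order_id):
        # ...processing logic would live here...
        self._mailer.send_receipt(order_id)

class OrderProcessorTest(unittest.TestCase):
    def test_sends_receipt_after_processing(self):
        mailer = Mock()  # stands in for the not-yet-written Mailer
        OrderProcessor(mailer).process(42)
        # This expectation is part of the blueprint for the real Mailer:
        # when we test and implement Mailer, it must offer send_receipt(id).
        mailer.send_receipt.assert_called_once_with(42)

if __name__ == "__main__":
    unittest.main(exit=False)
```

When you move on to implementing `Mailer`, its unit tests specify `send_receipt` (and whatever internal behaviours the real implementation needs), so nothing deviates from behaviour already pinned down by the harness.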

There's an art to writing good tests. Tests should read like documentation (test-infected programmers don't use comments much) using the language of the domain as much as possible - although you do tend to lose that a bit the deeper you drill down into the internals. The end result is a detailed specification describing each and every component of the app. It is a lot of work, particularly in the beginning when, as always, you will have to go through a period of getting things wrong before you start getting them right. However, there is light at the end of the tunnel. You will gain overall. Writing the implementations which make the tests pass will be much, much easier, the design will be better, and you won't waste so much time debugging.

I don't really see myself as a writer of code any more. I'm a writer of tests and code just sort of happens while I'm testing. In the end, the specification is much more important. If you were to lose all the code but somehow keep all the specifications, it's relatively easy to re-write the app. If you lost all the specifications but kept all the code, you're stuffed. You won't even know what it's supposed to do never mind if it does actually do what it's supposed to do.

Code is complicated stuff and TDD is one of the best tools I know to help tame that complexity. Every once in a blue moon, when I'm under pressure, I try dumping the tests to save time. I always end up regretting it.

Ok, here, the mock repository is coded locally to use the passed-in, list-based pseudo-database. Because it's not hitting a real db, the code for getting the required element is totally different. So that test passes, wonderful. But what has it told me? When I get ready to create the "real" class, the only thing it will have in common is the public interface.
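For concreteness, the list-backed fake being described might look like this (a Python sketch with hypothetical names; the thread's own example is C#):

```python
# A list-backed fake repository: it shares only its public interface
# with a real database-backed implementation.
class Subscription:
    def __init__(self, id, name):
        self.id = id
        self.name = name

class InMemorySubscriptionRepository:
    def __init__(self, subscriptions):
        self._subscriptions = subscriptions  # the passed-in "pseudo-database"

    def get_by_id(self, id):
        # Totally different retrieval code from a SQL-backed version.
        return next(s for s in self._subscriptions if s.id == id)

repo = InMemorySubscriptionRepository([Subscription(1, "basic"),
                                       Subscription(2, "premium")])
assert repo.get_by_id(2).name == "premium"
```

A test against this fake exercises the lookup-in-a-list code, not the SQL the real class will run, which is exactly the objection being raised.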

If your argument goes along these lines...

"of course, that's what this testing is for. You start with nothing, and based on the tests, build your interfaces one method declaration at a time, using mocks to verify that it IS implementable, not that the eventual logic behind it works."

...then I guess I just don't get it. If my spec calls for a Subscription repository that exposes a method that should return an entity by its id, then my interface is already known. I just write:

public interface ISubscriptionRepository
{
    Subscription GetById(int id);
}

Writing a real class around that, and then testing THAT class, against a test db (of the same type as the live db so that the code will be the same), makes a lot more sense.
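A sketch of that approach, using Python and an in-memory SQLite database as a stand-in for "a test db of the same type as the live db" (all names hypothetical): the real, SQL-backed class is exercised against a throwaway database with the production schema, so the code under test is the code that will actually run.

```python
import sqlite3

class Subscription:
    def __init__(self, id, name):
        self.id = id
        self.name = name

class SqlSubscriptionRepository:
    """The real implementation: retrieval goes through actual SQL."""
    def __init__(self, connection):
        self._conn = connection

    def get_by_id(self, id):
        row = self._conn.execute(
            "SELECT id, name FROM subscriptions WHERE id = ?", (id,)
        ).fetchone()
        return Subscription(*row) if row else None

# Test setup: a throwaway database with the same schema as production.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subscriptions (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO subscriptions VALUES (1, 'basic'), (2, 'premium')")

repo = SqlSubscriptionRepository(conn)
assert repo.get_by_id(2).name == "premium"
assert repo.get_by_id(99) is None
```

Here the test verifies the real query logic, including edge cases like a missing row, rather than a reimplementation of lookup against a list.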

At an application boundary it's often easier just to use the real thing rather than trying to write a mock with identical behaviour. Would a mock of some kind of file-writing class duplicate the exact behaviour of the filesystem stat cache? Probably not. Even if it did you're still going to get caught out if the authors of the external resource change the behaviour. I wouldn't say never mock an external resource but usually my first choice would be to test with the real thing.
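A small sketch of "use the real thing" at such a boundary, in Python with hypothetical names: instead of mocking the filesystem, the test writes to a real temporary directory, so behaviour like overwriting and flushing is the genuine article.

```python
import os
import tempfile

class ReportWriter:
    """Hypothetical file-writing class sitting at an application boundary."""
    def write(self, path, text):
        with open(path, "w") as f:
            f.write(text)

# Exercise the real filesystem in a throwaway directory rather than
# trying to duplicate its behaviour in a mock.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "report.txt")
    ReportWriter().write(path, "42 subscriptions")
    with open(path) as f:
        assert f.read() == "42 subscriptions"
```

The temporary directory is cleaned up automatically, so the test stays isolated without a mock having to imitate the filesystem.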

On the other hand, you do have complete control over your own code. If we mock classes inside the app we can later write the implementations to do anything we want.

Test-first development with mocks is a great way to "discover" what shape these classes might have. The beauty of it is that you're always tightly focussed on one small part of the problem, and solving one small part leads you in to the next. It's easier to eat a horse in lots of little bites than in one big gulp.