
About "database testing":
"This I think is a management task and nothing to do with actual development process, from what I can gather of your posting? Maybe you could explain some more..."

No, it's not. What I meant was testing whether the code/class/whatever that handles database-related activities is functioning properly, i.e., "bug-free". I'm not talking about full test-driven development here (I'm still at a very early learning stage), just using unit testing as a testing tool.

For example, say I've developed a class for user management. How do we test it? Well, open the sign-up page, enter some data (invalid first, to check the validation code, then valid data). Then we check our local mail server, open the activation e-mail, and click the link to see if the activation script works. If all goes well, good; then we poke around with user preferences or user features, whatever. Finally we delete the user, hopefully with nothing going wrong in any of those phases.

See, what I described above is a very real-world, very typical, practical example. Not just a class that needs only one test or a set of independent tests. The tests are very related, i.e. you can't test the "delete user" functionality if the user hasn't been created, and you can't activate a user before it's registered, etc.

The other problem is consistency. Running this test suite should not leave the database in a dangling state, i.e. leaving the "test user" behind when a test fails. Running the test suite twice should produce the same result; with an inconsistent state that's unachievable, because on the second run the "test user" from the first run was never deleted, so the suite cannot create it again.

I hope to get some ideas... really practical, typical, real-world examples and solutions. Getting used to the idea of unit testing may not be that difficult... but designing the tests themselves, I guess, is an art of its own. None of the freely available, downloadable tutorials on unit testing (including lastcraft's, which is wonderful but for introduction purposes only) mentions this.

I think it's a pain in the (?) because I have to decide on one, since I'll be committing to it, even if just for learning. I guess they all work basically the same, but code is code. I just don't want to regret it in the long run because I picked the "wrong" library the first time.

It's like I've been using MDB2... and when Creole/Propel came out I thought "wow, why didn't I use that?" (well, Creole has only been around recently). Actually I can't switch, since Creole doesn't have all the abilities of MDB2 that I need (especially the DDL (Data Definition Language) abstraction).

So would anyone (lastcraft?) give me a decent review or comparison of these, even if it's just a list? At least I want to know what lastcraft didn't like about PHPUnit[2]... and if possible, what PHPUnit[2] can do that SimpleTest [currently] doesn't (and why? when will it be implemented? etc.)

Hmm... It sure is not a "which one is better" question when comparing PHPUnit[2] with SimpleTest. After looking at some docs I think SimpleTest is more suited to me (or I'm more suited to SimpleTest) than PHPUnit[2].

Somehow I had hoped that SimpleTest would require PHP 5... It may be bad news to some people but for me it's very good news. Hmm... doesn't matter; as long as it works in PHP 5, code that isn't PHP5-only is more than alright.

With regards to testing your database and making sure it is in a consistent state, one option is to drop and re-create (and possibly repopulate with known data) the table in your setUp() method. This method is called before each test method, so you can be assured each test is not interfering with the others.
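A minimal sketch of that setUp() approach, using an in-memory SQLite database via PDO purely for illustration (the table, class, and method names here are invented; in a real SimpleTest case the runner calls setUp() for you before each test):

```php
<?php
// Each test starts from a known state: setUp() drops and re-creates
// the table and loads fixture data, so tests can't interfere.
class UserTableTest
{
    private $db;

    // Called before each test method, like SimpleTest's setUp().
    function setUp()
    {
        $this->db = new PDO('sqlite::memory:');
        $this->db->exec('DROP TABLE IF EXISTS users');
        $this->db->exec('CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)');
        // Repopulate with known fixture data.
        $this->db->exec("INSERT INTO users (email) VALUES ('known@example.com')");
    }

    function testUserCanBeDeleted()
    {
        $this->setUp();  // the framework would normally do this
        $this->db->exec("DELETE FROM users WHERE email = 'known@example.com'");
        $count = $this->db->query('SELECT COUNT(*) FROM users')->fetchColumn();
        return $count == 0;
    }
}

$test = new UserTableTest();
echo $test->testUserCanBeDeleted() ? "user deleted\n" : "delete failed\n";
```

Because the table is rebuilt every time, a failing run can't leave a dangling "test user" behind for the next run to trip over.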

There are two techniques called Mock Objects and Server Stubs which can be used excellently for database simulation. For example, you can test whether a user management class is sending the right SQL queries to your database server by replacing the actual connection with a Mock object. So:
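A hand-rolled sketch of that idea in plain PHP (this is not SimpleTest's actual Mock::generate() API; MockConnection and UserManager are invented for illustration). The mock records every query instead of hitting a real database, so the test can assert on the SQL:

```php
<?php
// The mock connection records queries instead of executing them.
class MockConnection
{
    public $queries = array();

    function query($sql)
    {
        $this->queries[] = $sql;  // record instead of execute
        return true;
    }
}

// The class under test only knows it has "a connection".
class UserManager
{
    private $connection;

    function __construct($connection) { $this->connection = $connection; }

    function deleteUser($id)
    {
        $this->connection->query('DELETE FROM users WHERE id = ' . (int)$id);
    }
}

$mock = new MockConnection();
$manager = new UserManager($mock);
$manager->deleteUser(7);

// The "expectation": exactly one query, with the SQL we intended.
assert(count($mock->queries) == 1);
assert($mock->queries[0] == 'DELETE FROM users WHERE id = 7');
echo "queries verified\n";
```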

The tests are very related, i.e. you can't test the "delete user" functionality if the user hasn't been created.

You can test whether it sends out the correct queries to your database.

Also, when you find that you can not test your classes without the use of a browser, it is very likely the sign that one class is doing too much by itself (i.e. a class that does both controller-related, presentation and database-related tasks) or that your classes are too tightly coupled: signs of bad design.

"Also, when you find that you can not test your classes without the use of a browser, it is very likely the sign that one class is doing too much by itself (i.e. a class that does both controller-related, presentation and database-related tasks) or that your classes are too tightly coupled: signs of bad design."

It seems that we're usually testing classes or units. But how about testing functionality? To the user, "sign up" is just one piece of functionality. But to us (the developers) it consists of several things: the sign-up script (which may span several scripts), the sign-up template(s), the user management class, and some lower-level functionality (like database abstraction). It's clear that if a lower-level unit doesn't pass a test, then we can safely assume most of the higher-level units won't pass some of their tests either.

So how do we test functionalities as a whole, i.e. ones that can span several scripts/units/files? Just testing individual units (even if all of them pass) doesn't mean the functionality itself is bug-free.
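One way to picture a whole-functionality test is a single scenario test that walks the story in order, here against a toy in-memory user store (all names are invented; a real version would drive the actual scripts and database):

```php
<?php
// A toy stand-in for the real persistence layer.
class UserStore
{
    private $users = array();

    function register($email) { $this->users[$email] = 'pending'; }
    function activate($email) { $this->users[$email] = 'active'; }
    function delete($email)   { unset($this->users[$email]); }
    function statusOf($email)
    {
        return isset($this->users[$email]) ? $this->users[$email] : null;
    }
}

$store = new UserStore();

// The whole sign-up story, in the only order it can happen.
$store->register('new@example.com');
assert($store->statusOf('new@example.com') == 'pending');

$store->activate('new@example.com');
assert($store->statusOf('new@example.com') == 'active');

$store->delete('new@example.com');
assert($store->statusOf('new@example.com') === null);
echo "scenario passed\n";
```

The ordering problem from earlier in the thread dissolves here: because it's one test telling one story, "delete" naturally comes after "register" and "activate".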

Ok, blow by blow here is how we do things at Wordtracker with the knotty problem you described. Yes it is real world and untangling stuff is not so easy and you will have to make some minor code changes. These will be for the better.

When testing individual classes, we use mocks. I've gone off mocks for testing SQL unless it's very simple; such tests are too constraining. I prefer an integrated test for anything complicated. There is a sample test suite for a persistence layer here...

The problem with acceptance/functional testing is in configuration. If things like the database name and mail port come from a configuration file, then you can switch that file during tests. This means that you can safely use a test database. I'll usually have something like this in the tests...
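A minimal sketch of that configuration switch (the file layout, keys, and values are invented; the point is only that the test suite loads a different set of settings than the live site):

```php
<?php
// The application always asks for its settings through one function,
// so tests can select a throwaway database and a fake mail port.
function loadConfig($environment)
{
    $configs = array(
        'live' => array('database' => 'app',      'mail_port' => 25),
        'test' => array('database' => 'app_test', 'mail_port' => 10025),
    );
    return $configs[$environment];
}

// At the top of the test suite, before anything else runs...
$config = loadConfig('test');

// Now it's safe to drop and re-create tables in $config['database'].
assert($config['database'] == 'app_test');
echo $config['database'] . "\n";
```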

If this is not feasible then you have to dance the merry dance and be very careful inserting and removing data. You are also limited in what you can test, because real data will be mixed in.

Regarding e-mail there are three approaches. In unit tests you want to mock the mailer, that's a no brainer. In functional tests you have two choices again.

If you have control of your configuration, you could change the port used to connect to your MTA from port 25 to something else. You cannot do this switch with the PHP mail() function, but you can with the PHPMailer library. You then need a fake MTA; we have just open-sourced a prototype (look for fakemail on SourceForge at http://sourceforge.net/projects/fakemail/). Your test case now has these parts...
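A rough sketch of that test shape in plain PHP. This stand-in "fake MTA" just captures messages in memory rather than speaking SMTP like the real fakemail tool, and every name here is invented purely to show the flow:

```php
<?php
// Stand-in for a fake MTA: it accepts mail and keeps it for inspection.
class FakeMta
{
    public $messages = array();

    function accept($to, $subject, $body)
    {
        $this->messages[] = array('to' => $to, 'subject' => $subject, 'body' => $body);
    }
}

// Pretend application code: sign-up sends an activation mail.
function signUpUser($email, $mta)
{
    // ...the real code would also insert the user into the test database...
    $mta->accept($email, 'Activate your account',
                 'Click: http://example.com/activate?code=abc123');
}

$mta = new FakeMta();
signUpUser('new@example.com', $mta);

// The test reads the captured mail and checks the activation link is there.
assert(count($mta->messages) == 1);
assert(strpos($mta->messages[0]['body'], 'activate?code=') !== false);
echo "activation mail captured\n";
```

With the real fakemail you would start the fake server on the test port, run the sign-up, then read the captured message from wherever fakemail stores it.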

Regarding the comparison, PHPUnit follows JUnit and has fairly flexible internals. If you are developing a testing tool or are used to JUnit, then that is a possibility. It also has test coverage, but I've never found any real use for that.

SimpleTest has mock objects and the web tester. As a 3.5 year PHP XPer it's based on my experience of the sort of issues faced day to day. Of course my issues may not be your issues...

I've read the July 2002 draft of TDD By Example by Kent... Nice tutorial... but *sigh* the examples still look too simple to me, even the xUnit ones. I guess I need more concrete examples... real-world ones, where you have classes communicating with (a dozen) other classes (duh! how coupled!) and also persistence... I guess not a tutorial, but rather a "Test Patterns" by example (as opposed to "Design Patterns"). Not "unit" patterns like Fake It, Obvious Implementation, etc., but rather "if you're faced with a typical case like this, one object doing this, another doing that, and now you want to write a test for designing a functionality (or unit of functionality)... this is what you should start with" and so on, maybe like that.

Anyway, I can foresee that one class would have a test class at least 2-3 times the size of the class itself, with many more functions too. Is this really what's expected of TDD? I mean, you could easily have 6-10 or even more tests for just one (simple) method, just to be on the safe side... Just imagining the sheer size of the tests made me shudder (and yes, I haven't written any tests yet!)

Some insights please...

I don't know if there are any "expectations", but here is an example of a lines of code count from the latest project I am working on at work:

I mean, by using mock objects you completely ignore the actual object altogether... even though the actual object may be very robust, how can you guarantee that the mock and the real object behave the same way? You could be testing a database abstraction layer by using mock objects as the abstracted database engine, but then... it would only be a "fake database abstraction layer" since it's never tested against the actual backend...??

Dang... still confused. I guess TDD is more complicated than I first imagined... even the simple teeny-weeny steps Kent explained in his book aren't enough to make it simple. :-( Or maybe I'm just plain idiotic.

"I mean, by using mock objects you completely ignore the actual object altogether... even though the actual object may be very robust, how can you guarantee that the mock and the real object behave the same way? You could be testing a database abstraction layer by using mock objects as the abstracted database engine, but then... it would only be a 'fake database abstraction layer' since it's never tested against the actual backend...??"

I have mocked the objects I am not testing. I want to test my code in isolation, so I mock the objects it collaborates with; that way I am assured of a known set of inputs and responses during the test. I then have complete control over the environment in which I am testing my code.

This allows me to simulate, in the test environment, conditions that would be hard to replicate in the real world but which you do want your code to be able to handle. How do you create a database failure? Unplug the network cable? Stop the service? Anybody else using it at the same time is not going to be very happy with your testing.

OTOH, with mock objects I can simulate the error which would occur under that condition and verify my application code responds appropriately.
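A minimal sketch of that failure simulation (class names invented): the mock connection pretends the server has gone away, and the test checks the application degrades gracefully instead of dying:

```php
<?php
// A mock connection that simulates a dead database server.
class FailingConnection
{
    function query($sql)
    {
        return false;  // simulate "server has gone away"
    }
}

// Application code that should handle the failure gracefully.
class ReportPage
{
    function render($connection)
    {
        $result = $connection->query('SELECT * FROM reports');
        if ($result === false) {
            return 'Sorry, reports are temporarily unavailable.';
        }
        return 'report data...';
    }
}

$page = new ReportPage();
$output = $page->render(new FailingConnection());

// No cables were unplugged in the making of this test.
assert($output == 'Sorry, reports are temporarily unavailable.');
echo $output . "\n";
```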

ceefour, there really isn't any alternative, you just have to try it to see what all the fuss is about.

It took me about a weekend (around 20 hours, if I remember correctly): I downloaded SimpleTest, went through the excellent online tutorial and then started writing tests for my own classes. I assure you, once you start seeing the green bar pop up, you won't want to go back.

As for the "I don't have time to write tests" excuse, well, don't even bother; I have found that my productivity has gone up (after initially going down for a short while) since I started unit testing.

Consider both the unit test and the unit being tested as one whole component: writing the tests is about 50% of the job, and writing the unit itself takes the other 50%. When you look at it that way, it just isn't extra time, is it? You'll have to test at one point or another, so why not do it while you're developing instead of trying to cram it into the last week of a project's schedule (if you're lucky)?

1) Dependent methods, like TestNG: if methodB depends on methodA and methodA fails, methodB is skipped.

I am going to add an abandon() method which bails out of the entire test case. Would this be sufficient?

Originally Posted by Brenden Vickery

2) Groups, like TestNG: individual methods can be a part of one or more groups.

Hmm...tricky. You can combine test cases into different group tests and I have an idea or two for making things easier on that score. I will delay this until better grouping is implemented because I don't think you'll need it then.

Originally Posted by Brenden Vickery

3) setUp* methods: instead of only having a setUp() method run before each test, run every method in the class that has a setUp prefix.

There is a problem in ordering the setUp()s, and actually it doesn't add much. After all, you can explicitly call methods that don't start with "test" from your test methods anyway...
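A quick sketch of that point (names invented): the test method just calls ordinary helper methods, which the runner ignores because they don't start with "test", in whatever order it needs:

```php
<?php
// Helpers are plain methods; a test-runner scanning for "test" methods
// never calls them directly, so each test controls its own setup order.
class TestOfOrders
{
    private $fixtures = array();

    function createCustomer() { $this->fixtures[] = 'customer'; }
    function createProduct()  { $this->fixtures[] = 'product'; }

    function testOrderNeedsBoth()
    {
        $this->createCustomer();  // explicit, ordered setup
        $this->createProduct();
        return count($this->fixtures) == 2;
    }
}

$test = new TestOfOrders();
assert($test->testOrderNeedsBoth());
echo "ok\n";
```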

I am going to add an abandon() method which bails out of the entire test case. Would this be sufficient?

Indeed

From the times I've used unit testing, I'd say it has increased the number of lines of script I developed during that period, compared with the times I don't use unit testing.

Why I don't use it day after day, I don't know. I think I've yet to acclimatise to the idea of developing with the strict discipline of unit testing. I like a little freedom just to fart about, I suppose.