Discussions

After last month's JUnit cleanup, here are my two cents.
A JUnit test case can contain multiple tests, each implemented by a method. The declaration of a test method must comply with a set of conventions that help JUnit and associated tools automate the discovery and execution of tests. These conventions are:
1. The name of the method must begin with “test”, like in “testValidateOperator”,
2. The return type of a test method must be null,
3. A test method must not throw any exception,
4. A test method must not have any parameter.
There are a few details you should be aware of when writing JUnit tests. First of all, when executing a test case, JUnit will instantiate the test case class as many times as there are test methods. That is, each test method will be executed on a different instance of the test case class. Before calling the test method, JUnit will call the setUp() method. The tearDown() method will be called after the test method completes, whether it succeeds or fails. The setUp() and tearDown() methods should be used to create and then clean up the test environment.
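The lifecycle described above can be sketched without JUnit itself. This toy harness (all names hypothetical, for illustration only) mimics what a JUnit 3 runner effectively does: a fresh instance per test method, with setUp() before and tearDown() after, even when the test fails:

```java
import java.util.ArrayList;
import java.util.List;

// A toy re-enactment of the JUnit 3 lifecycle described above (not real JUnit
// code): each test method runs on a *fresh* instance, bracketed by
// setUp()/tearDown().
public class LifecycleSketch {
    static final List<String> log = new ArrayList<String>();

    String fixture; // per-instance state, recreated for every test

    void setUp()    { fixture = "ready"; log.add("setUp"); }
    void tearDown() { log.add("tearDown"); }

    void testOne()  { log.add("testOne on " + fixture); }
    void testTwo()  { log.add("testTwo on " + fixture); }

    void run(String name) {
        setUp();
        try {
            if (name.equals("testOne")) { testOne(); } else { testTwo(); }
        } finally {
            tearDown(); // runs whether the test passed or failed
        }
    }

    public static void main(String[] args) {
        // What a JUnit 3 runner effectively does: one new instance per test.
        new LifecycleSketch().run("testOne");
        new LifecycleSketch().run("testTwo");
        System.out.println(log);
    }
}
```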
Good practices to follow while writing JUnit tests:
• Write tests for methods that have the fewest dependencies first. If you start by testing a high-level method, your test may fail because a subordinate method may return an incorrect value to the method under test. This increases the time spent finding the source of the problem, and you will still have to test the subordinate method anyway to be sure it does not contain other bugs.
• Tests should be as logically simple as possible, preferably with no decisions at all. Every decision added to a test method increases the number of possible ways that the test method can be executed. Testing is meant to be a controlled environment; the only thing you want to change is the input to the method under test, not the test method itself.
• Wherever possible, use constants as expected values in your assertions instead of computed values. Consider these two assertions:
returnVal = getDiscountCode(input);
assertEquals(returnVal, computeDiscountCode(input));
assertEquals(returnVal, "023");
In order for computeDiscountCode() to return the proper result, it probably has to implement the same logic as getDiscountCode(), which is what you are trying to test in the first place. Further, suppose you fixed a defect in getDiscountCode(); now you have to change computeDiscountCode() in the same way. The second assertion is easier to understand and maintain.
• Each unit test should be independent of all other tests.
A unit test should execute one specific behavior for a single method. Validating behavior of multiple methods is problematic as such coupling can increase refactoring time and effort. Consider the following example:
void testAdd()
{
    int return1 = myClass.add(1, 2);
    int return2 = myClass.add(-1, -2);
    assertTrue(return1 - return2 == 0);
}
If the assertion fails in this case, it will be difficult to determine which invocation of add() caused the problem.
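One way to avoid the ambiguity, sketched with a stand-in add() method (hypothetical, not the article's actual class), is to give each invocation its own test:

```java
// Sketch of splitting the coupled test above into two independent tests,
// one behavior per test. The static add() here is a stand-in for myClass.add().
public class AddTestSketch {
    static int add(int a, int b) { return a + b; }

    static void testAddPositives() {
        // Tests exactly one invocation; a failure points straight here.
        if (add(1, 2) != 3) throw new AssertionError("add(1, 2) should be 3");
    }

    static void testAddNegatives() {
        // The negative-operand case gets its own test, so the failing
        // invocation is never ambiguous.
        if (add(-1, -2) != -3) throw new AssertionError("add(-1, -2) should be -3");
    }

    public static void main(String[] args) {
        testAddPositives();
        testAddNegatives();
        System.out.println("ok");
    }
}
```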
• Each unit test should be clearly named and documented.
The name of the test method should clearly indicate which method is being tested because improperly named tests increase maintenance and refactoring efforts. Comments should also be used to describe the test or any special conditions.
• All methods, regardless of visibility, should have appropriate unit tests.
Public, private and protected methods that contain data and decisions should be unit tested in isolation.
• One assertion per test case.
Avoid using multiple assertions in one test case. For example, if a test case with five assertions fails on the first assertion, then the remaining four are not executed. Only when the first assertion is corrected will the other assertions execute. If assertion two fails, then that also must be corrected before the remaining three assertions are verified. Although the unit test file may be large, it will be easier to find and fix defects.
• Create unit tests that target exceptions.
Exceptions are thrown when a method finds itself in a state it cannot handle. These cases should be tested just like a method's normal operation. If a method is declared to throw one or more exceptions, then unit tests must be created that simulate the circumstances under which those exceptions could be thrown. The unit tests should assert that the exceptions are thrown as expected.
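In JUnit 3 this is usually done with the try/fail/catch idiom. A self-contained sketch, with a made-up checkedSqrt() method standing in for the code under test (the fail is hand-rolled here so the example runs without JUnit):

```java
// Sketch of the classic try/fail/catch idiom for exception-targeted tests.
public class ExceptionTestSketch {
    // Hypothetical method under test: rejects negative input.
    static int checkedSqrt(int n) {
        if (n < 0) throw new IllegalArgumentException("negative: " + n);
        return (int) Math.sqrt(n);
    }

    // Fail if no exception is thrown; catch exactly the expected type so
    // any *unexpected* exception still surfaces and fails the test.
    static void testCheckedSqrtRejectsNegatives() {
        try {
            checkedSqrt(-1);
            throw new AssertionError("expected IllegalArgumentException");
        } catch (IllegalArgumentException expected) {
            // expected: the test passes
        }
    }

    public static void main(String[] args) {
        testCheckedSqrtRejectsNegatives();
        System.out.println("exception test passed");
    }
}
```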
Please let me know if this helps. Happy coding Junits…!!

Nice points on JUnit, I appreciate it. However, the title is a little misleading: it is "Junit-Do's and Don'ts", but reading the content I didn't see any "don't" cases. I would name it something like "Junit-Best practices".

I disagree with the point that a unit test method should not throw exceptions. When a test throws an exception, it is reported as an ERROR. What do you do instead? Catch and fail()? Then how do I see the stack trace? Add a print statement? Sounds like an anti-pattern.
Cheers,
Ross

Not to negate Mahesh's great post, but why bother with JUnit when TestNG has so many better features?

In all seriousness, if you were serious about Unit testing why would you still be using JUnit?

.NET might have a lot of better features than Java - but that is no reason to switch. It all depends on which platform you are comfortable with.
TestNG tries to do too much; JUnit does just enough. Google "junit extension" if you need something more.


.NET has a big blemish against it: it can only run on one platform, and the worst server platform at that. Yes, there are the Monos of the world, but you get the point; otherwise people would switch if .NET were better feature-wise.
As far as TestNG, it doesn't do too much. When you say too much, what do you mean? You can use as little as you like or as much as you'd like, but you're sure to not run into obstacles as you might with JUnit, specifically when dealing with multithreaded tests which TestNG has pretty decent support for. To create a test in TestNG you just create a class and annotate it with @Test, not sure what's so "too much" about that.

Mahesh,
I had the same experience about 4 years ago, where I was reviewing a lot of test code written in JUnit. It prompted me to write an article on JUnit Antipatterns. It's nice to see that a number of the same points are raised.
A few other comments:
* The return type of a test should not be null: it must be void.
* I would argue that a test method should throw an exception if one occurs, even when you're not testing for exceptions. I actually classed this as an antipattern: catching unexpected exceptions
Anyway, thanks for the write up.
Joe


Good article, and +1 on both of Joe's points.
You especially need to declare that a test throws an exception when it is an unexpected checked exception. IOException is a good example. IOException is usually not relevant to your test, but you don't want your test to pass if you happen to get one.
Stan Silvert
http://www.jsfunit.org
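A sketch of that advice, with an invented readGreeting() method standing in for real I/O code: the test simply declares throws IOException, so an unexpected one propagates and fails the test with a full stack trace, instead of being caught and hidden:

```java
import java.io.IOException;

// Sketch of letting unexpected checked exceptions propagate: the test method
// declares "throws IOException" rather than catching and swallowing it.
// readGreeting() is a hypothetical stand-in for real I/O code.
public class ThrowsDeclarationSketch {
    static String readGreeting(boolean failIo) throws IOException {
        if (failIo) throw new IOException("disk unavailable");
        return "hello";
    }

    // If an IOException occurs here, the test runner reports a failure with
    // the full stack trace; no catch-and-print anti-pattern needed.
    static void testReadGreeting() throws IOException {
        String s = readGreeting(false);
        if (!"hello".equals(s)) throw new AssertionError("got " + s);
    }

    public static void main(String[] args) throws IOException {
        testReadGreeting();
        System.out.println("ok");
    }
}
```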

I agree with many of your hints, but some seem exaggerated. For example, the one about one assertion per test case.
User user = dao.getUserById(1);
assertEquals("Alberto", user.getFirstName());
assertEquals("Gori", user.getLastName());
What is wrong here? Nothing!
Two test cases, one for the first name and one for the last name, would be ridiculous and a waste of time (and of coding). If you have tons of tests like that, you risk increasing the test run time from 10 seconds to 10 minutes.
If the first-name assertion fails, fix the problem and then you'll also get the last-name check.
This is reasonable for me.

Well, people shouldn't take "best practices" of someone as the 11th commandment. Anything in the software world, unless you're a corporate developer, is up for debate. From patterns, to OO programming, to Unit Testing. I truly hate when someone hears a differing opinion that might not be popular and thinks that this person is less knowledgeable, etc...
Basically look, you can be a great developer whether you unit test one way, or another way, or whether you unit test at all. There were hundreds of awesome engineers and software projects before the whole unit testing stuff began and there will be after a new paradigm of software engineering becomes popular.
So for your comment, yes, I agree that it's ridiculous to create these artificial constraints for unit tests. "Unit" is whatever a unit is to you. My unit is a particular unit of functionality, whatever that might be. It might be one, or 500 assertions. If I'm testing an object's state after an operation that has side effects, depending on the complexity of that object's state, it might require numerous assertions. And no, I'm not going to rerun the "unit" test for every assertion.
JUnit 3 has some major limitations, again because the creators came up with some bullshit artificial constraints that a unit test method has to be independent. Independent of what? What if I want to structure my unit tests in a way such as my unit test class tests for a class of behaviors within a class or package, etc... I then need to instantiate some code before I can run these 20 unit tests. Why wouldn't the authors think of something as simple as instantiating behavior instead of instantiating the unit test class each and every time you have to run a method? What's wrong with unit test side effects? Not everything can be tested in isolation. I can't wait for the usual responses of folks saying that if you can't test each and every freaking thing in isolation, your design is flawed. I beg to differ.
Ilya

@Ilya
Your argumentation sounds like my daughter when she invents a word for something she doesn't know the proper name of: "But I call it like this". Then I as a parent reply, "But if you want everybody else to understand, you might call it by its proper name".
Therefore, when it comes to testing, I try to avoid using the word "unit". Method-level tests, module tests, integration tests, customer tests: these are better choices of words, where you actually say what you are testing rather than which tool you are using to drive the test. With JUnit you can do all sorts of tests, not just unit tests.


@Roland
I don't understand your logic at all. Why do you think TestNG is like a newly invented baby word? Simply because it's a word you are not familiar with?
TestNG has been around for several years, is well adopted, and has a large following. It is well supported by all major IDEs. JUnit was great when it first came out, and it did have some shortcomings, as mentioned in the article.
We were using JUnit before, but we are now using TestNG. TestNG allowed us to convert the test cases of a legacy home-made test framework pretty easily. It allows us to test different sets of methods via grouping; it allows multi-threaded testing by simply adding an attribute to the annotation. I can selectively test a method, a class, or a package, etc.
If someone considers these types of features too much, then you don't need to use them. One @Test annotation is enough.
I haven't gone back to look at the new JUnit 4; I am sure it is probably equally impressive. Use what you are comfortable with, but there is no need to trash TestNG.
Chester

Ok, let's resort to comparing folks to your little daughter. How mature.
You can method-test all you want, and I do too, when my design allows for it. It's really narrow-minded to say that every class's methods can be tested in isolation, and to belittle someone's design as "flawed" when they can't. There are numerous cases in OO design and development when an object's state cannot be tested in isolation; instead you must test the interaction with the object as a whole and assert it after a particular set of interactions.
Take a builder of something, for example: it makes no sense to always test every single method in isolation, especially if invariants are enforced. You might test each builder method in isolation as an assurance of invariant enforcement, but eventually you'll need to build an object in different ways and assert the outcome, so the rule of thumb of testing each method in isolation is just bullshit.

It adds huge runtime overhead to test suites, when compared to the overhead added by JUnit. I recently converted one of my JUnit 4 test classes to TestNG, and was surprised to see that those same tests, running in the same environment and from the same Java IDE (IDEA), take about four times as long to run as they did before (having only changed from JUnit 4 to TestNG 5.9, nothing more).
This should be no surprise, actually, if you examine the internal implementation of TestNG (as I had to do in order to integrate it with my mocking tool); it's a huge, inelegant, badly designed mess (for example, consider the "XmlSuite" class: why the coupling to XML?).

Confusing annotations: can anyone explain the difference between @BeforeTest and @BeforeMethod? The official documentation doesn't.

Why all this focus on XML? With JUnit, I only see XML when running from Ant. Otherwise I use my Java IDE, only rarely needing to create a test suite class (annotated with @SuiteClasses, which BTW is much easier than coding XML).

Making test methods depend on each other is a very bad idea, and completely unnecessary. Yet TestNG provides incentive for doing it.

Finally, TestNG has a lot of extra features which somehow I never missed in JUnit 4. They may be very useful to others, but for me they only add noise.

I don't agree that methods that depend on other methods is a bad thing. Again, you're just repeating the dos and don'ts from the "Testing Gods". That is nonsense. Object's state can be complex, requiring state changes and the expected outcomes to be dependent on others. It's not only object state, it's any state.
Let's see, if I have a database and I'm testing my persistence layer. Some methods might verify data that was persisted in other methods. Do I have to clear my database state each time I run a method? Again, the problem comes from folks thinking that one method must be run in isolation, that's JUnit 3 mentality and it's very limiting and wrong. My unit of testing might be a class, which tests interactions with a particular object, so I instantiate the object, then the methods run in a particular dependency order, but at the end of the actual "test", which in some cases might be when all methods in a class are finished running, I verify the state, etc... Yes, it's complex, but software is complex and you can't impose the completely stateless methods on everyone's testing methodology, at least not in the imperative programming world, where state changes occur and reinitializing 5000 objects to their initial state and rerunning the same routines can be very expensive.

I don't agree that methods that depend on other methods is a bad thing. Again, you're just repeating the dos and don'ts from the "Testing Gods". That is nonsense.

No, my opinion comes from years (and hundreds of tests) of experience. In my own open source project (JMockit), I have currently more than 800 JUnit tests.

Object's state can be complex, requiring state changes and the expected outcomes to be dependent on others. It's not only object state, it's any state.

Sure. But that has nothing to do with making tests themselves inter-dependent.

Let's see, if I have a database and I'm testing my persistence layer. Some methods might verify data that was persisted in other methods. Do I have to clear my database state each time I run a method?

The way I do it is to create reusable "assertXyz" methods that I then call from several independent tests. And I have done this a lot, in integration tests which persisted entities in a live database through Hibernate. I never needed to clear database state, since each test ran inside a dedicated transaction.
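A minimal sketch of the reusable-helper idea (the User class and all names are invented stand-ins for persisted entities): the shared verification logic lives in one "assertXyz" method that several independent tests call.

```java
// Sketch of reusable "assertXyz" helpers: shared assertion logic lives in one
// place, and each independent test funnels through it. Everything here is a
// hypothetical stand-in for persisted business entities.
public class AssertHelperSketch {
    static class User {
        final String first, last;
        User(String first, String last) { this.first = first; this.last = last; }
    }

    // The reusable helper: every test that checks a user calls this.
    static void assertUser(User u, String first, String last) {
        if (!u.first.equals(first)) throw new AssertionError("first name: " + u.first);
        if (!u.last.equals(last))   throw new AssertionError("last name: " + u.last);
    }

    // Two independent tests reusing the same helper.
    static void testCreateUser() { assertUser(new User("Alberto", "Gori"), "Alberto", "Gori"); }
    static void testRenameUser() {
        User renamed = new User("Al", "Gori"); // pretend a rename operation ran
        assertUser(renamed, "Al", "Gori");
    }

    public static void main(String[] args) {
        testCreateUser();
        testRenameUser();
        System.out.println("ok");
    }
}
```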

Again, the problem comes from folks thinking that one method must be run in isolation, that's JUnit 3 mentality and it's very limiting and wrong. My unit of testing might be a class, which tests interactions with a particular object, so I instantiate the object, then the methods run in a particular dependency order, but at the end of the actual "test", which in some cases might be when all methods in a class are finished running, I verify the state, etc... Yes, it's complex, but software is complex and you can't impose the completely stateless methods on everyone's testing methodology, at least not in the imperative programming world, where state changes occur and reinitializing 5000 objects to their initial state and rerunning the same routines can be very expensive.

I have no doubt this approach works, but it seems more complicated than necessary. In a previous business application project, we had about 500 entities, many quite complex. We got to the point of having nearly 3000 integration tests, all independent. Test data (persisted business entities) was created through helper methods, collected in separate classes, one per functional area. Each JUnit test would then simply call the entity creation methods it needed. Tests were reasonably easy to write, and ran surprisingly fast, given the circumstances.

Also, I find the "justification" for dependent tests given in the official documentation to be quite lame.
They say this:

Sometimes, you need your test methods to be invoked in a certain order. This is useful for example
To make sure a certain number of test methods have completed and succeeded before running more test methods.
To initialize your tests while wanting this initialization methods to be test methods as well (methods tagged with @Before/After will not be part of the final report).

Well, in the first case I prefer to simply call those methods directly in my tests, instead of annotating the actual tests so that TestNG will call them for me. If one of the methods fails (throws an error or exception), the test will also fail.
In the second case, I really don't see why I would want to have initialization methods be tests. Obviously, they are not tests, so why would I treat them like that? Again, I would instead simply call them from the real test methods.

I recently converted one of my JUnit 4 test classes to TestNG, and was surprised to see that those same tests, running in the same environment and from the same Java IDE (IDEA), take about four times as long to run as they did before (having only changed from JUnit 4 to TestNG 5.9, nothing more)

I'm very skeptical that this came from TestNG's overhead, are you sure you really investigated this in depth?

Confusing annotations: can anyone explain the difference between @BeforeTest and @BeforeMethod?

The XML file defines different levels of granularity, each one containing the next:
[suite]
[test]
[class]
[method]
Each of these levels can be wrapped by their respective @Before and @After annotation:
@BeforeSuite
@BeforeTest
@BeforeClass
@BeforeMethod
@AfterMethod
@AfterClass
@AfterTest
@AfterSuite

Why all this focus on XML?

It's convenient in order to configure which tests to run without having to recompile any code. It's also useful to exchange test configurations between teams or developers, or just to define various test runs ("nightly", "smoke test", "front-end", etc...).

Making test methods depend on each other is a very bad idea, and completely unnecessary

For unit tests, I agree. For other tests that need expensive state in order to function, you really want to be able to reuse that state whenever you can, which is why TestNG tests usually go faster than JUnit ones in these scenarios.
The only way you can do this with JUnit is by introducing static variables, which introduces a lot of problems (concurrency, not multi-JVM friendly, clean up issues, etc...).
Another problem is that JUnit will often tell you "100 tests failed" while TestNG will report "1 test failed, 99 skipped", which is much more accurate.
--
Cedric

Hello Cedric,
Thanks for responding.
I ran my TestNG class (converted from JUnit 4.5) using the "TestNG-J" plugin bundled in IntelliJ IDEA 8.1. I did notice that the plugin creates a temporary XML file with the test suite definition, which TestNG then reads. Since there is no such file when using JUnit, that is obviously one of the causes for higher overhead. (I don't know if the plugin could have passed the suite definition to TestNG in a more efficient way, without using a file.)
Besides the extra XML file processing, by using the debugger it's easy to see that a lot more code inside TestNG gets executed when running tests, compared to running the same tests with JUnit 4. I had to spend quite some time doing this with JUnit 3.8, JUnit 4.4, JUnit 4.5+, and TestNG 5.8/5.9, so that I could figure out how to integrate each test runner with JMockit. From this experience, I believe all these test runners could be made to run faster, especially TestNG.
I think the documentation about the various test scopes could be improved. I guess the problem is that the word "test" has multiple meanings; I usually think of it as a single test method, which at runtime optionally gets its execution wrapped by other methods that run before/after. In JUnit 4, we have only the "suite", "test class", and "test method" scopes, and I am still not clear on why TestNG has a fourth scope.
In JUnit 4, it's easy and quick to change a suite class, annotated with @RunWith(Suite.class) and @SuiteClasses(A.class, B.class, ...). And of course, there is the standard Ant "junit" task. But I can see some would prefer to always define suites in XML.
As I described before, I have experience with large JUnit integration test suites, which hit a live database through Hibernate. In that experience, at least, I never ran into any concurrency or clean-up issues. So, it definitely can be done.

Personally, I use JUnit as the "first-line" testing framework just because all of the tools default to it and I prefer to keep deltas small if I can.
But what inevitably happens is that I find that I want to use something that TestNG has and JUnit doesn't; method timings, tests for expected exceptions, test ordering, and the like.
And then I convert to TestNG and I'm much happier; my tests don't run slower, they run more efficiently and test more of the things I want to test.
XML files... bah. I see the XML output from TestNG and happily ignore it. If the XML is getting in your way, you're doing it wrong.
I have yet to have any complaints about the results of moving to TestNG from JUnit.

OK, I decided to make a performance experiment that everybody will be able to reproduce.
One of the test classes in the JMockit test suite is mockit.DeencapsulationTest. It contains 33 unit tests, which are very simple and run very fast. The original test class uses JUnit 4.6.
This test class can be converted to TestNG 5.9 by simply changing the two following lines of code
import static org.junit.Assert.*;
import org.junit.*;
to
import static org.testng.Assert.*;
import org.testng.annotations.*;
I use IntelliJ IDEA 9M1 (EAP build 10558), with the latest TestNG-J plugin (version 1.1.1), on Windows XP and using Sun JDK 1.6.0_14.
With regular run configurations in IDEA (no extra JVM or command line parameters) for that specific test class, I get the following approximate timings (the exact execution time varies from execution to execution, since the tests are so fast):
JUnit 4.6 (33 tests): ~0.04 s
TestNG 5.9 (33 tests): ~0.39 s
So, in this environment the TestNG suite takes about 10 times as long as the JUnit suite. Maybe it is some issue with the TestNG-J plugin... If someone does reproduce this using Eclipse, I would be interested in the results.

OK, I decided to make a performance experiment that everybody will be able to reproduce.

Could you compare command line times, so that we can factor the IDE out of the equation?

No problem.
I ran from the command line (using org.junit.runner.JUnitCore and org.testng.TestNG, in each case), obtaining about the same execution times for both versions of the test class.
So, indeed, the big difference when running through IDEA/TestNG-J was in the overhead added by the plugin.
I did not see any difference in the execution time for JUnit, when switching from IDEA to the command line. Definitely, the TestNG-J plugin is at fault.
Running from the command line solves the performance issue, but then I lose the ability to run from IDEA, which is what I am used to doing with JUnit tests...
Maybe the TestNG-J author can be contacted? The plugin description only mentions TestNG 5.6, so it probably can be updated and improved.

'lo - Cedric pinged me about this thread and I'll take a look at the code as soon as I can. Out of curiosity, what type of test configuration setup were you using for these timings? Group, package, class?
Without taking an actual look so far, some areas I'm thinking could be at play here are: scanning for test classes in a group, looking up dependencies to include.
If you run the testng-j plugin setup with a preexisting suite.xml (rather than the generated one), do you see speed improvements matching the command line?


I used only the "Class" scope (simply by hitting "Ctrl+Shift+F10" in IDEA, with the desired test class selected).
I could not find any way in the IDEA UI to tell the TestNG-J plugin to use an existing testng.xml.
I believe if you run any test class in the "standard" IDEA way, you should get the same result. At least I don't see anything different from the way I always run JUnit tests.

I use IntelliJ IDEA 8.1 as well, but I never run from the IDE plug-in (some of my colleagues do).
I simply run from the Ant task. We also use it as a debugging tool, and we prefer to see the output on the command line rather than in XML when we debug. So I just wrote a simple ConsoleReporter to print the results to the console.

In JUnit 4, it's easy and quick to change a suite class, annotated with @RunWith(Suite.class) and @SuiteClasses(A.class, B.class, ...). And of course, there is the standard Ant "junit" task. But I can see some would prefer to always define suites in XML.

I don't think using the annotation in the test code is the best approach. In our case, we have a team of developers who, at times, run tests in the same class or even the same test methods. With TestNG, each developer has his/her own testng.xml file which indicates which suite/package/group/class/method to run, without interfering with the others.
Also, for the same suites of tests, we have another testng.xml for nightly tests. An individual developer is usually concerned mostly with the particular class he is developing, so his testng.xml will be modified to test only the related methods or classes, but the nightly run will test everything. In this case the XML/HTML output is more important than the console output, as it will be used for reporting purposes.

Just a hint: I thought the first argument of assertEquals(...) should be the expected value, shouldn't it?

Very good observation!
I think this is a common mistake, and it can make it harder to find the cause of a failing assert.
Personally, I use, and advise others to use, a proper message string for each assert, something like:
assertEquals("No users should be authenticated", 0, list.size());
Also, the message should relate to the business meaning of the assert or test method. This will bring more value to your unit tests, and they will be more understandable to others.
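The argument-order point can be sketched with a hand-rolled assertEquals (a stand-in for JUnit's, so the example is self-contained): if expected and actual are swapped, the failure message reads backwards and misleads whoever debugs it.

```java
// Sketch of the expected-first convention: assertEquals(message, expected, actual).
// This assertEquals is a minimal stand-in for JUnit's, for illustration only.
public class AssertOrderSketch {
    static void assertEquals(String message, Object expected, Object actual) {
        if (!expected.equals(actual)) {
            // With swapped arguments, this message would report the values
            // backwards ("expected <actual> but was <expected>").
            throw new AssertionError(message + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        int authenticatedUsers = 0; // pretend this came from the code under test
        assertEquals("No users should be authenticated", 0, authenticatedUsers);
        System.out.println("ok");
    }
}
```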
If you work in a multinational environment, you also might consider adding some unit testing guidelines for working with dates and localization.
Your tests are working fine on your machine but if your continuous integration system is somewhere else on a computer with a different time zone and/or locales, then you might have some unpleasant surprises.
One more thing: when writing unit tests, try to minimize as much as possible (maybe even to zero) the unit tests' dependency on external system resources (files, etc.). These could also cause unexpected results if you run the tests on different operating systems.

• Each unit test should be independent of all other tests.
A unit test should execute one specific behavior for a single method. Validating behavior of multiple methods is problematic as such coupling can increase refactoring time and effort. Consider the following example:
void testAdd()
{
    int return1 = myClass.add(1, 2);
    int return2 = myClass.add(-1, -2);
    assertTrue(return1 - return2 == 0);
}
If the assertion fails in this case, it will be difficult to determine which invocation of add() caused the problem.

I didn't quite understand this advice. If you are testing the method add(), what's the problem with testing several calls to it?
The bottom line: if the test fails, you don't know which call to add() caused the problem, but you KNOW you must look at the code of add() to solve it.
In addition, would avoiding this kind of test really yield any savings in refactoring time or effort? I don't think so.

Thanks for pointing this out. I agree that the quoted example doesn't exactly convey the point of tests being independent of each other.
Consider if two methods, add() and sub(), were being called and we asserted on (return1 - return2). That test case would not make any sense, as it would be difficult to determine which invocation actually caused the problem. But the quoted example is also correct in its own way: it will always be difficult to determine which invocation caused the problem, and in either case the next step will be to check the method's sanity.
Also, I always felt writing a JUnit test was extra effort. Instead, rigorous testing with a variety of inputs would be a faster approach. Not sure how valid that is.

+1, that's what I've tried to convey in my other replies. All these conventions are for "corporate programmers" who can't think for themselves and be creative. Well, the God of unit testing told us to structure our tests in this way and use these conventions, let's not think or take it subjectively, rather let's follow the convention. Look at what happens to all these large corporate development projects, they hire folks who learned the patterns book from cover to cover and now apply these patterns blindly to any problem. Same with unit testing. Yes, there are some practices that define nice patterns for resolving a particular set of testing problems, but please don't force people to absolutely test the way you want to test.

Ah, the old magic numbers debate. It's just not as simple as this: what about the poor s.o.b. who has to maintain this code and doesn't know how or why "23.39201" was calculated? Sure, self-fulfilling tests must be avoided, but unexplained constants can be just as bad.

• Each unit test should be independent of all other tests.

This is a worthy aim, but (as Cedric pointed out) there are scenarios when it's more important for the test to run quickly than for it to be independent. E.g. indexing a large quantity of data to test a search algorithm etc.

• Each unit test should be clearly named and documented.

Most of the tests should be clear enough not to need documentation.

• All methods, regardless of visibility, should have appropriate unit tests.

Even getters and setters? What about toString() and hashCode()? Really simple methods may not warrant a test.

• One assertion per test case.

If you follow this advice rigidly, you'll end up with a test that splits intent across multiple methods.
Re TestNG:
I really wanted to like it, I tried to like it. I bought the book, I started using it on two projects, but that damn XML file just gets in the way. Defining groups was a pain (is there really no default group?). I like the @Test and @BeforeX annotations, and the ability to explicitly expect exceptions, but other than that it was one big meh. Would love to be proved wrong.
Now JBehave - that's the NG.

One problem I face is maintaining the test cases.
We follow the "one test class for each class" approach, but sometimes the test class becomes too large and unmanageable. This happens when the class has quite a few methods or a large number of test cases. Currently I split the test class either logically or into "one test class for each method", whichever suits the situation.
Does anyone know a better solution for this issue?

I never considered one test class per method to be the right approach. It not only leads to a large number of classes, it also leads to a lot of setup/teardown code being repeated that could otherwise be shared among methods.
This is particularly true when you have a set of database records that needs to be set up before the tests and cleaned up afterwards. In many situations like this, you don't want to set up and clean up around each method, but rather around each class or suite, with all test methods sharing the common resources.
I usually set up classes in logical groups based on functions/features. Each class may have many related test methods for the function. If it is a database-related function, I will have ALL the CRUD functions in one test class.
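That grouping can be sketched with an in-memory map standing in for the database (all names hypothetical): one class holds all the CRUD tests for a feature and shares a single set-up/clean-up, instead of rebuilding the fixture around every method.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of grouping all CRUD tests for one feature in a single class with a
// shared class-level fixture. The HashMap is a stand-in for database records.
public class UserCrudTestSketch {
    static Map<Integer, String> db; // shared fixture for the whole group

    static void setUpClass()    { db = new HashMap<Integer, String>(); db.put(1, "Alberto"); }
    static void tearDownClass() { db = null; }

    static void testCreate() { db.put(2, "Gori"); if (db.size() != 2) throw new AssertionError(); }
    static void testRead()   { if (!"Alberto".equals(db.get(1))) throw new AssertionError(); }
    static void testUpdate() { db.put(1, "Al"); if (!"Al".equals(db.get(1))) throw new AssertionError(); }
    static void testDelete() { db.remove(2); if (db.containsKey(2)) throw new AssertionError(); }

    public static void main(String[] args) {
        // Set up once, run all related tests against the shared fixture,
        // clean up once: the pattern described above.
        setUpClass();
        try { testCreate(); testRead(); testUpdate(); testDelete(); }
        finally { tearDownClass(); }
        System.out.println("ok");
    }
}
```

Note that testDelete() depends on testCreate() having run; that trade-off (shared state versus fully independent tests) is exactly what this thread has been debating.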
