@Philip: "not implemented yet" is probably the most common cause for inconclusive unit test results - either the feature hasn't been implemented yet, or the unit test is just an auto-generated stub.
– tdammers, Aug 14 '12 at 19:06

Also, if the setup for your test fails (and not the test itself), then that could warrant calling the result inconclusive instead of failed.
– MetaFight, Jul 12 '13 at 7:46

It depends on what you want to see. It's always nice to have at least the option of seeing all the results (a complete list of all tests that have been run, with their status), and if you run the tests as a developer, you are probably also interested in error details for the failed ones.

You do want a way to boil it all down to one single exit status, though: either everything passes (which means a global pass) or one or more tests do not (which means a global failure). This is especially important if you want to hook your test suite into a Continuous Integration framework or similar automated checking mechanism.
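As a sketch of that "boil it down to one status" idea, assuming a hypothetical `results` list (a real runner would collect these while executing the tests):

```python
# Hypothetical results; a real runner would gather these as tests execute.
results = [
    ("test_login", "pass"),
    ("test_logout", "pass"),
    ("test_signup", "fail"),
]

# Show the complete list of tests and their status...
for name, status in results:
    print(f"{name}: {status}")

# ...then reduce everything to a single exit status for CI:
# 0 means every test passed, non-zero means at least one did not.
all_passed = all(status == "pass" for _, status in results)
exit_code = 0 if all_passed else 1
```

A CI system would then receive `exit_code` via `sys.exit(exit_code)` and treat anything non-zero as a global failure.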

So, long story short: If you can, make it configurable. Most off-the-shelf solutions have this option, and if you're rolling your own, it's not hard to do either.
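If you are rolling your own, making the verbosity configurable can be as small as one flag. A minimal sketch; the flag name (`--verbose`) and the result format are assumptions, not taken from any particular framework:

```python
import argparse

# Hypothetical CLI for a home-grown runner: one flag switches between
# a terse summary and per-test detail.
parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="store_true",
                    help="print every test result, not just the summary")

def report(results, verbose):
    """Print results at the requested detail level; return the failure count."""
    if verbose:
        for name, status in results:
            print(f"{name}: {status}")
    failed = sum(1 for _, status in results if status == "fail")
    print(f"{len(results)} tests, {failed} failed")
    return failed

args = parser.parse_args([])  # no flags given: terse summary only
report([("test_a", "pass"), ("test_b", "fail")], args.verbose)
```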

Use defaults. They're usually good, succinct, and convey exactly the information you need. Look at nose in Python (command-line examples are easy to find) or the test tools in Visual Studio 2010. In my experience, they're quite good.
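For instance, Python's built-in unittest (a close relative of nose) already has succinct defaults: one character per test (`.` for a pass, `F` for a failure) followed by a one-line summary. A small self-contained example:

```python
import unittest

class TestArithmetic(unittest.TestCase):
    # Two trivial tests, just to exercise the default reporter.
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_upper(self):
        self.assertEqual("ok".upper(), "OK")

# The default TextTestRunner prints "..", then something like
# "Ran 2 tests in 0.000s" and "OK" -- succinct out of the box.
runner = unittest.TextTestRunner()
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = runner.run(suite)
```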

Less is more. Don't clutter results with long-winded error messages or needless warnings. Again, look at the default output of the frameworks mentioned above to get a good idea of what looks good.

If you're building a unit testing framework:

Above all, make things intuitive and easy to understand. Simply presenting pass/fail/skipped results for each test gets you surprisingly far. Again, less is more: gussied-up presentations usually won't help much. Even simple colour-coding of text can help immensely, as can basic print-outs to a command window.
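That colour-coding can be a few lines of ANSI escape codes; the status names and colour choices below are arbitrary examples, not any framework's convention:

```python
# Map each status to an ANSI colour code (works in most terminals).
COLORS = {"pass": "\033[32m",     # green
          "fail": "\033[31m",     # red
          "skipped": "\033[33m"}  # yellow
RESET = "\033[0m"

def format_status(name, status):
    """Return 'name: status' with the status wrapped in its colour code."""
    color = COLORS.get(status, "")  # unknown statuses stay uncoloured
    return f"{name}: {color}{status}{RESET}"

for entry in [("test_add", "pass"), ("test_div", "fail"), ("test_io", "skipped")]:
    print(format_status(*entry))
```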