One thing I noticed is that your scientific notation handling is error prone. If the input is anything other than #e# (where # is a number), the program crashes. At a minimum it should report that the input is invalid (not crash). I'd say it should also accept e# (i.e. treat it as 1e#).
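A defensive parse along these lines (a hypothetical Java sketch; `SciParser` and `parseSci` are my invention, not the app's actual code) rejects bad input instead of crashing and treats a bare `e5` as `1e5`:

```java
public class SciParser {
    /** Parses "#e#" scientific notation; a missing mantissa defaults to 1.
     *  Returns null for invalid input instead of throwing. */
    public static Double parseSci(String s) {
        if (s == null || s.isEmpty()) return null;
        // Treat a leading "e" as an implied mantissa of 1, e.g. "e5" -> "1e5"
        if (s.charAt(0) == 'e' || s.charAt(0) == 'E') s = "1" + s;
        try {
            return Double.parseDouble(s);   // handles "1e5", "2.5E-3", plain "42"
        } catch (NumberFormatException nfe) {
            return null;                    // invalid: report it, don't crash
        }
    }
}
```

The caller then shows an "invalid input" message whenever `parseSci` returns null.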

It'd also be nice to have a hexadecimal conversion function, an nth root function (take an arbitrary root of a number), and a log base 2 function (or arbitrary base).
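For what it's worth, each of those is only a line or two of Java (a sketch under my own naming, not the app's code):

```java
public class ExtraOps {
    /** Hexadecimal representation of an integer value. */
    public static String toHex(long v) {
        return Long.toHexString(v);
    }

    /** x^(1/n): the nth root, e.g. nthRoot(27, 3) is approximately 3.
     *  Returns NaN for a zero index or a negative radicand. */
    public static double nthRoot(double x, double n) {
        if (n == 0 || x < 0) return Double.NaN;  // undefined / complex result
        return Math.pow(x, 1.0 / n);
    }

    /** Logarithm of x in an arbitrary base, via the change-of-base identity:
     *  log_b(x) = ln(x) / ln(b). Base 2 is just logBase(x, 2). */
    public static double logBase(double x, double base) {
        return Math.log(x) / Math.log(base);
    }
}
```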

Looking forward to seeing where this goes.

If you can post the source code, we can help you go through it (I think it's in Java, right?). If mag can put invalid data into your app and crash it, then you're not doing input validation. Did you create tests for these cases? At the very least, you should be checking for empty and malformed input.
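As a hypothetical sketch of what such tests look like (plain `assert`s rather than a real JUnit suite; `tryParse` is an invented stand-in for the app's parser, not its real code):

```java
public class InputChecks {
    /** Hypothetical stand-in for the app's parser: null signals invalid input. */
    public static Double tryParse(String s) {
        try {
            return Double.valueOf(s);      // throws on null, empty, or garbage
        } catch (RuntimeException e) {     // NumberFormatException or NPE
            return null;                   // report invalid, don't crash
        }
    }

    public static void main(String[] args) {
        // The kinds of inputs worth a test each before release:
        assert tryParse(null) == null : "null input must not crash";
        assert tryParse("") == null : "empty input";
        assert tryParse("1e") == null : "truncated exponent";
        assert tryParse("foo") == null : "garbage text";
        assert tryParse("1e5") != null : "valid notation must still parse";
        System.out.println("all input checks passed");
    }
}
```

Run with `java -ea InputChecks` so the assertions are enabled.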

You and your tests! You'll be happy to note that I wrote an error logging/email library and put a couple of tests in it! However, you'll be sad to note that we aren't deploying the tests with the library.

To the OP: Data validation is a must though. The biggest issue I see facing mobile developers is the random crashing that seems to plague apps that would otherwise be really useful. I have uninstalled many an application because it was not validated properly and became unusable. It often made me wonder if anybody actually tested it before they released it to the wild.

Quote:

It often made me wonder if anybody actually tested it before they released it to the wild.

Well, my "company" of Android developers consists of one person (me), lol. I tested it and cleaned up quite a lot of errors, but not all of them. That's why this app is still marked as version 0.8... I'm kind of using my users as beta testers.

And this is why we test.

Honestly, I don't understand why unit testing was never really part of my workflow before. Being able to measure code coverage is a wonderful thing, especially in a statically typed language like C# or Java (the latter being what Android uses), or even Objective-C. All you have to do is create mocks (or fakes) that mimic the objects your code depends on, then test against them. That verifies your code does exactly what it claims to, and lets you test edge cases like null values. It's not imperative that you reach 100% code coverage, but if you're testing with code coverage in mind, then you can verify that your code behaves as it should.

Sorry for hijacking your thread, Hexorg. To be honest I haven't looked at your app, but I'm glad to see that you're learning by doing -- that's the only effective way to learn.

DJSPIN80 wrote:

Honestly, I don't understand why unit testing was never really part of my workflow before. Being able to measure code coverage is a wonderful thing, especially in a statically typed language like C# or Java (the latter being what Android uses), or even Objective-C. All you have to do is create mocks (or fakes) that mimic the objects your code depends on, then test against them. That verifies your code does exactly what it claims to, and lets you test edge cases like null values. It's not imperative that you reach 100% code coverage, but if you're testing with code coverage in mind, then you can verify that your code behaves as it should.

Reading this scared me so much I logged in to post a response! Code coverage is not a good test suite quality metric. In fact, it's an outright dangerous one.

The statement that "if you're testing with code coverage in mind, then you can verify that your code behaves as it should" could not be more wrong, and I'm really frightened to see a skilled, experienced software engineer lulled into such a false sense of security.

Many bugs do indeed lie in simple corner cases, and complete code coverage will find a lot of bugs. But really, this does not verify your code behaves as it should. Take the following (extremely simple) example:

Code:

#include <pthread.h>

static pthread_mutex_t something = PTHREAD_MUTEX_INITIALIZER; /* non-recursive */

void bar_1(void) { pthread_mutex_lock(&something); }
void bar_2(void) { pthread_mutex_lock(&something); }

/* Calling foo(1, 0) and foo(0, 1) yields 100% line coverage,
   but only foo(1, 1) tries to take the lock twice and deadlocks. */
void foo(int a, int b) { if (a) bar_1(); if (b) bar_2(); }

See the problem? You can have a test suite with 100% code coverage that will still never encounter the deadlock. I think code coverage is such a bad metric not because it potentially leaves so many bugs uncovered, but because it leaves some bugs uncovered while leading programmers to incorrect conclusions about their code's correctness.

A better metric would be feasible path coverage, i.e. the percentage of feasible paths through a program's CFG which are covered by the test suite. Of course, this has its own problems. For one, it's extremely difficult to quantify the number of feasible paths, and determining whether a single path is feasible can be an intractable problem (probably the biggest reason it isn't used as a metric). For two, the number of feasible paths generally grows exponentially with the size of the program, making it infeasible to generate, let alone manually develop, a large enough set of tests to obtain any significant path coverage. And finally, it's also an unsound metric; 100% feasible path coverage does not imply the absence of any bugs, or even the absence of simple bugs like OOB errors:
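The example snippet appears to have been lost from the post; the following is a hypothetical sketch of the kind of data-dependent bug meant (in Java, where the symptom is an uncaught exception rather than an exploitable memory overwrite):

```java
public class Histogram {
    /** Counts scores 0..99 into ten buckets. Hypothetical sketch: every test
     *  whose scores fall in 0..99 exercises exactly the same control flow as
     *  a failing input, so path coverage never distinguishes them. Yet a
     *  score of 100 indexes buckets[10] and throws
     *  ArrayIndexOutOfBoundsException, crashing the app. */
    public static int[] bucketize(int[] scores) {
        int[] buckets = new int[10];
        for (int s : scores) {
            buckets[s / 10]++;   // no bounds check: OOB for s < 0 or s > 99
        }
        return buckets;
    }
}
```

The bug depends on data values, not on which branch is taken, which is why no amount of path coverage guarantees a test will hit it.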

This is a perfectly plausible bug which could be missed even by test suites with 100% feasible path coverage. Even worse, it's potentially exploitable.

Anyway, I got off on a tangent. My point was simple. Test suites are a good idea. Test suites with 100% code coverage are an even better idea. Test suites with 100% feasible path coverage are a pipe dream, but an even better idea still. Yet none of them provide verification of your software.

Back to occasional lurking... sorry again for hijacking your thread, OP.

Quote:

The statement that "if you're testing with code coverage in mind, then you can verify that your code behaves as it should" could not be more wrong, and I'm really frightened to see a skilled, experienced software engineer lulled into such a false sense of security.

True, if you base your perception on code coverage alone. Pardon my semantics; I agree with you that code coverage by itself is not the way to go. What I meant is that code coverage should be built on meaningful tests: testing for behavior, not simply blocks of code.

Quote:

Many bugs do indeed lie in simple corner cases, and complete code coverage will find a lot of bugs. But really, this does not verify your code behaves as it should.

This is why I'm a big proponent of meaningful tests. I don't believe testing for LoC coverage unveils anything; it creates a false sense of security. That's like building a bridge and judging the strength of the arch by the amount of concrete you poured in. It's not a reliable test until you really try to break it.

Also, a lot of bugs do lie in simple corner cases, but code coverage tests don't unveil them at all. It's about validating behavior, and your tests should reflect that. When your code is "moving", so to speak, you can really start to see where the null values come up or when an exception is thrown.

Anyways, I should have clarified what I meant; I'm 100% agreeing with you, so that's my bad.

Depends. Faking/mocking honors the service boundaries of your application. Unit testing is supposed to validate the behavior you coded against. You're not in the business of testing the entire stack, just the calls you're making; this is where faking/mocking comes in. If your business logic pulls data from a service (whether it's an ASMX, WCF, or SOAP service), you don't care. Fake the service, return the values you expect, and test the behavior. You can get really granular, because you can start plugging in invalid values as well. By doing so, you're isolating the behavior of your code, so you can see how it actually behaves. The minute you start extending your tests to include the database, web service calls, etc., you've done integration testing.
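In Java terms, faking a service behind an interface looks something like this (a hypothetical sketch; `RateService`, `Converter`, and every name in it are invented for illustration, not from any real app in this thread):

```java
// Code against the interface, not a concrete service client.
interface RateService {
    double getRate(String currency);
}

class Converter {
    private final RateService rates;

    Converter(RateService rates) { this.rates = rates; }  // inject the dependency

    double toUsd(double amount, String currency) {
        double r = rates.getRate(currency);
        if (r <= 0) throw new IllegalArgumentException("bad rate for " + currency);
        return amount * r;
    }
}

class ConverterTest {
    public static void main(String[] args) {
        // The fake honors the service boundary: no network, just canned values.
        RateService fake = currency -> "EUR".equals(currency) ? 1.25 : -1.0;
        Converter c = new Converter(fake);

        assert c.toUsd(4, "EUR") == 5.0;      // behavior under a valid rate
        try {                                  // behavior under an invalid rate
            c.toUsd(4, "XYZ");
            assert false : "expected IllegalArgumentException";
        } catch (IllegalArgumentException expected) { }
        System.out.println("ok");
    }
}
```

Because `Converter` only sees the interface, the same test shape works whether the real implementation is a web service call, a database lookup, or anything else.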

Also, you don't have to write fake/mock objects by hand. You can use a tool like Moq, RhinoMocks, or FakeItEasy (all in .NET).

This reminds me: are you testing against interfaces or concrete classes? As much as possible, I go the distance and pass around interfaces rather than concrete objects. That way, every layer of my application is testable.
