A personal research journal-cum-blog for my general project of introducing computer software developers to Philosophy and how it can apply to day-to-day programming and design. It is named after my special project of applying Existentialism's motto "existence precedes essence" to generate a common theoretical approach to a diverse range of programming topics.

While reading Philosophy 101 material about Truth with a capital "T", and the non-traditional logics that adopt new notions of truth, we of course arrive at Fuzzy Logic, with its departure from simple binary true/false values and its embrace of an arbitrarily wide range of values in between.

Contemplating this gave me a small AHA moment: Unit Testing is an area with an implicit assumption that "Test Passes" has either a true or false value. How about Fuzzy Unit Testing, where some numeric value in the 0...1 range reports a degree of pass/fail-ness, i.e. a percentage pass/fail for each test? For example, tests of algorithms that predict something could be given a percentage pass/fail based on how well the prediction matched the actual value. Stock market predictions and bank customer credit default predictions come to mind. This sort of testing of predictions about future defaults (i.e. credit grades) is just the sort of thing that the Basel II accords are forcing banks to start doing.
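To make the idea concrete, here is a minimal sketch of a fuzzy unit test in Python. The scoring function, the linear decay, and the numbers (a hypothetical credit model predicting a 4.2% default rate against an observed 5.0%) are all illustrative assumptions, not any real framework's API:

```python
def fuzzy_score(predicted, actual, tolerance):
    """Return 1.0 for a perfect prediction, decaying linearly to 0.0
    once the absolute error reaches `tolerance`."""
    error = abs(predicted - actual)
    return max(0.0, 1.0 - error / tolerance)

def test_default_rate_prediction():
    # Hypothetical data: the model predicted a 4.2% default rate,
    # the observed rate was 5.0%, and we tolerate up to 2 points of error.
    score = fuzzy_score(predicted=4.2, actual=5.0, tolerance=2.0)
    print(f"test_default_rate_prediction: {score:.0%} pass")
    return score

score = test_default_rate_prediction()
assert score > 0.5  # still mostly "passing", but no longer a binary verdict
```

A fuzzy suite would then report these percentages per test rather than a single red/green bar, so a model drifting from 95% to 70% accuracy shows up as a trend instead of a sudden failure.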

Another great idea (if I do say so myself) that I had a few years ago was that there is extra meta-data that could/should be gathered while running unit test suites; specifically, the performance characteristics of each test run. The fact that a test still passes but is 10 times slower than in the previous run is a very important piece of information that we don't usually get. Archiving and reporting on this meta-data for each test run can yield very interesting metrics on how code changes are improving or degrading the performance of various application features and behaviors over time. I can now see that this comparative performance data would be a form of fuzzy testing.

POSTSCRIPT - July 6th, 2010

I have recently come across several types of meta-data that Apple should have been tracking during their testing.
1) CPU Temperature
2) Battery Drainage Rate
3) Antenna Receiver Strength
I have just had to deal with iMac, iPod, and iPhone issues, one for each of those respectively. Software updates introduced new behavior that overheated old iMacs and drained iPod batteries, and everyone now knows about the iPhone antenna issues. Had Apple used the techniques proposed above, they would have caught these problems.

It wasn't obvious to me how a change in software could have such a dramatic effect on temperature until I started seeing all the hoopla about new browsers being faster because they have been rewritten to take advantage of GPU (graphics processing unit) processing power instead of just CPU circuitry. The GPU would be a major piece of chip real estate to change from cold and idle to hot and bothered. As described here in SD Times, Apple's new OS is one of the riders on this bandwagon, which would explain the overheating of old Macs and iPods.