Edition 255

I’m a programmer, and therefore live in a world of worst-case scenarios. Networks fail, hard drives crash, and those troublesome human beings persist in disgorging all sorts of nonsense into every interface my software has. Computers are incredibly complex and delicate systems, and they fail in dramatic and unexpected ways, without warning, every day. No, no, I’m fine. My eye twitches like that all the time.

A few months ago I saw an email asking if someone could unwrap this one line of JavaScript. It was created by Mathieu ‘p01’ Henri, author of www.p01.org, where you can find this and many other cool demos.

Few things are guaranteed to increase all the time: the distance between stars, entropy in the visible universe, and business requirements. Many articles say "Don't over-engineer" but don't explain why or how. Here are 10 clear examples.

Automated testing is a core part of writing reliable software; there's only so much you can test manually, and there's no way you can test things as thoroughly or as conscientiously as a machine. As someone who has spent an inordinate amount of time working on automated testing systems, for both work and open-source projects, this post covers how I think about them: which distinctions are meaningful and which aren't, which practices make a difference and which don't, building up to a coherent set of principles for automated testing in any software project.
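The "thoroughly and conscientiously" point is the heart of it: a machine will re-run the same checks, identically, on every change. A minimal sketch in Python, with a hypothetical `slug` function standing in for the code under test:

```python
def slug(title: str) -> str:
    """Hypothetical function under test: lower-case and hyphenate a title."""
    return "-".join(title.lower().split())

# A machine happily checks every case, identically, on every run --
# the kind of thoroughness manual testing can't sustain.
CASES = {
    "Hello World": "hello-world",
    "  padded  ": "padded",
    "already-slugged": "already-slugged",
}

def test_slug():
    for raw, expected in CASES.items():
        assert slug(raw) == expected, (raw, expected)

test_slug()
```

In practice you would hand `test_slug` to a runner such as pytest, but the principle is the same at any scale: encode the expectation once, then let the machine enforce it forever.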

Remember that actors interact only through message passing, so the only way to observe an actor's behavior is through the messages sent to it and the replies that come back. So how do you test actors? You send them messages :) Send the actor a message, wait for its reply, and assert on that reply.
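The pattern above can be sketched without any particular actor framework. Here is a toy actor built on a mailbox queue and a worker thread (both the `EchoActor` class and its upper-casing behavior are invented for illustration); the test interacts with it only by sending a message and asserting on the reply:

```python
import queue
import threading

class EchoActor:
    """Toy actor: reads (message, reply_to) pairs from its mailbox
    and replies with the message upper-cased."""

    def __init__(self):
        self.mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            message, reply_to = self.mailbox.get()
            reply_to.put(message.upper())

def test_echo_actor():
    # Test the actor the only way its contract allows:
    # send it a message, then assert on the reply that comes back.
    actor = EchoActor()
    reply_to = queue.Queue()
    actor.mailbox.put(("ping", reply_to))
    assert reply_to.get(timeout=1) == "PING"

test_echo_actor()
```

Note the `timeout` on the receive: because message passing is asynchronous, the test must wait for the reply rather than inspect the actor's internal state directly.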