Friday, April 29, 2011

I love the Rands in Repose post about Three Superpowers. If you don’t already read his blog, you really should; he’s a great writer and always has deep, insightful advice. Often, due to a cruel world, I find myself needing to fix problems but unable to issue a Mandate. The Debate generally drives me nuts; I tend to only go there when the issue is really murky and subjective. What I rely on most often is the Nudge. On my good days, I can get in, nudge, and get right back out again before anyone has noticed. The downside is no credit, but at least things are moving forward, and I can live with that.

After I read that post, I started thinking about other techniques that I use to get things accomplished. One common problem is with communication. When people talk, they’ll say anything. That’s fine, but in an industry where we need certainty, a verbal conversation leaves room for one of the parties to change their story later. What I often find necessary to lock things down is to force someone to ‘put it in writing’. This has two benefits: first, it is carved in stone, so you can refer to it if there are future problems; second, it makes the other person think about what they are saying. Shooting off a quick answer is easy, but having to commit to it is hard. Often the act of writing it down changes it significantly. Verbal conversations, either in person or on the phone, are fluid things, subject to change based on recollection. Email amps it up a notch, and a formal document of some type takes it even further.

In programming, we don’t want to lose time on work that doesn’t have value, but the value of some work isn’t always obvious until things go wrong. Ideas start with conversations, but need to be solidified in writing before a lot of resources are committed. Programmers often complain about changes and scope creep, but a great deal of those problems come from faulty communications or shallow thinking. We can control that by ensuring that we lock it down as early as possible. It’s a superpower that we can easily possess.

Wednesday, April 27, 2011

When you are developing software, one problem you don’t want is for an error to occur but the processing to continue until things get completely scrambled. This makes it really hard to track down the ‘where’, ‘when’ and ‘why’ of the initial error. With that in mind, we often build systems to stop immediately on the first suspicious result. Doing this makes it easy to find the problem and forces you to fix it before it goes into production. It helps in building better quality at a faster pace.

However useful, this is the worst thing for a real production system. If some small piece is broken, you don’t want the whole system grinding to a halt in the middle of the night. That type of extreme reaction will only create a slew of secondary problems, which will result in a lot more time being consumed than necessary. Since time is a scarce resource in IT, we have to use it wisely. Systems should be resilient. They should continue working where possible. Actions, data, etc. should be atomic, so that ordering or partial completion doesn’t matter. Failures should be monitored, but routed around, to ensure that small problems don’t escalate into major issues. Small problems are the norm for most computing environments.

A lot of thinking has gone into our industry. We have concepts like transactions, and the ACID properties, that exist to make sure we sub-divide the processing in a way that failures are isolated. We’ve been building fault-tolerant systems for decades. Together, this body of knowledge allows organizations like Google to massively distribute their systems in a way that significantly reduces the impact of hardware or software problems. These are well-understood, well-documented issues.

Clearly, since we need both types of error handling, the choice of how to handle errors differs depending on the context. A system in development should stop immediately, while one in production should make every effort to keep going.
A well-written system also ‘silos’ its functionality so that problems in one area of the system don’t overflow into others. All systems should have these two basic modes of operation. This obviously makes coding a little more difficult and adds to the amount of work, but if done well, it can reduce the operational and support efforts. Software always has bugs, so it is important to accept this and build with it in mind. Also, operations personnel are users of the system. A well-written system not only makes it easy for the users to accomplish their tasks, but also makes it easy to deploy and keep running. Both goals are important.
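A minimal sketch of these two modes of operation, assuming a hypothetical per-record pipeline (the names `process_batch` and `transform` are just for the sketch): in ‘dev’ mode the first suspicious result stops everything immediately, while in ‘prod’ mode the failure is logged and routed around so the rest of the batch keeps going.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("pipeline")

def transform(rec):
    # Hypothetical per-record work; negative input stands in for bad data.
    if rec < 0:
        raise ValueError("negative input")
    return rec * 2

def process_batch(records, mode="dev"):
    results, failures = [], []
    for rec in records:
        try:
            results.append(transform(rec))
        except ValueError as exc:
            if mode == "dev":
                raise                        # fail fast: surface the bug now
            failures.append((rec, exc))      # prod: isolate and keep going
            log.warning("skipped record %r: %s", rec, exc)
    return results, failures
```

The same code base serves both contexts; only the mode switch changes whether a small problem halts everything or gets monitored and routed around.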

Thursday, April 21, 2011

A while ago I was in a discussion with one of my friends. He said something interesting that really caught my attention. His code was in testing and had turned up some corner-cases. While developing the code he became aware of these issues, but as he said, “they weren’t in the requirements”, so he didn’t address them.

I found myself surprised. There is -- I strongly believe -- for all code an inherent set of basic requirements that goes without saying. The users don’t need to specify them; they just come in with the expectation that the code will be able to withstand normal wear and tear. That it will be usable.

We don’t, for instance, need to specify that a GUI should be easy to use. That requirement is fundamental; it is right there in the definition of GUI, and we certainly don’t need people to enumerate what that means in general. We don’t need people to specify that a batch script should be restartable and idempotent. Both qualities go back decades and are important to ensuring that the system is operationally sound. We don’t need people to specify that installation should be simple and repeatable. Those were lessons learned a long time ago.

I could go on; there is a massive amount of these well-known ‘best practices’, ‘conventions’, ‘acceptable behavior’, etc. We already know a huge amount about the proper way to build software so that it lives up to its usage; so that it is industrial strength.

A set of requirements from the users, managers, or whoever, is a set of things that get merged with this existing knowledge base -- these default requirements. And for experienced developers, although there are no official standards for these necessities, we know what they are and why they are important. When we are commissioned to write or enhance a code base, these fundamentals need to be in place. They don’t need to be enumerated by non-experts; they just need to be there.

On occasion, a few of the stakeholder requirements may contradict the default ones.
That happens, particularly when stakeholders don’t have enough experience with software. In such collisions, the default requirements hold precedence, and it is up to the programmers to bring the issue to the stakeholders. If there are good solid reasons for altering the defaults and someone makes a reasonable case, then it can be done that way; otherwise, we’re professional software developers, so we need to build things that work. So the defaults are important.

For me, in my friend’s situation, I would have gone back early with the corner-cases and the way I intended to solve the problems. I’d be open for debate, but I prefer to go in with a viable solution; it makes it faster to get an agreement. If it’s a domain issue, since I’m not the expert, I would be extremely flexible to what they need. With a technical issue, I would ensure that any alternatives don’t compromise the system. But in either case, one should never violate good practices unless there is a valid reason to do so. The most important aspect is that the issue should come up for discussion, and that the resolution should be noted and followed.

Requirements in that sense are those things above and beyond the fact that we are building industrial-strength software. Even before the project was a glimmer in someone’s eye, there was a huge amount of basic work necessary to make it successful. We shouldn’t dump the responsibility for knowing what that is on the stakeholders; it’s part of our job.
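As a concrete illustration of one such default requirement, here is a minimal sketch of a restartable, idempotent batch step (the names and the uppercase transform are hypothetical): each unit of work checks whether its output already exists, and commits via an atomic rename, so re-running the whole script after a crash neither duplicates work nor leaves partial files behind.

```python
import os

def convert_file(src, dst):
    if os.path.exists(dst):
        return False                 # already done on a previous run: skip
    with open(src) as f:
        data = f.read()
    tmp = dst + ".tmp"
    with open(tmp, "w") as f:        # write to a temp file first, so a crash
        f.write(data.upper())        # mid-write never leaves a partial dst
    os.replace(tmp, dst)             # atomic rename commits the step
    return True
```

Nobody writes this into a requirements document; it is simply what an operationally sound batch script looks like.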

Friday, April 15, 2011

I just saw this interesting blog post, here, that states that there are over 9 million Java programmers. That number doesn’t ring true with me; my last recollection -- from who knows where -- was that there were roughly a million or so programmers across all languages/technologies.

For this type of number, I’d like it to be restricted to people that are making a “living” off of programming, doing it on a full-time or nearly full-time basis. The number of hobbyists and students is interesting, but those numbers fluctuate widely with popularity. There are also a lot of 'coders' in operations and support roles, but they tend to write small works that smooth over bumps or glue together existing pieces. It's the number of people doing the heavy lifting that I am curious about.

No doubt it is a difficult number to find. You can’t just measure downloads, or up-and-coming job ads. And there is a huge segment of older programmers out there making a good living from mature technologies like COBOL and mainframes, but most of them have a near-zero web presence. Still, they are an important group because their technologies run a good majority of the mission-critical applications, and they often have more refined engineering skills and processes.

If anyone has any links or comments, I’d be really interested. I’ve seen a lot of resources, but I’ve never found one that I felt really captured this number correctly.

Thursday, April 7, 2011

One striking feature that I’ve always found with elegant code is how it manages to do so much with so little. In its expression it is dramatically stingy, yet in its functionality it is extremely broad and applicable to a large number of similar problems.

Software ultimately is just a very long sequence of instructions for a computer to follow. When building up these sequences, we can code each one very precisely for a specific instance of a specific problem, or we can take a step back and generalize them to allow for a wider range of reuse. All computer languages do this by providing sets of standard libraries that accomplish common tasks. Many of these libraries find simple, elegant abstractions that allow for a huge range of functionality by providing an underlying set of consistent primitives, on which programmers can build larger, more specific blocks of code. Get the primitives right and the next layer gets cleaner and easier to build. Get them wrong, and the eccentricities percolate into the code above, making it more difficult.

Great examples of reuse in code understand this well. Instead of pounding out an endless series of nearly duplicated lumps of code, the programmers have built layer after layer of clean, well-thought-out Lego-like blocks that get assembled into ever higher functionality. The key to doing this isn’t allowing for a mass of configurable options, or forcing a large amount of separate declarative bindings. It isn’t allowing lots of arguments to the methods, or wiring everything up with arbitrarily convoluted underlying rules. Rather, when done well, it usually comes from a simple clean abstraction, iron-like consistency and a serious amount of economy of expression, both in the internals and in how it is used externally.

That is, a simple and consistent abstraction yields a small number of primitives that are easily interwoven at a specific level.
And handled correctly, it encapsulates all of the nasty details while providing an obvious set of default behaviors. Simple things are simple to do. At its best, the calling code reads very straightforwardly and is naturally self-documenting. You don’t have to know the underlying details or read reams of incoherent documentation to get a correct sense of the underlying behavior. You can use it, it is obvious, and you can move on.

This often happens with well-written libraries, but the principle can be applied to all software code. A large application may have many layers, but each layer stands on its own as a simple, readable and easily understandable work. One way to judge the degree of success in achieving this is to give the work to another programmer who has never seen it before. If they can get a fairly good grasp of what the code is doing nearly instantly, then it has these qualities. If they are confused, or turned around, or need extra documentation, then the code does not speak for itself very clearly, or the abstraction is too convoluted (or the programmer is too junior for this type of abstraction).

To get really strong economy of expression, the primitives must be small in number, non-overlapping, and they must cover all of the possible operations. A non-programming example is +, -, * and /. With just four operators, one can do a large amount of arithmetic. AND, OR and NOT is another example of a consistent family of primitives (as is NAND or NOR alone). Applying these primitives to objects (mathematical in this case) in various combinations provides for a wide range of applications. There are no oddly overlapping operators like addOneAndMultiplyByTwo, since at the level they are getting used, it would be unnecessary and confusing. Simple, well-thought-out primitives lead to very clean implementations underneath. They allow any layer built on top to be clean and readable.
They simplify, encapsulate and self-document some underlying complexity, while providing a strong base for building on.

When done at each level of a large system, it makes it really easy to view each level individually, with its inherent complexity, and understand how it operates with respect to the level below. It makes it easy to change, without having to worry about unintended side-effects. It makes it easy to extend, to cover new functionality. This, by definition, is the essence of elegance. Clean and simple with no weird bits or thingys that will get posted to WTF. Done well, further development work on an elegant system becomes interesting, fun and fast, as opposed to being time-consuming, painful and dangerous.
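As a toy illustration of how a tiny, consistent primitive set can cover a whole space of operations, NAND alone is functionally complete: NOT, AND and OR can each be built from it, layer by layer (the function names here are just for the sketch).

```python
# One primitive at the bottom layer.
def nand(a, b):
    return not (a and b)

# The next layer is built entirely from that primitive.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))
```

Each layer reads cleanly on its own, and the higher operations inherit their correctness from the single well-chosen primitive beneath them.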