Archive for May, 2013

Discussing Programming topics with non-programmers

The other day I was in good company with a business owner, who is also a programmer, and a business controller, who isn’t a programmer. We were discussing one of my favorite topics: Software Quality.

Naturally the software must do what it is intended to do, but there is a hidden issue which to me is almost as important: functionality must be placed in the right areas.

The analogy became looking for $10 in a person’s wallet.

To the programmer given this task, the basic work (find the person, find the person’s wallet, check whether the wallet contains $10) has to be done regardless of where the functionality is placed. To the CPU the steps will be loaded in sequence anyway, so the “correct” position of the code is another matter.

Now, in real life, if I ask someone if they have $10 in their wallet, they are able to check their own wallet and inform me. On the other hand, I could just take their wallet and check myself. Naturally this would be a violation of privacy - and I’d have to know where they keep their wallets.

In the same sense, a controller could implement the required functionality itself, prying into the objects and violating their privacy. Or the objects themselves could have the functionality implemented. The first leads to violation of the Law of Demeter, tight coupling, and too much knowledge, which in turn lead to a higher risk of introducing bugs, higher maintenance costs, intricate dependencies, and a big ball of mud.

The example is a bit far-fetched, but it served to show why it is important to have clean code in more than one sense.
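The two placements of the functionality can be sketched roughly like this; all class and method names here are my own illustration, not code from the original discussion:

```java
// Sketch of the two designs: the Controller digging into the Wallet itself,
// versus asking the Person. All names are illustrative.
class Wallet {
    private final int dollars;
    Wallet(int dollars) { this.dollars = dollars; }
    int getDollars() { return dollars; }
}

class Person {
    private final Wallet wallet;
    Person(Wallet wallet) { this.wallet = wallet; }

    // Exposing the wallet lets callers rummage through it themselves.
    Wallet getWallet() { return wallet; }

    // Letting the person answer keeps the wallet an internal matter.
    boolean has(int amount) { return wallet.getDollars() >= amount; }
}

class Controller {
    // Violates the Law of Demeter: Controller must know both Person and Wallet.
    static boolean checkByTakingWallet(Person p) {
        return p.getWallet().getDollars() >= 10;
    }

    // Respects it: Controller knows only Person; Person knows Wallet.
    static boolean checkByAsking(Person p) {
        return p.has(10);
    }
}
```

Both versions perform the same basic work, which is the point: the CPU does not care, but the dependency structure does.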

In the first approach the Controller has to know of both Person and Wallet. In the second, the Controller needs to know of Person, and Person needs to know of Wallet. Even though there is the same number of dependencies, the context the Controller has to carry is much larger in the first approach.

OOP is dead

A while ago Pinterest suggested I read Object Oriented Programming is Dead, which is a bit dated but still relevant. I just think that there are more reasons why OOP is dead - and yet still alive.

First off, we killed OOP by trying to fit a relational database as the persistence layer - this leads to data transfer objects, which are mostly grouped global variables or Java beans, and have nothing to do with encapsulation.

Second, we killed OOP by placing logic in the wrong classes, classes in the wrong hierarchy, and generally forgetting what OOP is about.

Third blow, we apparently insist on imperative-style programming in OOP, which leads to the issues described above.

Fourth stab, we seem to be grounded in the snapshot state of databases. That is, an object in a database has a single state, without any prior history. This is similar to loading and overwriting registers, and is prevalent in the UPDATE statement. You actually have to twist, turn, and contort the default behavior of an RDBMS to get a history/audit trail for the values.

The final death blow was delivered by Martin Odersky - who also kindly revived OOP in conjunction with FP - in his presentation Objects and functions, conflict without a cause. He has done so on other occasions as well, touting the Scala horn.

Rich Hickey - the Clojure guy - seems to at least back up the FP and OOP notion - primarily of stateless objects - and Datomic seems like a brilliant choice for a persistence layer.

OOP is alive

The Object Oriented notion is extremely important as it is what should drive SOA services. They should be defined by their interfaces and encapsulate data and implementations. As Steve Yegge mentioned, Amazon is quite good at this, and Google not quite as good.

I believe SOA is the right level of re-usability of software, and we will have to get much better at it as more and more mash-ups are wanted. That means we have to accept interoperability at a different level, stay disciplined, and not query database tables which we happen to know of but which really belong to another service.

We also have to embrace the functional style - I’m talking REST for web developers - in which objects are immutable/stateless, and transformations can be run predictably whether in parallel or in sequence, synchronously or asynchronously to the client.

I’ve been annoyed by the findings that we - as software developers - have at best an average success rate of 68%, with approximately a third of those successes in need of costly repairs, which in my book leaves a success rate of only 46%. We are not even mediocre at best, and we haven’t moved away from Frederick P. Brooks’ “Plan to throw one away” (Chapter 11 of The Mythical Man-Month, ISBN: 0201835959), even though he argues against it in the 20th anniversary edition from 1995 - almost 20 years ago. That is, it seems we haven’t really learned anything in the past 40 years of software development.

I know that there are other professions in which the products are declared successes even though they soon after display less than adequate abilities: the Vasa, the Titanic, Columbus reaching the western passage to India, Apollo 1, Soyuz 1 and 11, etc.

Let us assume that the complexity of the requested software solutions is normally distributed. Then 68% is 2σ in Six Sigma terminology, or zero nines in the engineering nines terminology. The only natural division of things into 2/3 and 1/3 which comes to mind is the division of the day into waking and sleeping hours.

The complexity of a software solution is quite hard to measure, as it involves the actual solution as well as the work done to arrive at it, and that in turn involves the people associated with the project over its entire span: developers, testers, architects, project managers, customers, etc. For a parallel in the world of mechanics we have the invention of heavier-than-air aircraft, in which the Wright brothers won over the better funded and equipped team of Samuel Pierpont Langley.

I’m not sure whether Langley’s team was Punished by Rewards or they just over-engineered the task at hand. Fact is, they didn’t deliver before the Wright brothers.

I know - usually we don’t have a race to be first; we have a contract to fulfill instead. Perhaps that is why there are so many failures in the business. It is work and not (serious) play. Perhaps it is the ever-increasing measures of confinement. Normally when I’m using someone’s services, we have a general acceptance of What we are discussing, I know Why I want or need whatever service it is, but I’m not telling the service provider How they must go about their business.

Take travelling, for instance - I pay someone to transport me and my luggage to some specified destination, but I don’t tell them How they should do it, which route to take, or in any other way meddle with the service they are providing. A cab driver might ask which way to take, e.g. choosing between fast and cheap.

Conversely, for software projects there are notions of “must be the same as the previous”, “must be XML”, and “must have a response time below 100 ms” - and while these are demands on How things must be done, they could very well - and in some cases have - impaired the end product. You end up with something as ugly as trying to use Java with twitter4j to store Tweets in MongoDB. Tweets come in as JSON objects, and MongoDB stores “rows” of JSON. On paper, JSON objects come in and should be filtered and piped to the database. But twitter4j parses the JSON into standard Java objects, which then have to be turned back into JSON using a convoluted builder. Adding Java makes the simple solution a lot more complicated.

But I digress.

If we on average are 68% successful, then the software around us is the result of those successes. I’m sorry, but I’m pretty sure that my definition of a success is not quite compatible with the apparent standard. Clunky interfaces and useless messages are one thing, but not being able to see your online bank account around payday due to too much stress on the machines is quite another. Not being able to log in to a “secure” facility because Java is not the right version - Java has had so many security breaches that I’m puzzled why it is in use for “security”. For extra fun, try doing this using Firefox on a 64 bit Windows box - there is seemingly no end to the hiccups Firefox will have, and it will have to be killed by the process manager. Having to click on a message sent to you - notified by an e-mail saying that you have a new message - redirecting to downloading the message as a PDF, then having to open the PDF in another viewer, while the interface maintains the notion that you have unread messages. Windows not being able to shut down because it cannot play the logoff sound - it’s apparently extremely important that this sound is played.

I am pretty sure we could do better than this - and as these bugs/annoyances have existed for a long time, I have to believe that they are among the 46% of successful software which doesn’t have to be remedied.

On the graph we have the normal distribution, the red shaded area is the 46%, the pink shaded area is the next 22% - the successes which have to be mended. In total, the entire shaded area is 68% - our success rate.

If the x-axis reflects the complexity of projects, then we’re struggling with the mediocre, which is quite puzzling, as those would probably be the projects we get most of - and then we should have learned from previous attempts and be better at handling them - but then developers aren’t the only part of the solution.

I am certain that with higher discipline on both sides of the table during project development we can consolidate the 68% and even jump the next 22% to reach the first nine: 90% - it could be that 93% is a possibility within a few years, but let’s settle for the first milestone.

As can be seen, the step from 46% to 68% is approximately the same size as the step from 68% to 90% - these should be low-hanging fruit, ripe for the picking.

If we can pull this off - and quite frankly we simply have to - it would mean a lot more successes, which should bring greater happiness within the working environment. It could also bring more work: as the investment becomes more secure, more companies might attempt new projects which at the current rates would be deemed too risky.

The graph shows the number of tries you have to make to ensure a probability of success above 90% for the three success rates: 1/2, 2/3, and 9/10 - currently we are at about the 1/2 mark if we count the successes in need of mending as not quite successes; 2/3 is approximately 68%. Thus to probabilistically ensure at least a 98% success rate, our customers are expected to be willing to invest 6x, 4x, and 2x the estimated cost of a project respectively, depending upon the quality we - as a business - can provide on average.
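The number of tries follows from the probability of at least one success in n independent attempts, 1 - (1 - p)^n; the required investment multiplier is the smallest n that reaches the target. A small sketch of my own (not code from the original post):

```java
// Smallest number of independent attempts n such that the probability of
// at least one success, 1 - (1 - p)^n, reaches the target probability.
class SuccessOdds {
    static int attemptsNeeded(double successRate, double target) {
        // Solve 1 - (1 - p)^n >= target for n and round up.
        return (int) Math.ceil(Math.log(1 - target) / Math.log(1 - successRate));
    }
}
```

For success rates of 1/2, 2/3, and 9/10 and a 98% target this gives 6, 4, and 2 attempts.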

More positive work - this shouldn’t mean more death marches, nor longer hours - quite the opposite. More projects delivered to the satisfaction of the customers, on time, on budget, hopefully working even better than hoped for.

If, on the other hand, we can’t pull this off by discipline, better contracts, and better cooperation for all involved, then we simply have to cut down on the complexity of the projects. I know that whether you fail or succeed, you still earn the money along the way - but that is not the way I want to live and work. Return customers - the happy ones - are amazingly better customers, and they are a replenishable resource, simply by the fact that they return and thus are not depleted.

If we build services that our customers will benefit from, then there should be no other hindrances to mutual benefit.

While this seems like a no-brainer, a win-win situation, why aren’t we already there? What can we do?

Unfortunately I’m not sure, but to start somewhere, I think we can start at the software development process. I’ve been reading the Danish Quality Model (DDKM in Danish) for healthcare. They have done a splendid job in providing reasons Why things must be done, What the things should encompass, and Who is responsible, but not How - this is left for each hospital to decide. I really like that approach. The hospitals will require renewal of their certificates at least every 3 years. Furthermore, I read the Bonnerup report (in Danish) “Experiences from governmental IT projects - how to do it better?” ISBN:8790221567 from March 2001 - and the article (also in Danish) “Authors of the Bonnerup report 10 years later: None the wiser.”

Usually customers don’t quite know what they want until they see it - sometimes, though, they know exactly what they want but have a hard time telling developers what it is. Communication is essential, and short iterations of continuous improvement, i.e. Plan-Do-Check-Act with the customer at hand, seem essential. We know that getting everything right on the first try is almost impossible. I don’t think any golf player expects to play a perfect round.

The customer representative must be available, knowledgeable, and able to make decisions in all aspects. This will make the communication fluent, as opposed to a lot of back and forth, who said what, and similar issues. If there is a reason to change a color, the customer representative should be able to make the call, as opposed to having 5 meetings and a committee approve it.

Decision makers must be invested in the project. Outside consultants who will earn money regardless of the progress of the project should be discouraged as decision makers.

Teams - on both sides - should be as small as possible to improve collaboration, communication, and understanding of the aspects in the project.

Project management must improve - staffing, estimates, communication, user participation, and post-delivery follow-up. It is important to learn from our own previous mistakes and from those of others. At times you feel part of a Monty Python sketch - sometimes the Architect Sketch, sometimes the “meeting to take action” part of Life of Brian. Boehm and Turner discuss Alistair Cockburn’s notion of competence levels in the book “Balancing Agility and Discipline” ISBN:0321186125. A lawyer specializing in IT projects mentioned reading meeting minutes from failed projects stating “The project is delayed” as the sole content. Not why, not how long, not which actions are being taken to counter it. Later on the minutes would read “The project is greatly delayed.”

Developers must work on a single specifically defined atomic task. This makes it easier to describe why something is added to the solution. A task can touch upon several files, but doesn’t have to.

Developers must use version control. Each atomic task is committed to version control - preferably with the task description and Why it was added. This allows a nice readable history of the project, and an audit of the project at any point in time.

Developers must test their code. Not only to ensure correct behavior of the expected flow and of the negated behavior, but - and this is more important - to guard against future changes. Tests make it possible to freely try alternatives, and if you are stress testing, they make it possible to evaluate different configurations.
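As a minimal illustration of covering both the expected flow and the negated behavior, consider this sketch; the function and its discount rule are my own invention, not from the original post:

```java
// Illustrative function: a hypothetical rule giving 10% off
// for orders of $100 or more.
class Discount {
    static double price(double amount) {
        return amount >= 100 ? amount * 0.9 : amount;
    }
}
```

A test would then assert both that a $200 order is discounted to $180 (the expected flow) and that a $50 order is left untouched (the negated behavior); with both pinned down, a future change to the threshold or rate is caught immediately.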

Have a vision, set goals. It is important for everyone to know which direction the project is heading in, and to know some of the milestones along the way. Accept that even the best laid out plans will fail - they do in sports, so why should we expect anything else from corporate business?

Third-party competent people should audit the source code according to these rules and the project description, making it possible to give an adequate description of the project’s health - much like an accountant should be able to read a company’s ledger and estimate the financial soundness of the company. Third party because we need independent observers to be objective about the project.

If we cannot improve, then it must be imposed that the Minimum Viable Product becomes the Maximum accepted proposal for future projects.

Should we strive for an accrediting institution certifying software companies on a yearly basis? I really hope not.

Resources

Update

TechRepublic had a blog entry, IT projects: Why you need to fail more often, and perhaps this is actually one of the reasons why we don’t do any better: we keep on beating a dead horse instead of cutting our losses early and learning from the mistakes made - both our own and those of our colleagues in the business.

I know, it is hard to keep a business running if you terminate projects early due to infeasibility. But no matter how far you go down a wrong road, going further or faster will not get you back on track.