There are many people who do get into our field purely for money. The sad thing is that they’re allowed to. There are not enough sufficiently hard courses available at the university/college level to weed out the less dedicated. The same could be said of the HR people who are not able to distinguish wheat from chaff…

—Srdjan

I don’t see why that’s necessarily a sad thing…The fact is that the majority of programming tasks aren’t actually that difficult. They don’t require a superstar in order to get done…

I agree that many programming tasks are not difficult. I am not convinced we have found a good way of predicting which tasks are difficult and which are not, which require a cut and a paste and which require some careful thought to avoid breaking something else or adding cruft to code.

I don’t think we do a very good job of dividing up work on a project such that the people with the least experience are protected from damaging the code. But that is just my opinion. I may be wrong.

I am much more confident when I say that we have to be very careful when we set the bar for the minimum standard needed to ship software. The choice between (a) employing people who aren’t particularly dedicated and would fail anything remotely resembling challenging courses in computer science and, (b) hiring “superstars” is a false dichotomy. There is such a thing as someone who is competent and professional enough to pass tough courses. Someone who won’t get confused if you ask them to keep abreast of developments in their field.

In many other fields there are tough courses. MBA programs are often touted as separating the wheat from the chaff. Yet we don’t assume that every MBA is a superstar in business. We do assume they have a baseline of competency. Likewise Chartered Accountants do not tolerate people taking a lukewarm attitude towards their field. But nobody would describe every accountant as a financial wizard.

There’s a continuum of talent, and we can raise the bar above the gutter without leaping from mediocre to world-class in a single bound.

In any business situation, it’s prudent to be wary of people who put their own selfish interests ahead of the business. So it’s fine to avoid people who want to learn to use the latest bauble on your dime. But again, let’s not establish another false dichotomy: people who are curious about programming and eager to learn are not automatically disinterested in shipping solid code using conservative strategies.

The most important thing is to establish the baseline competency required for the job and find the people who meet it. We don’t lower the bar to match whomever is available when we are looking, and we don’t assume that our choices are always binary (Superstar vs. Mediocre, Passionate vs. Pragmatic, Java vs. Ruby).

Part II: Something is still fishy

For a solid grounding on how to successfully develop software, start with The Mythical Man-Month: Essays on Software Engineering. It is one of the most important books ever written about developing software, from the small to the large. Read the book that spawned the expression, “There is no silver bullet.”

There is another way we can misinterpret Guillaume’s comment. Many programming tasks are not difficult. True. But that does not mean that all programming tasks are not difficult. Furthermore, I have experienced a paradox: sometimes when a good person works on an easy task, they see things in the easy task that an average person would miss.

An average person might copy and paste code all day. A good person might refactor to eliminate duplication. Of course, that means the good person can do far more of the easy work than the average person. But more importantly, the code base is forever improved, its friction has been lowered.
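To make the copy-and-paste vs. refactor contrast concrete, here is a minimal, hypothetical Python sketch (the `validate` helper and the user functions are invented for illustration): instead of pasting the same checks into every function, the duplicated rule is extracted once, so a change to the rule happens in one place and the code base's friction drops.

```python
def validate(name, email):
    """Shared validation, extracted so a rule changes in one place
    instead of in every pasted copy."""
    if not name:
        raise ValueError("name is required")
    if "@" not in email:
        raise ValueError("email looks invalid")

def create_user(name, email):
    validate(name, email)
    return {"name": name, "email": email}

def update_user(user, name, email):
    validate(name, email)  # the same rule, not a pasted copy of it
    user.update(name=name, email=email)
    return user
```

The copy-paste version of this code would repeat both `if` checks inside each function; the day the email rule changes, one of the copies is invariably missed.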

The existence of easy tasks in programming can lead us to believe that programming is a relatively unskilled trade. When we examine the pieces on their own, we might believe so. My question is this:

If anyone can do it, why do we as an industry stink at it? Although each piece seems easy, the whole of completing a successful project on time and on budget, with acceptable quality and good long-term cost of ownership, is elusive.

Now I know that Joel Spolsky and everyone else at Microsoft will tell you that software is ridiculously high quality compared to something-or-other. And with auto-widget-building wizards and AST-bending IDEs and memory management, today’s programmers ought to be ridiculously productive.

But yet…

I admit this is anecdotal, but I’m hearing a different story from my ex-colleagues and ex-clients out there: projects are late, bug lists are long and getting longer, and a lot of companies are scrutinizing their IT ROI and getting very Scrooge-like about starting new projects in-house.

They just aren’t happy with the results they’re getting.

Now maybe this has absolutely nothing to do with programmers. Maybe it’s all about requirements and waterfall and managing expectations. Maybe it’s all about looking at writing code and realizing that getting ’er done today with a few minor warts is more important than getting ’er done tomorrow with no bugs. (Or worse, tomorrow’s code is even buggier because it features an abstraction of a framework plugged into a component organized in a service architecture, and there’re so many moving parts that nothing works properly.)

Or maybe this has nothing to do with the problem-solving, hold-a-complicated-thing-in-your-head, visualize-all-the-ways-it-can-go-wrong-so-you-can-program-defensively part of programming. I buy that. I buy that maybe we have to be better at writing programs for people to read. I buy that we have to be better at testing our code. I buy that we have to be better at all sorts of software development skills, like analyzing requirements.

It could be that programmers on the whole have all of the skills needed, and that our problem is we just haven’t found the right formula for harnessing their power, for getting them all to march in lock step towards project success.

Maybe we programmers are doing a fine job and there’s no need to be better at it. But somehow something isn’t right, and throwing more people with fewer skills at the problem doesn’t seem to be working.

Part of the problem -- or maybe I should say part of the explanation for what we're seeing -- is that crappy software is pretty good compared to the next-best non-software solutions available to most businesses. An expense-tracking system that sometimes fails is probably much better than having to employ a room full of accountants, but the same system is only slightly worse than the theoretically ideal, never-failing system. So the savvy business person will almost certainly want to pay to build *some* software, but not necessarily *good* software. After all, if business folks perceive better systems as being more expensive to build, they may very well decide that "crappy" is all they really need.

And if many business leaders think this way, pretty soon a market grows up around supplying the resources necessary to satisfy the ever-present demand for crappy software. If this market is large enough, it could even swamp the market for the resources needed to build better-than-crappy software, especially if it's hard to tell the resources in the first market from those in the second.

Going further, if the first market is really large, it might even start to influence the suppliers of resources, causing them, say, to start specializing in the production of resources for the larger, crappy-software market. People who wanted resources for better-than-crappy software might have to start searching longer and harder to find those resources, making crappy software seem all the more attractive.

After a few business cycles, to continue our hypothetical scenario, everybody has fairly crappy software systems. To do better than one's competitor, then, one might try to build better-than-crappy software. But, as we've already established, it might be hard to find the necessary resources.

That's when we might expect to start hearing from our ex-colleagues and ex-clients that something isn't quite right: "projects are late, bug lists are long, and those once-happy business leaders are now scrutinizing their IT ROI and getting very Scrooge-like about starting new projects in-house." Then, in response to this rising dissatisfaction, we might expect the demand for the resources necessary to build better-than-crappy software to start rising as well. And, as it does, the market might shift, and then the suppliers of resources might shift, too. We might even start seeing an influx of better resources.

Of course, the price for these new, better resources will almost certainly be higher. And that price might cause some business leaders to wonder if they can make do with less. Do we really need good software? they might ask. Maybe, they might conclude, all they really need is software that is just *slightly* better than crappy.

And if many business leaders think this way, pretty soon a market grows up around ...

I'll try to put more time into my next comments since I quite obviously failed to make my point clear. What I was trying to say was basically:

"There is such a thing as someone who is competent and professional enough to pass tough courses. Someone who won’t get confused if you ask them to keep abreast of developments in their field."

It's pretty much exactly what I was trying to point out. I don't see Linus Torvalds or Don Knuth going out and writing small-business software anytime soon, and they're what I meant by "superstar" programmer. In a lot of companies the term is thrown around fairly liberally; they basically use it to mean "productive." If you're slightly above average they call you a superstar, and I think that's stupid. I know I'm no superstar; I actually think I suck big time. Calling people superstars is just an incentive for them to sit back and rest on their "laurels," however insignificant those may be.

I think you described the best employee to have perfectly, and I failed to make that clear. People who program as a job and people who program for fun both have downsides. Whereas the former sometimes lose track of developments in the field, the latter is sometimes a liability, since he/she will sometimes prefer to use a language/framework/architecture/whatever that seems more "fun" rather than the best tool for the job.

Given the choice between someone who takes pride in his work but doesn't spend all his free time programming, and someone who lives and breathes programming but is much more focused on what's interesting than on what's good for the project, I'd pick the former every time. Obviously the best of both worlds is someone who loves programming and is professional about it, but I just wanted to point out that reading blogs and having many weekend pet programming projects isn't the only indicator that someone is a valuable programmer.

I like what Max said. I think it has been an evolution over the years. What used to be "hard" has been packaged and commoditized, raising the bar for what is possible for the "average" programmer, but also introducing a new set of pitfalls that perhaps the average programmer is not used to dealing with.

Take, for example, VB6 vs. VB.NET. In VB6, multithreading was "hard," and for the most part you avoided it. In .NET there is a convenient Thread class, which tempts the average programmer into making his/her program "faster and more responsive!" with this newfangled concept, and then he blows his foot off in the process, because he assumes that if there is a library for it, it MUST be good to use. I see this also with AJAX. The technology was there, but someone figured out how to put it together for dynamic web apps. Now there are libraries for it and it is the next hot thing, but as with most things in programming, you can pick up someone else's library and then have difficulty because you don't understand what is going on underneath.

I also think that users' expectations are going up, because neater interfaces are coming out that are harder to do, do correctly, and do well. So our bug lists stay constant, because we are pushing the envelope trying to satisfy all of the "wouldn't it be nice if..." requests that come from a higher-up who saw it once on Google or some other website, while the average programmer looks for a library to satisfy the need, because really implementing what is asked for from the ground up is out of scope from a time or skill perspective.
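The "there's a Thread class, so it must be fine" pitfall generalizes beyond VB.NET: any convenient thread API makes it easy to share state without synchronization. A minimal Python sketch (function names and counts are illustrative) shows the shape of the problem and the fix: a shared counter's read-modify-write must be guarded by a lock, or concurrent updates can be lost.

```python
import threading

def unsafe_increments(n_threads=8, per_thread=10_000):
    """Naive threading: a shared counter updated with no lock.
    counter += 1 is read-modify-write, so two threads can read the
    same old value and one update is silently lost."""
    state = {"counter": 0}

    def work():
        for _ in range(per_thread):
            state["counter"] += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["counter"]

def safe_increments(n_threads=8, per_thread=10_000):
    """Same workload, but each update is made atomic with a lock."""
    lock = threading.Lock()
    state = {"counter": 0}

    def work():
        for _ in range(per_thread):
            with lock:
                state["counter"] += 1

    threads = [threading.Thread(target=work) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return state["counter"]
```

The unsafe version may happen to return the right total on a given run (lost updates are timing-dependent), which is exactly why the pitfall bites the programmer who trusts the library without understanding what is going on underneath.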

There is certainly a lot of software work that is unskilled, or at least requires only a moderate amount of inspiration and competency. And if you adopt mitigating processes like XP, you can get away with even less inspiration and competency.

Underlying this post and its key question, though, is a false assumption: that good programmers produce good code, and that bad code is produced by bad programmers. The reality is that programmers are only part of the vast cosmos that has to align for good software of any size to make it in the industry: graphic designers/"usability engineers," program architects, business analysts, QA, marketing people, and management all have to get into the mix.

And the only time you get really good software is when you manage to make all of those people work more or less in sync. Because that happens so rarely, we get software that is broken in a way that reflects the part of the team that is broken.

I know this is not a new idea, but it doesn't seem to have been discussed much lately on the blogs. I have been thinking lately that all programming is really about creating a model of the world. And the software systems that end up being fantastically reliable are the ones where the authors sat down and wrote down some math that made their model internally consistent, and then implemented that model in code. And in each of those cases, they did this because the previous attempts in the industry had by and large been messy and buggy.

Examples that come to mind are RDBMS, compilers, and feedback control systems. These are things that, when properly implemented and used, fail so rarely that we hardly even think about them, and yet the things they do are far more complex than a lot of the code that most programmers write.

Your examples are all cases where the problem space is well defined -- where you can sit down and do some math and figure things out. The compiler is a great exemplar: it's going from a carefully defined (if not well-defined) and unchanging source to a carefully defined (if not well-defined) and unchanging destination. I write a compiler to go from known point A to known point B.

Most "business functionality" (as I define in this blog post) isn't nearly so neat. It's got a lot of hand-waving, it's only vaguely clear (even to the customer) what they really want to accomplish, and it's subject to the vicissitudes and whims of the business context. It carries forward the cruft of corporate politics, system architecture, and the other limitations.

This all said, I think you have a good point in general: that is, if I can change the scope of a problem so that I can sit down and prove it's right (e.g., if I can write code such that it can't be in a bad state), then it's a lot easier to write code. The only tricky part then becomes wiring together these disparate small solutions to solve the big problem.
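One concrete way to "write code such that it can't be in a bad state" is to make invalid states unrepresentable in the types. A hypothetical Python sketch (the file types and `read_size` function are invented for illustration): model an open file and a closed file as two distinct types, so code that needs an open handle cannot even be handed a closed one.

```python
from dataclasses import dataclass

# Instead of one class with an is_open flag that can drift out of
# sync with reality, use two types: a "closed file with a live
# handle" simply cannot be constructed.

@dataclass(frozen=True)
class OpenFile:
    path: str
    handle: int  # stands in for a real OS file handle

    def close(self) -> "ClosedFile":
        # the only way to get a ClosedFile is by giving up the handle
        return ClosedFile(self.path)

@dataclass(frozen=True)
class ClosedFile:
    path: str
    # no handle field: nothing can read from a ClosedFile, even by mistake

def read_size(f: OpenFile) -> int:
    # accepts only the "open" type; passing a ClosedFile is a type
    # error caught by a checker, not a runtime "file not open" bug
    return f.handle  # placeholder for an actual read
```

The trade-off, as the comment above notes, is that the hard part moves to the seams: wiring these small, provably-sane pieces together into the big, messy business problem.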

For a little while I too have had the feeling that something in how we do software these days is fundamentally broken.

If you think about the tools we have these days compared with 30 years ago, not a lot has changed: at its heart, an IDE is really still just an editor, a compiler, and a debugger.

Features like IntelliSense are little more than memory aids for dealing with the huge APIs that today's platforms feature, while interface designers merely allow us to hang together various UI components without having to hand-code them - convenient, but no revolution.

I really feel like we are in danger of reaching a point soon where our tools won't be able to keep up with the complexity of the software we need to create. And then what?

I think the whole idea of creating good software is a myth. Unless and until we come up with good standards, good software cannot be created.

Let me explain. Creating software is an art, from envisioning to putting a team together to delivering and supporting. Every new software product has its own cycle of all these activities. However if you look at any other industry, there is a complete form of industrialisation, from the finance industry it is balance sheets, from automotive it is to produce components and cars which are same. These industries have prospered over a long time period and have effectively created standards; hell, in the software industry we cannot even come up with one standard for exchanging documents of information: Microsoft has its own way, PDF jumps in, the open document format is tried, but no results.

So when this is the case, you do not get quality software. In other industries, due to very high levels of standardization, you can take a mediocre person and teach him to design a product based on complete standards, ask somebody to manufacture it based on standards, and support the product based on standards. In the software industry we have no standards, hence the product quality is weak. How many times have you seen a water purifier fail, and how many times have you seen a leave-request application fail? Standards!

I've been trying to write something cohesive on this topic, but I seem to have pulled too much onto my plate to synthesize a concise answer to "why does the software industry stink," at least in any kind of timely fashion: the answer is just too long and complex to flesh out at blog latency.

Instead, I'll just refer you to Alan Kay, who at least has the principle if not the full causality of the matter. Industry is terribly guilty of instrumental reasoning: we focus on effective approaches to our specific ends without taking the time to consider the ends themselves. The goal-oriented viewpoint causes myopia, granting the ability to rapidly refine processes, but that refinement only causes further entrenchment in the originating process. To quote my favorite Mon Calamari: "It's a trap!"

Maybe I haven't been looking hard enough, but I haven't seen many competent high-level informatics systems. Most tools are aimed squarely at boosting the power of Blub programmers, with the occasional hotshot iconoclast language coming along and reworking the Blub world, and some tools come along to hide Blub as much as possible (Dreamweaver), but I haven't seen very good tools to marry high-level design to the programmer's world. To pin down a specific example, take the fact that step-through debuggers are still the only real way to grok data flow in our programs: that is a sin.

It also doesn't help that we seem to change our target platform every six years (distributed -> websites -> web services -> REST).

In enterprisey organizations I've seen three main problems. First is a separation between the people who have to use the software and the people who pay for it. It's a recipe for vendors to do the minimal amount possible once they've already sold the decision makers on the project. Feedback cycles between the customers and the developers only work when the customers have real carrots and sticks at their disposal. A feedback cycle routed through a third party who holds the carrots but has no skin in the game is an easy mark for vendors.

Second is the depth of IT organizations and middle managers' need to feel they are adding value through micromanagement and attempts at technical decision making. A senior manager will say, "Give me a payroll system in x amount of time using y resources"; they won't care what language you write it in or what editor you use. They just want results. That is something programmers can act on. But middle managers will invariably wish to "help" the software get done by injecting ideas gleaned from the latest magazine article they read or Gartner conference they attended. Middle managers will help developers by choosing the implementation language and tools they are allowed to use. Management by magazine is not an effective strategy, but you wouldn't know that from how many middle managers actively practice it. There is a reason your doctor, fireman, and home builder aren't very interested in your opinion on what tools they should use: you're not qualified to make that decision. Yet it is the rare middle manager who recognizes that fact and instead focuses on results and removing roadblocks.

Finally, because of the previous two hindrances, it is very difficult in IT organizations to separate the wheat from the chaff. By rewarding mediocrity and hindering software development, the doers are slowed down enough that it's not as evident that the talkers aren't producing anything of value. In every other field, the organizations attempt to help the best and the brightest be more efficient and get more done ... not so much in the enterprisey environments.

The technical part of the IT team will do that to themselves, too. Architects worried about sub-par developers will limit tool sets, sysadmins won't allow installation of frameworks they don't like, DBAs will limit access to the database's power, turf wars erupt over who can do what, etc., etc.

It's popular to blame management, but the calcification of development strategies comes from a lot of different places.

I would agree, there are many roles that can stifle software development and you've listed a number of good examples.

But if we flip the viewpoint a little ... if we change it from figuring out who is to blame to figuring out who can help solve the problem, then I think we make some progress. I would say managers with budgetary control are an important part of the solution.

"However if you look at any other industry, there is a complete form of industrialisation, from the finance industry it is balance sheets, from automotive it is to produce components and cars which are same."

This is the real myth.

I work in finance. There are no standards. There are lots of attempts, but every finance company reinvents its own wheel. Oh, and when they reinvent it, they claim it gives them a "competitive edge" to every other finance company. Same for just about every other industry where these mythical "standards" don't exist.

The difference in the finance industry (and many others) is that the people who come up with the forms, workflows, business processes, etc. just understand and accept compromise. Software developers (and, by extension, software managers) look at the world of business software and say, "What a mess." Well, it is a mess, but so are the business processes, workflows, etc. that often underlie those systems.

Some software may be crappy, but trust me, it's not because of standards. Software developers, through experiments like open source software, have become more standards-focused than many other industries. Standards are not the silver bullet.

There is no silver bullet :-) Software is art, some pieces are better than others, and creative inspiration isn't "reproducible on-demand" no matter how hard you try.