Even when viewing the subject in the most objective way possible, it is clear that software, as a product, generally suffers from low quality.

Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will hold up under heavy weather, the doors and the windows will do their job, and the foundations will not collapse even when the house is fully occupied. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical.

Software, on the other hand, is much more susceptible to poor quality: unexpected crashes, erroneous behavior, miscellaneous bugs, etc. Sure, there are many software projects and products which show high quality and are very reliable. But lots of software products do not fall into this category. Consider paradigms like TDD, whose popularity has been on the rise in the past few years.

Why is this? Why do people have to fear that their software will not work or crash? (Do you walk into a house fearing its foundations will collapse?) Why is software - subjectively - so full of bugs?

Possible reasons:

Modern software engineering has existed for only a few decades, a small time period compared to other forms of engineering/production.

Software is very complicated with layers upon layers of complexity, integrating them all is not trivial.

Software development is relatively easy to get started with; anyone can write a simple program on their PC, which leads to amateur software leaking into the market.

Tight budgets and timeframes do not allow complete and high quality development and extensive testing.

How do you explain this issue, and do you see software quality advancing in the near future?

53 Answers
I see time and time again that the marketplace does not reward software quality. Apple is a good example; for many years they had a superior product, but people would not pay. (Not just the up-front costs of Apple but also the costs to change from whatever they were using that year.)

I still see many people making buying decisions on the idea that 'more is better'. Both Microsoft and the Free Software Foundation have conditioned people to two pernicious ideas: 1. The product with more features is always better. 2. It is OK for software to fail once in a while. Because the people making buying decisions do not understand that simpler is better, there are tremendous economic incentives to create software that is complex but appears to have nice features.

War story: in the early 1990s, a friend of mine tried to write his dissertation using Microsoft Word. Everything went fine until his manuscript got to around a hundred pages. At that point Word refused to let him change it any more. The eventual fix was for him to double the amount of memory on his PC. In the early 1990s this was an expensive fix.

After it was all over I asked why he had chosen Word. His response:

I liked the alleged features.

People buy software on perception, not on reality. It doesn't matter if the reality of the software is that it is hard to use or doesn't work; if it looks good, people will buy it.

Friends of mine who study economics tell me that this behavior is the sign of an immature market and that when computing matures and the demand for computing stabilizes, things will be different. I am not so sure---I look at the market for automobiles, which have 100+ years of engineering knowhow put in, and I see a much bigger market for cars that make people feel good than I do for cars that are boring but just work. I look forward to a day when buyers reject programs for having too many features and insist on buying simple, boring programs that just work.

If you think the only difference between a quality car and an expensive car is status, you must have only driven one type of car.
– notJim, Nov 26 '09 at 9:49

Isn't your whole point undermined by the fact that Apple has been rewarded in the marketplace since they started doing things right, just after Steve Jobs returned? Their stock price has gone up more than 100x in the last 15 years.
– Matthew Lock, Mar 23 '10 at 6:13

I was surprised to see a mention of the Free Software Foundation. What have they done to perpetuate the idea that "more is better" and "it's okay to produce crap"?
– Tshepang, Apr 28 '10 at 19:04

One major reason is that for the most part, software "engineers" aren't really trained as engineers. One of the most important principles in engineering is to keep designs as simple as possible in order to maximize reliability (fewer parts = fewer things that can fail).

Most software developers that I've worked with over the years are not just unaware of the KISS principle, but also actively committed to making their software as complicated as possible. Programmers by their nature enjoy working with complexity, so much so that they tend to add it if it isn't there already. This leads to buggy software.

KISS? Yeah Right :-) We'd all love to keep it simple but the bloody users keep coming up with obscure business rules that need X tweaking. Simple software is only found in the pages of textbooks or taught on CS courses
– Cruachan, Dec 8 '08 at 9:00

and a lot of them suck because they heard on the radio that it's good money, and now they've convinced someone to hire them, not because they are passionate and interested in the craft
– DevelopingChris, Aug 10 '09 at 12:59

A bridge has only one function. To make an analogy, you need to compare it with software that has only one function, like Notepad. I would say you get the same degree of quality from a bridge as from Notepad. On the other hand, a complex piece of software would need to be compared with a complex structure: a whole city would equal Windows. With the money injected every year into city infrastructure maintenance, I'd say a whole city is just as crappy as Windows is.
– didibus, Feb 13 '10 at 8:05

"Take for example a house built from scratch. Usually, the house will function as it is supposed to. It will stand for many years to come, the roof will support heavy weather conditions, the doors and the windows will do their job, the foundations will not collapse even when the house is fully populated. Sure, minor problems do occur, like a leaking faucet or a bad paint job, but these are not critical."

This is clearly not true -- it's a really bad example. I've seen a large number of houses that are just shacks. Houses that can, and often do, simply collapse in heavy weather. They don't use windows and the door is a sheet of plywood.

To get away with building a shack instead of a house, you have to build your shack where there are no housing codes or standards. Or, you have to evade inspection by claiming you don't "live" there. Or you have to be heavily armed so that the inspectors don't bother you.

Software quality is like all other forms of quality. The issue is this.

Most consumer products have warranties (expressed or implied). Some even have applicable standards for safety.

Also relevant is that many of us live in countries where housing is strictly regulated, and poor quality is considered unacceptable even if both builder and purchaser want to cut corners. In the absence of that enforced regulation, housing runs the whole range including ghastly deathtraps; that is what we see in the field of software.
– bignose, Mar 23 '10 at 6:08

I see a lot of browbeating here, but I think we should acknowledge one point:

Software Development is HARD

It used to be considered a truism that when IBM wrote OS/360 it was, at that point, the most logically complex system ever developed by humans.

Since then we've developed techniques for handling more and more complex systems. Our languages, APIs and tools have added layers of conceptualization and abstraction not dreamed of when OS/360 was hand-cranked together. The trouble is that the complexity of what we are trying to achieve has increased in step, or maybe even a little faster.

Now this isn't to deny that there are problems with over-tight deadlines, poor programmers and all the other issues brought up here. But the modern world practically runs on the software developed over the past 20-30 years, and while there's lots of room for improvement, we're not doing too badly, all in all.

I think this is probably the closest answer, but I'd like to add one more thing: Software development is in its infancy. While engineers have had hundreds, even thousands of years to perfect their craft, software engineers have had a handful of decades.
– Jeff Hubbard, Dec 8 '08 at 7:38

Though I agree with the reasons given so far, there is one very simple explanation missing here that is rampant throughout the software industry: bad management.

I worked as a consultant for many years, and I can't tell you how many places I've worked at where the developers were expected to finish yesterday a project they were given tomorrow. It was either deliver the project on time or lose your job, so we had no choice but to write code that was thrown together and not adequately tested. This was especially prevalent where the managers were not in the IT field themselves and had no clue what it takes to write and support quality software.

Like software, a house is made of many smaller structures - bricks, doors, roof tiles and so on. Unlike software, however, each of these pieces has already been pre-made and tested long before it reaches the house. Bricks are subject to stress and pressure testing, door hinges are tested thousands of times for durability so the fact that these parts should not fail is a given and we know their limits.

In software, each part is often being used for the first time - that is, it's entirely new code. Maybe we don't know quite how it behaves in all circumstances. This is one reason why I believe so much in code re-use where possible.

A lot of responders here are blaming these defects on unskilled software developers.

I think the problem is far deeper and more interesting than that.

A great deal of software comes out of engineering institutions that hire solid people and put enormous energy into getting things right. There are QA departments, methodologies, policies like peer code review, and so on.

And still a shocking percentage of projects have serious problems or fail utterly, never delivering anything of use.

How is that possible? We should have this down by now, right?

If you ask the people in charge, they might blame the budget. And of course that's often the final straw, the thing that ostensibly forces the doors to close or the project to ship too early.

But that is often an extremely superficial answer designed to protect careers. After all, there wouldn't be budget problems if the project was planned and executed properly.

And running software projects, especially large ones, requires a specialized skillset that goes way beyond MS Project and even formal methodologies like Agile.

Here's an example: you have to be able to explain to stakeholders (who maybe have no point of reference and think software is easy) that what they want is going to take way longer than they expect. And sell that, even if there is enormous pressure - and there will be.

You have to educate delicately (after all, these are smart exec-types who are very sure of themselves), maybe telling a story about the old Waterfall methodology and why software projects fail, and why a counter-intuitive approach like Agile might be necessary, and sorry, not only does that mean your budget is not going to work, it could mean no fixed costs at all.

That can be tough because there is competition out there who will tell them what they want to hear and move forward with a plan that will most likely lead to chaos and defeat. Because people don't understand this stuff.

I think the problem is inherently human. Human brains haven't evolved to cope well with the high degree of abstraction required in software development. Our ancestors had to face tigers on the savanna, and natural selection dictates that they evolved by adapting to that situation.

Most animals, even "intelligent" ones, fail at the simplest tasks that require any kind of abstraction at all. For example, I've heard from zoologists that cats are unable to find their food if, in order to get to it, they have to turn their back to the food source. They just can't make the mental connection that turning in the opposite direction might help them bypass an obstacle.

Human minds can cope with much higher degrees of abstraction. However, the problem of software development is a completely new one; there has never been a comparable situation. Even though humans have known mathematics for millennia, mathematical ability has never been a trait favoured by evolution (and the time span has been much too short to play any role in evolution anyway).

In conclusion, human minds aren't really well evolved to face this kind of task. It just doesn't come naturally; therefore, everything has to be found out the hard way.
Intuitively, we say "building software is much more complex than building houses", and in a way that's right. But it's most probably due to what I've outlined above.

Admittedly, that explanation, even if right, is quite abstract and I do think that software quality will improve substantially in the near future (long before any mechanism such as natural selection can kick in) because we’ll find out more about the processes involved and how to refine them.

A lot of high-falutin' ideas here, but I think the answer is very simple.

I'm not trolling, but the fact that most software engineers (a) suck and (b) have absolutely no desire whatsoever to develop themselves has everything to do with this. The 9-to-5'ers, especially in internal software (I work in internal software myself, so... :-), just have absolutely no idea what they're doing, and there's nothing in place to specify what is good and what isn't: the blind leading the blind. It's very rare for software to be developed to anything resembling a good standard anywhere. Most software is a big ball o' mud.

Ultimately there are a few good software houses out there, the rest suck. And we are stuck with the suck on the whole... the amazing thing is that anything actually works!!

As others have pointed out, there are plenty of badly built houses: houses so infected with fungi after a single year that it's a hazard to your health to live there, houses where the rain leaks in through the roof, houses that collapse in windy weather, and houses that fail in countless other ways.
And not just that: there are also countless structural engineering projects which end up hugely over budget, taking years longer than planned and costing three times the original estimate. That is not unique to software engineering.

But another important factor is that building engineers can take a lot of things for granted. When you're building a house, you don't have to implement gravity yourself. The structural strength of steel or concrete just happens, you don't need to do anything special to ensure that it works in leap years too, or if the user opens the bathroom door at the same time as the phone rings.

In software, the only rules are the ones you code yourself. You're building a house out of the empty void, not out of well-known materials like concrete, steel and wood, and without the luxury of a fixed framework supplying gravity, solid ground, a consistent and stable atmosphere consisting of the same mix of gases every day, and keeping a pretty much constant pressure.

Of course, there is also plenty of bad management, plenty of incompetent programmers, plenty of bad practices, and so on. But it's not just that.

Absolutely. Additionally, I think part of the problem is exactly that we insist on thinking about "software construction" in terms of analogies to physical construction, and hand-wave over the important differences that you mention.
– Tim Lesher, Dec 8 '08 at 3:38

"The structural strength of steel or concrete just happens." This is extremely false, as any building inspector will tell you. These things can only "just happen" when there are strict standards dictating steel and concrete composition, along with frequent testing, and thorough enforcement. Your house is not simple. It is the result of hundreds of years of small refinements to structure and materials. Of course, the builders have the advantage that people keep asking them to build the same thing over and over.
– PeterAllenWebb, Sep 9 '10 at 12:24

Because software is heavily modified over time, it's important for it to be designed such that changes in one part of the software don't affect other parts. Even though principles for addressing this are widely understood in our field, there are a couple of problems. The first is that there are plenty of individual practitioners who don't understand or care about those principles. The other is that there can be competing forces that cause us to violate those principles (budget, schedule, or even engineering concerns such as performance). So over time software becomes complex and brittle unless you apply constant discipline to prevent that outcome.
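
The principle above can be sketched in a few lines (hypothetical names, Python): the caller depends only on a small interface, so a change inside the storage implementation cannot ripple out into the rest of the system.

```python
# Minimal sketch of change isolation: `report` depends only on the
# UserStore interface, never on how a concrete store keeps its data.

class UserStore:
    """The small interface the rest of the system is allowed to depend on."""
    def names(self):
        raise NotImplementedError

class InMemoryStore(UserStore):
    # A concrete implementation; its internal representation (a list of
    # dicts here) can change freely without touching `report`.
    def __init__(self, users):
        self._users = list(users)
    def names(self):
        return [u["name"] for u in self._users]

def report(store: UserStore) -> str:
    # Knows nothing about storage details; a schema change in
    # InMemoryStore stays local to InMemoryStore.
    return ", ".join(sorted(store.names()))

print(report(InMemoryStore([{"name": "bob"}, {"name": "alice"}])))  # alice, bob
```

The discipline the answer describes is exactly this: keeping `report` ignorant of the dict layout is a constant, deliberate choice, and the moment someone reaches through the boundary, the brittleness begins.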

Another piece is that because everybody (developers, managers, even customers) understands software is malleable, people have at least some level of comfort with the idea that if there's a bug, we'll fix it with the next release or patch. I've often heard it said that it's more expensive to fix prod bugs than QA bugs, and more expensive to fix QA bugs than dev bugs, etc., and given the economics we should do things like QA and unit testing. And ultimately I agree, but the idea can be carried too far. If you're able to find the bug via unit testing or QA, yes, you're better off, but there's a cost associated with finding the bug in the first place, and diminishing returns kick in pretty quickly due to the underlying complexity of the system (e.g. combinatorial explosions in the number of possible configurations). So the economics mandate catching most problems in dev/QA by investing in training, code reviews, testing discipline, etc., but not trying to be exhaustive, because in most cases the cost of fixing user-discovered bugs is reasonable. This is very different from most consumer products, where the cost associated with fixing bugs is great. The ROI for catching problems early is much greater if a late discovery is going to mean a recall or pulling something off the shelves.

To go with the building example: it's like you build a small hut, and when you're half done building it the owner comes along and says: "Look, I thought about it and I'd rather have a villa." So you change the building, but the foundation has already been laid, so you have to modify what's already there. The owner comes back after two months and says "Guys, one bathroom is not enough. I'd rather have three. It's done tomorrow, right?" And so on and so on. Good software engineering relies on permanent iterative re-architecting over time.
– Thorsten79, Aug 13 '10 at 8:06

This problem is definitely not unique to software development. All the reasons you mentioned are valid, but in cases where software is not life- or mission-critical, much less attention is generally given to quality.

As such, the house analogy is not a good one: if houses break down, human lives could be lost, which is unacceptable. For this reason much tighter quality control is maintained.

There are plenty of industries where quality is all over the board, and it is usually attributed to value considerations. You pay more, you get higher-quality products, and it's the same in software development.

Making a house that will stand up to normal weather and aging can be accomplished with simple constraints on the selection of materials and the distribution of load. There is no shortage of modeling systems that can spit out workable designs automatically.

So yeah, following a few simple rules you too can design a simple structure that does not fall down. So what? :)

I love my new house, but it's far from bug-free. The builders failed to do a proper flow analysis of the water through the piping, and the result is awful water hammer when the outside sprinklers are switched on.

The concrete floor in the basement started cracking after only 2 years.

There are several places where cracks have developed in walls.

The blower motor in the furnace failed after 1 year of use.

Sinks were not properly caulked from the beginning, causing water seepage and a big mess in the sink cabinets.

After a few days of 90+ deg inside temperature all the rubber door stops in the entire house cracked and needed to be replaced.

Various GFIs were too picky to be useful for powering much of anything and needed to be replaced.

A few fluorescent lighting fixtures went bad after the first year and needed to be replaced.

God knows, if you've ever gone through the process of building a new house, even with contractors who have their own QA people, you still need to stay on top of everything and file 'bug reports' regularly (I've filed countless dozens) to end up with a house that resembles the one you ordered.

I, like most homeowners, am no stranger to Home Depot... Until we have houses that can last 100 years without any problems or "regular maintenance", I disagree with the basic premise.

Lack of formal training is a universal problem. Complexity is a universal problem. Just because houses are "big structures" does not mean they are also "complex structures".

I don't feel comfortable with all the house-building analogies. Most people (even programmers) are not aware that any nontrivial software is usually much more complex than a house. Also, the parts of a software system have zero tolerance for errors. A house will not break down if one brick is slightly off spec; software will.

It took mechanical engineering, for example, centuries to get to its current state, where engineers have exact knowledge of how to dimension a part, have standard parts, and so on.
Unfortunately, what is called "Software Engineering" has so far mostly failed to deliver methods and tools that fulfill the same purpose for our trade. We are still in the Middle Ages, where some knowledge exists and can be applied in special areas, but no consistent standard theory of the software development process (and tools supporting it) seems to exist yet. The reasons for that are too complex to discuss here, but mainly the existing approaches see the process of software development too superficially.

Software has a complexity and fragility not found in other areas. E.g. a car has a number of parts in the four-digit or lower five-digit range (although some of them are quite complex), while a piece of software can have millions of statements.
And no one would accept a car that falls apart while you drive just because, e.g., the ash tray overflows, would they?

Additional points:

The tools used by most people are far from "state of the art". E.g. the most common errors in C/C++ like pointer errors and buffer overflows can hardly happen in Pascal-style languages like Ada that have been available for decades.
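
To illustrate that point in a bounds-checked language (Python here, standing in for the Pascal-style checking the answer describes): an out-of-range write fails loudly at the faulty statement, whereas the equivalent C write can silently corrupt whatever sits next to the array.

```python
# In C, writing past the end of an array can silently overwrite unrelated
# data, and the eventual crash (if any) appears far from the real bug.
# In a bounds-checked language the error surfaces exactly where it occurs.

buffer = [0] * 4

try:
    buffer[10] = 42          # out of range: raises immediately
except IndexError as e:
    print("caught out-of-range write:", e)

print(buffer)  # the buffer is untouched: [0, 0, 0, 0]
```

The difference is not that the programmer makes fewer mistakes, but that the mistake is reported at its source instead of being deferred to "a totally different location", as a later answer in this thread puts it.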

Mechanical parts usually have some tolerance for overload. E.g. in aerospace, parts are overdimensioned by 50% or more so that undetected damage to a part will usually not cause a serious failure until (hopefully!) discovered during the next routine maintenance interval. Similar concepts exist in other areas of engineering. Doing this in software development is much additional work (you need a customer who wants it and is willing to pay for it) and still not 100% safe most of the time, due to the huge number of possible errors.

The industry leaders tend either not to make use of the state of the art or to taint it when trying to implement it. E.g. C# has some very good basic concepts, but the C-style syntax is confusing, error-prone trash; Pascal syntax is much clearer, and the security concept was tainted by the possibility of using unsafe code. Libraries still usually have only a C interface as the lowest common denominator.

Many people in the business lack knowledge about the necessity of building appropriate abstractions.
E.g. operating systems usually have only two layers of security, user and system. If a process (e.g. some evil script in a web browser that has corresponding security holes) can get user access rights, it can violate all data that the user has access to.
And if a user process can get system rights, it can do with the machine whatever it wants.
Different, fine-grained concepts exist, but are hardly used, for several reasons: laziness of the programmer, backward compatibility, but also because these have been fitted onto APIs after the fact and therefore have neither good granularity nor good performance, while also being proprietary and unportable.
The same applies to development tools. Why can a broken statement in a C/C++ program break data or even the program at a totally different location, causing unpredictable results? Why don't we have "unbreakable" standard parts?

People go to the limits of existing basic technologies or beyond instead of creating better basic technologies first.

Users have got used to seeing failures as "normal", even serious ones that no one would accept in, e.g., a car.

I think a lot of you have missed a major point. Software is invisible: we can only see the results for a given set of inputs, and the whole process happens without our being able to see what it is doing.

When building a house, we can see, touch and feel the process and adjust when it's wrong. In software we can't do this. We use some inputs, get some outputs, and then check to see if that is what we wanted. If not, we go back, change some code and try again. This leads to quality problems, because we might have a fluke set of inputs that just happens to give the right output, and we will move on without seeing the errors and bugs under the surface.

I think it's because every part of a (well-written) piece of software is unique. A house is a big project, but there's a lot of repetition: there are a ton of identical nails, 2x4s, bolts, bricks and so on. In contrast, in a well-written piece of software there's only one of each thing. It's like building a whole house by reusing the exact same brick, nail, and piece of lumber over and over: sure, you make a smaller project, but if that single brick, nail or piece of lumber is shoddy, the whole house will fall down.

Houses are sturdy because of the amount of redundancy. If one nail is out of place, it generally doesn't cause the house to fall. In software, the equivalent of a nail out of place could cause the whole thing to crash.
– Kibbee, Dec 7 '08 at 19:51

Engineering in all other disciplines is a highly mathematical subject: a structural, aeronautical, or chemical engineer can analyze the problem they are trying to solve using well-developed mathematical models.

All attempts to do this for software engineering on a large scale have failed.

This may be due to the relative 'newness' of software engineering, but I think it is more inherent than that.

I think that is the wrong question to ask. Quality software is made and does exist. Some of it is still running reliably years after it was created.

The real question is why software quality is so expensive. It is expensive enough that people requisitioning a system would prefer to keep a buggy system that only just does the job rather than pony up the huge expense of finding and fixing the bugs. This leads to crappy software being released.

Quality software is hard to keep quality, too. Imagine a console app that was written years ago and written well. Compared to an interface written in modern tools, it is no longer quality. Technology moves so fast. For a car it takes decades before its interface is no longer considered quality (well, not so much now that computers are being integrated into the bloody things).

Your analogy perfectly explains why software quality is hard. Houses are physical objects whose materials obey the laws of physics. The materials, architecture, and engineering qualities of a house can be physically and numerically analyzed. How else could a civil engineer describe, down to the number of pounds of rebar, how to build a bridge across a windy span able to withstand high-speed truck traffic without the bridge collapsing? Newtonian physics has rules that never vary. You can numerically prove how strong the bridge will be.

On the other hand, computer programming has the halting problem. No general analysis can prove that an arbitrary program fulfills its defined purpose. So even if you could assemble some dream team of super-programmers to work on your project, the program they deliver cannot be certified to fulfill your (non-trivial) requirements 100%.

I like the house analogy tossed around. When you build a house, you have a lot of slack in many areas. Even someone who knows nothing about building houses can learn it pretty quickly because the processes and the materials are "simple" and well understood. A piece of rock is not going to suddenly jump up and talk to your face. If you put it somewhere and you give it some support, it's going to stay there - period. Burned ceramics aren't going to let water through. You just have to find some goo to plug the gaps until the guarantee time is over and you're good.

With software, you have much more delicate and hidden dependencies. Since people don't understand this, here is my analogy: Chess has some pretty simple rules (roughly 20). If you need, you can enumerate them on one standard sized piece of paper (A4 or letter or whatever). Despite that, you can't enumerate all possible states of the system nor can you, given most states, tell how the system got there.

The Asian game of Go has even simpler rules (roughly 10). Contrary to intuition, Go has even more complex states, and so far no computer in the world has ever come close to challenging a decent human player without a handicap.

Lastly, we have Conway's Game of Life. Four rules. 4. Read it again: four (4). The Game of Life is so complex that it's Turing complete: you can build a fully working computer with it and compute anything that a computer can compute. For example, you could teach it to play chess, or to simulate a computer playing Conway's Game of Life. You could even write a program that writes a program that plays Conway's Game of Life; I wonder if precautions were taken against "infinite recursive creation loops". Anyway.
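
To make the point concrete: the four rules really do fit in a handful of lines. Here is a minimal sketch of one generation of Life (plain Python, sets of live cells), enough to see how little text encodes Turing-complete behaviour.

```python
# Conway's Game of Life, one generation. The four rules:
#   1. A live cell with fewer than 2 live neighbours dies (underpopulation).
#   2. A live cell with 2 or 3 live neighbours survives.
#   3. A live cell with more than 3 live neighbours dies (overpopulation).
#   4. A dead cell with exactly 3 live neighbours becomes live.

from itertools import product

def step(live):
    """live: set of (x, y) live cells. Returns the next generation."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                n = (x + dx, y + dy)
                counts[n] = counts.get(n, 0) + 1
    # A cell is live next generation iff it has 3 neighbours,
    # or 2 neighbours and is already live.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

blinker = {(1, 0), (1, 1), (1, 2)}   # horizontal bar
print(step(blinker))                 # oscillates to the vertical bar
```

Two applications of `step` bring the blinker back to where it started; from rules this small comes state space no one can enumerate, which is the whole argument.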

Conclusion: while software may look simple, it never is. Every problem has at least an infinite number of possible correct solutions and an even greater number of wrong ones. If math allowed it, we would have to multiply this by the possible number of misunderstandings between the customer and the developer (multiplied by the levels of indirection between them, i.e. managers, customer-relationship people, project managers, your co-workers), add the usual sprinkle of bugs in the environment you have to use, and then ...

We wonder how it is even possible that anyone ever wrote a piece of code more than 10 lines long that actually works.

Sometimes it's less expensive to ship buggy software than it is to fix it (or, more likely, it's perceived to be less expensive).

Lack of understanding of the problem being solved. If you don't completely understand what you are solving, it's going to be difficult to do so without introducing bugs.

Most programmers are pretty bad programmers (my opinion, of course, but in my own experience, I'd say only one in five programmers really knows what they're doing).

If a problem is complex, it's easy to get lost in one aspect and neglect another.

Some programming languages are too verbose, making it difficult to keep the whole problem in mind at any one time, which lets bugs creep in. For example, I tend to make more mistakes in Java than in Python. It may just be coincidence, of course, but I feel that Python's higher-level code helps me solve problems in fewer discrete chunks, leaving less room for bugs.
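As a toy illustration of that point (my own example, not the answer's): counting word frequencies is essentially one expression in Python, whereas the classic Java equivalent needs explicit map bookkeeping spread over a dozen lines, each a place for a slip.

```python
from collections import Counter

def word_counts(text):
    # The whole problem fits in one expression: normalise, split, count.
    return Counter(text.lower().split())
```

For instance, `word_counts("To be or not to be")` reports two occurrences each of "to" and "be". Whether this actually reduces bug rates is, as the answer admits, a matter of personal experience.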

Dependencies. I believe that dependencies (I mean calculations and data which depend on one another) are a major cause of bugs, when they're not managed properly anyway (dependents not getting updated when they should, etc.).
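A minimal sketch of the kind of dependency bug meant here (hypothetical code, my own): a cached total that silently goes stale because one code path forgets to update the dependent value.

```python
class Invoice:
    def __init__(self, prices):
        self.prices = list(prices)
        self._total = sum(self.prices)  # dependent value, cached at construction

    def add(self, price):
        self.prices.append(price)
        # Bug: the cached dependent is NOT updated here,
        # so self._total is now stale.

    @property
    def total(self):
        return self._total
```

After `Invoice([10, 20]).add(5)`, the object still reports a total of 30 rather than 35, and nothing crashes: the error only shows up wherever the wrong total is eventually consumed.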

A lot of programmers are lazy or distracted causing them to make mistakes. I know I'm guilty of this sometimes.

Most programmers aren't rigorous or methodical enough when approaching a problem. Instead of carefully planning out a solution and verifying that it is correct (formally or otherwise), they dive in and start coding. I know I sometimes do this even though I know I shouldn't.

The tight coupling of operations (instructions, statements, code blocks, functions, etc.) makes code less dynamic and fluid, which makes it difficult to update code, determine where code should be split up, what code can be reused, what should run concurrently, and so on. This is, IMHO, another large source of error, and one that's not easily solved with existing code.

Fred Brooks, of Mythical Man-Month fame, wrote an interesting article some years ago titled "No Silver Bullet". This was just about the time that OO software and design were being talked about as the next silver bullet for all of our software design problems.

The basic idea was that software complexity can be broken into two types: accidental and inherent (Brooks calls the latter "essential").

Accidental complexity is, for example, having to use x86 assembly language rather than Python to write a processing script. You can do it with x86 assembly, but you'll spend a lot more time.

Accidental complexity can be solved with different languages, especially if you can find a language which maps well to your problem. The large number of languages shows that we work hard to solve this problem.
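As a tiny illustration of accidental complexity melting away (my own example): summing the numbers in a file is a few lines of Python, because the language handles I/O, parsing, and iteration for you. The equivalent hand-written x86 assembly would need explicit buffering, string-to-integer conversion, and register management across dozens of instructions, none of which is part of the actual problem.

```python
def sum_numbers(path):
    # One number per line; open, parse and accumulate in a single pass.
    with open(path) as f:
        return sum(int(line) for line in f)
```

The problem being solved is identical in both languages; only the accidental complexity differs.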

The other sort of complexity, inherent complexity, is the hard part. It is not solved by languages and so on, and it is what causes us the biggest headaches. This is where the leaky abstractions bite us, and where the library calls work in the order A, B, C, D but, in spite of the documentation, which says you can call A, B, D, C, doing so fails later when you call F.
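A hypothetical sketch of that failure mode (none of these names come from a real library): a call made out of the documented order appears to work because of hidden state, and the error only surfaces much later, in a different call.

```python
class Channel:
    """Hypothetical API with a hidden ordering dependency."""

    def __init__(self):
        self._buffer = []
        self._encoding = None

    def open(self):              # call "A"
        self._buffer = []

    def configure(self, enc):    # call "B": documented as "any time before send"
        self._encoding = enc

    def send(self, text):        # call "C": silently tolerates a missing config
        self._buffer.append(text)

    def flush(self):             # call "F": the leak finally bites here
        if self._encoding is None:
            raise RuntimeError("flush: channel was never configured")
        return "".join(self._buffer).encode(self._encoding)
```

Calling `open()` then `send()` without `configure()` works fine for a while; the crash in `flush()` happens far from the call sequence that actually caused it, which is exactly what makes inherent complexity so painful to debug.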

Also, of course, some problems are just hard at a basic level (US taxes, for example), and no language helps there.

Another reason not mentioned so far: even when a piece of software achieves reasonably good "quality" (in the sense that it does what was initially specified), it can still go horribly wrong.

Why?

Because the final client may end up using it under conditions totally different from the ones specified at the beginning, making the software pretty much useless.

Case in point: a program computing the return rate of house mortgages, based on 20 years of data, does a perfect job... until you start feeding it NINA mortgages (No Income, No Assets): it will keep saying the return rate is fine. Actually... by now we all know all too well the real return rate on those mortgages ;)
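A sketch of the extrapolation trap described above (hypothetical numbers and names, purely illustrative): a default-rate model fitted only to historical, income-verified loans keeps returning its comforting historical answer when fed a loan type it has never seen.

```python
# Hypothetical historical default rates, keyed by loan type.
# NINA loans simply never appeared in the 20 years of data.
HISTORICAL_DEFAULT_RATE = {"verified_income": 0.02}

def predicted_default_rate(loan_type):
    # Silently falls back to the overall historical average for
    # unknown loan types: the failure mode described above.
    return HISTORICAL_DEFAULT_RATE.get(loan_type, 0.02)
```

`predicted_default_rate("nina")` cheerfully reports the same 2% as a verified-income loan, because the model's specification said nothing about inputs outside its historical data. The software meets its spec; the spec no longer matches reality.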

So, unlike with the quality of a house, you have to evaluate the usage conditions of software released into production to really assess its quality...

One of the reasons is that the industry is very young compared to, e.g., masonry (no conspiracy theories, please). Also, ours is a field that changes too quickly. Consequently, we are never at the top of the game :( but we do try.

The business reason: because the vast majority of software does not need to be high quality.

A low-quality bridge or a low-quality car can easily kill someone. Thousands, even. Low-quality software, however, will usually just annoy people. In most cases, it is simply not worth spending as much time making quality software.

Indeed, if you look at the software used in safety critical applications, it is usually of far higher quality. It has spent more time in development, and been more thoroughly tested. It has been engineered in the same way a car or bridge would be engineered.

Human brains are not really built for elaborate networks and relations of complex abstractions.

Thus, while humans can deal with complex buildings or machinery because these are (literally) concrete, relying on things like physical dimensions, proximity, and the natural laws of physics, the same cannot be said for software.

At best, we can use engineering practices to break problems down so that we only have to grasp smaller problems at a time, or so that the impact of our mistakes is limited.

In addition, human communication is ambiguous and imprecise by the nature of natural languages, while software must be precise by nature.

Moore's Law causes some of the problems. Unlike in civil engineering, everything changes quickly, so expertise, standards, and reliable components are obsolete long before they have matured. Before we can work out how to properly solve the problem, it's gone. Topical example: many-core processors.

Civil Engineering has also absorbed new technologies (e.g. pre-stressed concrete), but it's many times slower, and it doesn't require the reinvention of everything.

I also think the fact that software is so malleable that we can reinvent everything is a contributing factor to the pace of change. :-) It's more expensive to build a house, so people are more circumspect. However, as someone said, you're not allowed to build a cheap, ramshackle DIY shack (in developed countries).