If a car company sells you a car engine that bursts into flames, they're still held liable for damages. They can't just say "Uhmm... The engine may burst into flames, you should buy our next model".

The fact that Adobe can get away with this amazes me! With the theoretical engine problem you need to recall/repair each and every engine individually. With software, once you've developed the patch you can distribute it at next to no cost. There's no excuse for this.

Why can software engineers and companies get away with such horrendous practices?

> If a car company sells you a car engine that bursts into flames, they're still held liable for damages.

Not forever! Only for 10 years, after that they are no longer responsible. Software is the same way, only the timeframe is much shorter, and there is no set standard.

Although less than a year, as with CS5.5, is too short. I would suggest double or triple the usual time between major versions as a reasonable timeframe; in this case that seems to be about yearly, so Adobe should support old versions for two to three years.

But if cars are renewed annually with, say, one major revision every 4-5 years, then the critical lifespan should be about double the time between major upgrades. That way old products can't be killed the day they're updated.

> Why can software engineers and companies get away with such horrendous practices?

Because most of us who write software believe that the risks associated with effectively zero liability for software failure are far outweighed by the costs of government intervention. General purpose software is far too "easy" to create for mandatory liability to make any sense.

The better solution is for the market to demand simple security fixes in situations like this.

[Edited to clarify that software risk is less costly than government intervention]

They're not lazy. If you've ever used Fireworks in the Macromedia days and compared it to the massive behemoths of slowness added in each subsequent CS version, you can only conclude that they're not lazy. An effort this concerted to make no progress in ten years, adding naught but a twenty-second startup time on processors that weren't even conceived back in the Macromedia Fireworks 8 days, cannot be a product of laziness.

Just like I think that this here thing doesn't have anything to do with laziness. It's that Adobe wants you to pirate the crap out of CS6, because they know they won't get money from you anyway. They do know, however, that every cent they don't get directly from you is a cent they'll get from your future employer or customer or small business which is forced to buy Adobe CS6. It's not laziness; it's doing exactly what they need to do to make sure their repeat customers keep upgrading.

Microsoft have the technical capability to deactivate pirate installs of Windows through WGA. Instead, they choose to display a nag message and disable software updates. Microsoft spent a ton of money developing a really sophisticated anti-piracy system, but decided against using it to prevent piracy by end-users. To me, that speaks volumes about how piracy fits into their business strategy.

I was thinking this the other day, and given that Adobe is a company that could struggle with the demise of Flash on the web and growing discontent over its core product line, the time is almost right for an evangelist to take control and make sweeping changes. If a person came along and made drastic changes, such as the re-branding of Flash, a complete overhaul of the Adobe Creative Suite, and the stripping-down of the PDF format, I can see a lot of people turning their heads.

There's an old joke: Field Marshal Model is asked the secret of his success as a commander. He says the key is managing the men under you. They are smart or stupid, energetic or lazy. The smart and energetic make excellent field commanders, the smart and lazy make good staff officers, and the lazy and stupid can handle supply. And the stupid and energetic? Transfer them elsewhere.

Adobe isn't lazy. It takes extra work to implement multiple cross-platform UI libraries with twenty different slider widgets, none of which work quite right.

While popular, the comparison between cars and computer programs is not well chosen. Actually, comparing software to any physical object is pointless. The two only have anything in common on the surface.

If you were to make software undergo the rigorous testing that physical products like cars undergo, you would likely never be able to ship anything. If you did, the customer would not be willing to pay the price.

Software is infinitely more complex than even space shuttles. The number of possible combinations which your program can traverse is so big it doesn't make any sense.

You could of course start proving mathematically that your software will always behave correctly. This would require you to use a language which facilitates such methods, like Erlang. No more web development in PHP, Ruby, JavaScript, or anything else that relies on probabilistic garbage collection.

I guarantee you that once you've spent the money having your code proven, your costs will be so high that no one will buy your software. Instead they'll turn to the competitor who wrote it in VB, accept their EULA, and live with any errors.

The nature of software is not the same as that of physical objects. You can either accept this and plan accordingly, or you can delude yourself and keep getting angry about bugs.

I don't have any experience with engineering mechanical systems, but I think there are at least aspects of software that are more complex than building physical things.

Software is expected to scale by many orders of magnitude in many dimensions. The equivalent would be a vehicle that supports carrying between 1 and 1 million people, can travel anywhere between 1 and 1 million mph, running off fuel between 1 and 200 octane. Physical objects are never expected to support such wide scaling parameters, and yet this is very common in software.

Software is also expected to run on lots of different kinds of hardware with different features and performance characteristics. A rough analogy is a physical design that has to support being constructed from either aluminium or steel.

Since software is more abstract in nature, you'll often hear people saying that they weren't even sure what they were building until version 2. The requirements are also more likely to change during the engineering process. Mechanical things seem more likely to have a well-defined purpose and scope throughout the engineering process.

Eh, I don't really buy any of that. Have you actually worked in the mechanical engineering world? I feel like it's far more gray than software engineering. I might have specs on the output, but the environment is the actual physical world with all of its problems: corrosion, temperature, vibration, dirt, dust, etc. It just screws with you the entire time. The abstract environment of a computer is tame in comparison. The only thing you have to worry about is the dependencies, which is basically configuration management. Configuration management is a problem in the mechanical world too, except if you design a power plant to Rev B of the drawing and show up with a Rev A part that doesn't fit, you might be out millions of dollars and months of time, because there is no 'recompile' button when it comes to giant machined parts.

As for your specific examples, 'different kinds of hardware' is no different than saying my system needs to work at both -30F and 130F. Materials behave very differently at different temperatures, and we have to account for that. Some metals are weaker at temperatures as high as +25F. That's something you see all the time.

You are also vastly overrating the complexity of scaling. It's really not that hard. Are you really going to tell me it's harder to figure out how to scale a web site than it is to build a rocket engine? Because there are about 1,000 web sites out there with millions of users and only about 10 organizations building rockets.

No (which I admitted up front). But have you ever worked on large, high-availability distributed systems? When you say that scaling is "really not that hard", I suspect the answer is no. It is absurdly more complex than single-machine programming. There may be more people building large websites, but that probably has a lot to do with the fact that far more people visit websites than ride on rockets. If you compare the number of support staff needed to run a website like Amazon against those needed to launch a rocket, I bet they wouldn't be that far off.

I'm not saying mechanical engineering is easy; I'm just saying that software isn't easy either. I also don't think you can conclude that, because we have 60 years of mechanical engineering process, software should fit into the same processes.

Having worked on large scale systems, I would agree with him. Scaling is only hard when you completely ignore it at the design phase. IMO, designing scalable systems is often easier, because they need to be loosely coupled to handle failures. Honestly, I think most of the hard real-world software problems tend to involve legacy systems and the near-organic mess that builds up over time.

So what's your database system like? Well, we're halfway through the transition between A and B; we don't have a DBA, so Bob wrote something to create build scripts based on changes made in this file. It's buggy, and we're starting to try out C, but if you ...

These papers all describe solutions to "hard real world software problems" and have nothing to do with legacy systems. If you think there aren't hard problems in software, you're probably not working on one.

doc4t's next sentence ("The number of possible combinations which your program can traverse is so big it doesn't make any sense.") is critical in understanding the sentence you quoted. Writing software is not so complex, but guaranteeing its operation is insanely complex.

We have fatigue/vibration, corrosion, and wear. What's the equivalent in software? There is a reason they park perfectly good airplanes in the desert: we can't guarantee they won't fall out of the sky, because it's impossible to perfectly predict fatigue.

And I have issues all the time related to things failing 3 or 5 years after they were built (yet they have a 40-year design lifetime). Metals always seem to find a new way to corrode, and bearings find new ways to fail. There is no equivalent to a corrosive, hostile environment in software.

Not to mention the random things thrown at you in the physical world. If you design jet engines, be prepared for birds to get sucked in (hopefully not too many, and if so, hopefully your pilot can land in a nearby river full of ferries to pick up the passengers). If you design buildings, get ready for earthquakes of unknown size, hurricanes of unknown wind speed, and terrorists with various methods of taking your structure down.

We can't guarantee anything. In fact, we can barely test most of the complex stuff because it's too expensive. Cars are cheap relative to most things. They don't crash 737s to find out what happens or shake an entire city just to ensure that it is built correctly. You have to predict all of this using calculations, and it largely goes untested.

Most mechanical components obey underlying physical principles that have linear or quadratic approximations, at least in certain regimes of environmental and other factors. Therefore, we can model the component and we can know when we are unable to model it.

We manage overall system complexity via physical/mechanical modularization, with things to insulate against thermal, mechanical, chemical, electrical coupling. By testing individual components, we have basic assurances on overall system behavior.

Software attempts to do this with "good design principles", but the truth of the matter is that just about any software component in a typical application can completely jack up the global environment for other components, and processes can make OS and environment modifications that completely break other processes belonging to the same user.

Try issuing performance guarantees on an airplane whose fuel pump can set μ0 and ε0 to -1 if the ground crewman that filled the wing tanks was named "Bob Null".

Computers are physical machines that obey the laws of physics. Flipping bits at a lower microcontroller level can be observed as literally directing electrons to travel to specific chip pins.

With unit tests and behavioral tests, we can assume basic assurances on individual components working as a whole.
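As a toy illustration of that component-level assurance (the function and its behavior here are hypothetical, not taken from any comment above), a unit's contract can be pinned down by a handful of checks that exercise it in isolation:

```python
def null_safe_upper(name):
    """Hypothetical helper: uppercase a name while tolerating None
    (and surviving the literal surname "Null" from the example above)."""
    return "" if name is None else str(name).upper()

# Component-level checks: each assertion pins down one behavior of the
# unit in isolation -- the "basic assurance" unit tests provide.
assert null_safe_upper("Bob Null") == "BOB NULL"
assert null_safe_upper(None) == ""
assert null_safe_upper("") == ""
```

Of course, these checks only cover the inputs someone thought to write down, which is exactly the gap between testing and proof that the rest of the thread is arguing about.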

Engineering also has good design principles. One does not make gear teeth perfectly angular (take a look at the Antikythera Mechanism), because that leads to premature wear and poor performance. In fact, there are hundreds if not thousands of kinds of gear teeth, and interchanging them within the same application can have all kinds of long-lasting effects. Look into any vehicle recall in the past two decades and you'll see that nearly every one is an edge-case bug that slipped by QA.

Not accounting for the string "Null" being valid is bad design within the domain of software. Just as using frozen water as a bearing surface in high-speed rotating machines (Hey! It's hard and slippery! It's perfect!) is a stupid mistake, not accounting for valid "Bob Null"s will also lead to premature failure, if not for the database then for the business.

We've only been at software engineering for less than a hundred years. We've been at mechanical engineering for a good 2,000 (see the aforementioned Antikythera). We might need a few more years to iron out best practices as an industry.

Here's the thing: physical processes and failures tend to average out to nice smooth functions with Gaussian distributions. Each additional random variable has a minimal contribution to the average state of the system. Wear and tear tends to accumulate gradually over time until some mostly predictable breaking threshold is met.

With digital computers, however, the size of the state space that the system can occupy grows exponentially with the number of bits of state in the system, and changing a single bit can result in an explosive cascade of changes to the rest of the system[0]. Accumulated random failures of computer software very rarely lead to a nice, smooth, predictable probability distribution. Software failures are not caused by anything remotely resembling wear and tear.
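A quick back-of-the-envelope sketch of that exponential growth (using the usual rough estimate of ~10^80 atoms in the observable universe):

```python
def state_count(bits: int) -> int:
    """Number of distinct states that `bits` bits of state can occupy.
    Each added bit doubles the size of the state space."""
    return 2 ** bits

# One byte of state: 256 possibilities.
assert state_count(8) == 256

# A mere 266 bits -- about 33 bytes -- already admits more distinct
# states than the roughly 10^80 atoms in the observable universe.
assert state_count(266) > 10 ** 80
```

Real programs carry megabytes of mutable state, so exhaustively enumerating their states is hopeless by an astronomical margin.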

Read Feynman's analysis of the Challenger disaster if you want to see just how badly a physical engineering problem can blow up due to a change in a single parameter: in that case, a difference in temperature of a few degrees changing the mechanical properties of a rubber O-ring.

It's still a good example when trying to convey the complexity of software to people who don't understand computers, since most have an idea that space shuttles are very complex (which they of course are).

My point was that testing all possible combinations of how your app can execute is next to impossible, unless you are willing to cough up a serious amount of money for rigid mathematical proof. Which would then make it too expensive.

I think it does a disservice because it overlooks the fact that people have figured out how to solve these problems.

All engineers are human. Whether you are working on a space shuttle, an airliner, a nuclear power plant, or an iPhone app, you are a human. Humans make mistakes. Humans overlook things.

So how do we engineer really complex systems with hundreds or thousands of lives at stake to an exacting standard - knowing that the engineers are human?

The answer is to build a process that catches mistakes. I don't think software engineering has really caught up with mechanical engineering in terms of process.

I know a lot of guys who love to wrench on cars. They swap parts, add horsepower, change out the suspension, etc. They can build a really fast car. But that's not mechanical engineering. They are mechanics.

In a lot of ways writing software is like that. Glue together some libraries and APIs the same way a tuner supercharges an engine. But that isn't engineering.

Obviously we don't need the rigour of the space shuttle to make an iPhone app, but if your application calls for that complexity (or your budget/liability is large), then you need to bring in the process mechanical engineers have been using for the last 60 years.

That means multiple people checking all the code. That means a well-planned-out arrangement/architecture. That means testing the individual parts thoroughly and the whole system together. And it means very specific configuration management of every dependency.

It's not impossible; it's just not the willy-nilly fun part of hacking stuff together. It's the ugly, paperwork-inducing, lame part of working in a big company. But that process, if done correctly, helps catch mistakes.

You can build software as reliable as a car, but that's not the issue. You cannot build software with all of the features desired by management in the time allotted and also make it robust. It's a matter of priorities, and robustness is not Adobe's priority.

Although you're right that for some projects, the poor quality is because it's more fun to just hack it together, but for many, it's a matter of business priority. I've worked on projects (avionics software) that had the rigor that you describe. I've also worked on projects where the developers consistently tried to add robustness, but management kept redirecting them to add more features.

"The answer is to build a process that catches mistakes. I don't think software engineering has really caught up with mechanical engineering in terms of process."

I agree. I'm not sure it ever will. But comparing software to a car and the relationship between the buyer and seller is too simplified. Software has bugs. Many more bugs than cars, because it's not tested properly, which we don't do because no one would buy it at the price that comes with proper testing.

You can accept this and write your contract accordingly, or you can sit down, sulk, and be disappointed when it fails.

"There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies and the other way is to make it so complicated that there are no obvious deficiencies." — C.A.R. Hoare

"The computing scientist's main challenge is not to get confused by the complexities of his own making." — E.W. Dijkstra

The OP is not comparing cars to software. They are comparing the nature of the commercial relationship between an owner and the car manufacturer with that between a user and Adobe. Expecting free "recalls" from Adobe is not unreasonable.

Exactly. I was not saying software should be theoretically guaranteed -- the responder just assumed that.

Adobe has a fix to a serious vulnerability. Not releasing it when the cost to them is tiny is essentially criminal negligence, especially when they say the fix is available to those who are willing to pay...

This is the same company that owns Flash which runs on >99% of the desktop machines connected to the Internet.

Ok. The possible combinations of ways your application can (theoretically) run far outnumber the estimated number of atoms in the visible universe, even for small programs. You just need a couple of loops in loops. If your program doesn't have them, I'm sure Node, Apache, Postgres, Rails, or whatever has plenty.

While many of these combinations may never happen, you would still have to prove that none of them causes your program to go into a state which you cannot handle.

"and then go and make some of your own comparisons with the space shuttle"
This was a comparison of complexity - not a direct comparison between the two.

"The possible combinations of ways your application can (theoretically) run far outnumber the estimated number of atoms in the visible universe, even for small programs. You just need a couple of loops in loops."

Can you elaborate on this? I'm not convinced that this is true (but am willing to be proven wrong).

The user runs some client code which you wrote, in a browser which other guys wrote, running on an OS made by someone else, sending data back and forth via protocols and network equipment with software that still other people wrote.

Your server OS receives the request and passes it to your load balancer, which distributes to Apache, which forwards to PHP, which routes to SQL... and all the way back.

With the millions and billions of lines of code involved in these steps, the number of possible combinations could easily reach that magnitude.
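The path explosion described above can be sketched in a few lines; the branch and iteration counts here are illustrative, not measured from any real stack:

```python
def path_count(branches: int, iterations: int) -> int:
    """Distinct execution paths through a loop whose body takes one of
    `branches` branches on each of `iterations` passes: branches ** iterations."""
    return branches ** iterations

# A single if/else inside a loop that runs 300 times already yields
# 2^300 (about 10^90) possible paths -- more than the estimated 10^80
# atoms in the observable universe, before nesting any further loops
# or adding the browser, OS, and network stack on top.
assert path_count(2, 300) > 10 ** 80
```

Nesting a second loop multiplies the exponents together, which is why even modest programs are far beyond exhaustive path testing.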