Toyota had an odd pair of recalls this week, highlighting both the increasing importance of software within the automobile and further reinforcing a pet theory held by your humble author.
The substance of the recalls can be found on Toyota’s website, but here are the money shots:

Toyota will update the motor/generator control ECU and hybrid control ECU software on certain Model Year 2010-2014 Prius vehicles. The software’s current settings could result in higher thermal stress in certain transistors, potentially causing them to become damaged. If this happens, various warning lights will illuminate and the vehicle can enter a failsafe mode. In rare circumstances, the hybrid system might shut down while the vehicle is being driven, resulting in the loss of power and the vehicle coming to a stop.

Toyota will update the skid control ECU software on certain 2012 Toyota RAV4, 2012-2013 Toyota Tacoma, and 2012-2013 Lexus RX 350 models in order to address an electronic circuit condition that can cause the Vehicle Stability Control, Anti-lock Brake, and Traction Control functions to intermittently turn off. If these systems are off, standard braking operation remains fully functional.

Stuff like this is why engineering products for automobiles is about the worst career in the world. Not only are you faced with uncompromising cost controls, weight targets, and space constraints, you also have to consider the fact that your end user will be anywhere from Fairbanks to Death Valley with his foot flat to the floor, dead leaves in the radiator inlets, and dirty oil sloshing below the minimum mark in the sump. It’s nothing short of miraculous that cars work as well as they do, really.

In the case of the Prius, we have a situation where, presumably, too much current is being fed through something too close to a transistor, or perhaps the transistor is being too heavily loaded and overheating, the same way you can make the bottom of your laptop too hot to touch doing video editing in realtime, engaging in brute-force attacks on encrypted documents, or trying to load the various crazy Flash stuff embedded on this here website. Either way, it’s too hot to handle, so the system decides to lay off on the computing and/or the power transfer until the situation improves.
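A thermal failsafe of that kind is, at heart, a derating loop: watch a temperature, cap the power when it crosses a limit, and hold the cap until things cool back down. Here's a minimal sketch of the idea in Python; every name, threshold, and the hysteresis band are invented for illustration and have nothing to do with Toyota's actual firmware:

```python
# Illustrative thermal-derating logic. LIMIT_C, DERATED_KW, and
# NORMAL_KW are made-up numbers, not anything from a real ECU.

LIMIT_C = 125.0      # hypothetical transistor temperature limit
DERATED_KW = 10.0    # hypothetical reduced power cap ("failsafe mode")
NORMAL_KW = 60.0     # hypothetical normal power cap

def allowed_power_kw(temp_c, failsafe_active):
    """Return (power cap, failsafe flag); the flag latches until
    the temperature drops well below the limit (hysteresis)."""
    if temp_c >= LIMIT_C:
        return DERATED_KW, True    # too hot: derate immediately
    if failsafe_active and temp_c > LIMIT_C - 15.0:
        return DERATED_KW, True    # still warm: stay derated
    return NORMAL_KW, False        # cool enough: full power again
```

The hysteresis band is the important part: without it, the system would flap between full power and failsafe right at the limit, reheating the very transistor it was trying to protect.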

The other recall would appear to involve a potential “electronic circuit condition”. Allow me to take a wild-ass guess and say that it probably is a condition where a combination of inputs to the software creates a loop or a race condition. The latter term has nothing to do with civil rights or green flags; more a situation where a couple of variables are fighting it out for supremacy. When that happens, from the perspective of the user, the software simply goes out to lunch. Blue screen of death, endless spinning beachball, a Flappy Bird stuck in mid-air eternally because your phone rang right as you were also trying to cue up the next Fleet Foxes song. Of course, if the “user” is the ABS system in your Tacoma, then it, too, has to wait until the next reboot, which in the case of a car can range from a few minutes to the next time the battery is disconnected.
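For the curious, the classic "lost update" flavor of race condition can be sketched in a few lines of Python. The interleaving below is written out by hand so the failure happens every time; in real software the scheduler decides when the reads and writes land, which is exactly why these bugs are so miserable to reproduce:

```python
# Two tasks each mean to add 1 to a shared counter, but their
# read-modify-write steps interleave. The interleaving is simulated
# by hand here so the "race" is deterministic for illustration.

counter = 0

def read():
    return counter

def write(value):
    global counter
    counter = value

a_saw = read()      # Task A reads 0
b_saw = read()      # Task B reads 0, before A has written back
write(a_saw + 1)    # Task A writes 1
write(b_saw + 1)    # Task B writes 1, clobbering A's update

print(counter)      # -> 1, even though two increments "happened"
```

The fix is to make the read-modify-write sequence atomic (a lock, a disabled interrupt, an atomic instruction); the hard part is knowing every place one is needed.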

“Well what are you doing? Let’s get out of here!”

“Can’t. Computer’s jammed.”

“Jammed?”

“It says all its circuits are occupied. There’s no power anywhere in the ship.”

Ford moved away from the computer terminal, wiped a sleeve across his forehead and slumped back against the wall.

“Nothing we can do,” he said. He glared at nothing and bit his lip.

When Arthur had been a boy at school, long before the Earth had been demolished, he had used to play football. He had not been at all good at it, and his particular speciality had been scoring own goals in important matches. Whenever this happened he used to experience a peculiar tingling round the back of his neck that would slowly creep up across his cheeks and heat his brow. The image of mud and grass and lots of little jeering boys flinging it at him suddenly came vividly to his mind at this moment.

A peculiar tingling sensation at the back of his neck was creeping up across his cheeks and heating his brow.

He started to speak, and stopped.

He started to speak again and stopped again.

Finally he managed to speak.

“Er,” he said. He cleared his throat.

“Tell me,” he continued, and said it so nervously that the others all turned to stare at him. He glanced at the approaching yellow blob on the vision screen.

“Tell me,” he said again, “did the computer say what was occupying it? I just ask out of interest …”

Their eyes were riveted on him.

“And, er … well that’s it really, just asking.”

Zaphod put out a hand and held Arthur by the scruff of the neck.

“What have you done to it, Monkeyman?” he breathed.

“Well,” said Arthur, “nothing in fact. It’s just that I think a short while ago it was trying to work out how to …”

“Yes?”

“Make me some tea.”

“That’s right guys,” the computer sang out suddenly, “just coping with that problem right now, and wow, it’s a biggy. Be with you in a while.” It lapsed back into a silence that was only matched for sheer intensity by the silence of the three people staring at Arthur Dent.

As if to relieve the tension, the Vogons chose that moment to start firing. — Douglas Adams, The Hitchhiker’s Guide to the Galaxy

Your humble author is pretty good at getting cars to enter software failure modes. I experienced it recently in both the Nissan Juke and the Infiniti Q50S, in each case under conditions of speed, driver inputs, and available traction that I would cheerfully characterize as “abusive”. In fact, I’d say that it’s easier, in general, to “break” the dynamic systems of a car through hard driving than it is to break anything else. I’ve had far more ABS or stability-control failures than I’ve had, say, front wheel bearing seizures or dropped driveshafts.

There’s a reason for this, and now it’s time for my pet theory. Believe it or not, I’ve done a little bit of professional software development in my life. This will come as a great surprise to all of you who have considered my squeaky voice, prickly disposition, and tendency to quote Douglas Adams to be infallible evidence of a life spent as a Rhodesian mercenary. Do me a favor and keep quiet about this around the ladies; I always tell them that I paid for my Porsches by doing figure modeling. In any event, I’ve made some bucks writing software, I’ve spent some miserable hours dealing with other people’s work, and I’ve participated in everything from solo development to the current XP/Agile/Kanban/Pivotal idiocy that’s sweeping the industry.

Once upon a time, software was written by people who knew what they were doing, like Mel and his descendants. They were generally solitary, socially awkward fellows with strong awareness of TSR gaming. They were hugely effective at doing things like getting an Atari 2600 to run Pac-Man or writing operating system kernels that never crashed, but they weren’t terribly manageable and they could be real pricks when you got in their way. I once worked with a fellow who had been at the company in question for twenty-three years and had personally written a nontrivial percentage of the nine million lines of code that, when compiled, became our primary product. He was un-fire-able and everybody knew it. There were things that only he knew.

This kind of situation might work out well for designing bridges or building guitars (not that Paul Reed Smith appears to miss Joe Knaggs all that much, to use an inside-baseball example) but it’s hell on your average dipshit thirty-five-year-old middle manager, who has effectively zero leverage on the wizard in the basement. Therefore, a movement started in the software business about fifteen years ago to ensure that no more wizards were ever created. It works like this: instead of hiring five guys who really know their job at seventy bucks an hour each, you hire a team of fifty drooling morons at seven bucks an hour each. You make them program in pairs, with one typing and the other watching him type (yes! This is a real thing! It’s called “extreme programming”!) or you use a piece of software to give them each a tiny bit of the big project.

This is what you get from a management perspective: fifty reports who are all pathetically grateful for the work instead of five arrogant wizards, the ability to fire anybody you like at any time without consequence, the ability to demand outrageous work hours and/or conditions (I was just told that a major American corporation is introducing “bench seating” for its programmers, to save space), and a product that nominally fulfills the spec. This is what you get from a user perspective: the kind of crapware that requires updates twice a week to fix bugs introduced with the previous updates. Remember the days when you could buy software that simply worked, on a floppy disk or cartridge, with no updates required? Those were the wizards at work. Today, you get diverse teams of interchangeable, agile, open-office, skill-compatible resources that produce steaming piles of garbage.

Enough of the rant. I can’t wait for the day when I never have to touch a computer again to make a living. Admittedly, it will be because I’m a sixty-three-year-old Wal-Mart greeter. But I’m looking forward to it. Where were we? Oh yes. An embarrassing amount of the software in the cars we drive is outsourced to programming farms where the wizards were long ago cut loose. Modern auto manufacturers sweat every detail of the unibody and the tire specs and the thickness of the rear door glass, and they create modern engineering wonders which they then proceed to load up with the cloacal expulsions of moronic bench-seated 120-IQ “programmers”. It’s no accident that software updates account for a large share of recalls nowadays. The software’s written by people who expect a chance at a do-over, not realizing that a Toyota Prius is a little harder to update than, say, a useless Android app.

Given the increasing evidence of this problem, what will the manufacturers do? Will they resurrect the wizards? Bring the programming in-house? Restore pride to the profession? Hell no. The future belongs to Internet-connected cars seamlessly upgrading their firmware twice a week. It sounds very advanced, and it is. But if you want something that reliably gets you to work or pumps its own brakes on an icy road, you might want to stick with the old stuff.

Mature- No, she could be just 18.
Intelligent – Not necessarily, and not necessary.
Lady – The less lady-like, the better.
Friend – You are completely missing the point! (“Oh, no, we couldn’t do that, it would ruin our wonderful FRIENDship!”)

I am quite sure that we will soon create the Infinite Improbability Drive although it will likely result in the death by lynching of its creator.

http://hitchhikers.wikia.com/wiki/Infinite_Improbability_Drive

Arthur: All my life I’ve had this strange feeling that there’s something big and sinister going on in the world.
Slartibartfast: No, that’s perfectly normal paranoia. Everyone in the universe gets that.

Safety-critical systems were rarely written by the wizard in the basement, Jack. They were written by engineers. Engineers know how to design complex systems, plan for failure modes, and work with other engineers to get the job done. It’s not that the people working on software now are any less intelligent than the people working on the suspension or brakes or body or manufacturing processes; it’s just that nobody ever taught them how to be engineers.

Nowadays we have a lot of cowboy hackers, not very many engineers, and not a lot of respect for engineering culture. The cowboy model works really well when you have the smartest person in the room writing code and you treat them with respect, but the failure modes when you don’t get that person are atrocious. When you try to bring the cowboy model to a 50-person team of average programmers, you get crap like XP.

Software really is a lot more complex than hardware design is these days. The sheer number of “moving parts” (so to speak) in the software part of most modern products dwarfs any other part of the product. The sloppiness that leads to problems like these is simply inexcusable. Relying on having one super-smart programmer to write decent code and avoid these problems is no way to produce products that might kill people when things go wrong.

Although I work in semiconductors, not software, I was exposed to XP-related jargon through an internally developed Java-based data management system created for our team. The result of these many “scrums” was a project that lasted for years and produced features that were 50% broken upon initial release and would take three to four releases before the original specification was met. Couple the poor-to-nonexistent testing with “code freeze” periods, and we would have to trudge along for six months or more with poorly functioning systems that made routine tasks more difficult instead of more efficient.

“The result of these many “scrums” was a project that lasted for years and produced features that were 50% broken upon initial release and would take three to four releases before the original specification was met.”

I have seen the same movie twice now and it ends the same way as yours. Folks, you can Godwin me all you like, but “agile” is Nazism for IT; it gives me much facepalm.

I concur with Jack, but Windows XP (NT 5.1) was essentially Windows 2000 (NT 5.0), which, despite 64,000 known defects at launch in 1999, was a huge improvement over the two operating systems it replaced (NT 4.0 and Win 9x). The age of MSFT is over, however; smart users will switch to Linux, as it is clear Redmond has been asleep at the wheel for a decade.

Learn Linux. I am no expert in it myself but it is clear to me from a vendor/operating system standpoint we now live in a multi-polar world. Linux has been around for a very long time but is starting to gain mainstream acceptance in the US (as it already enjoys in most of the world). Macs were a complete joke to be ignored all through my education and most of my professional life. Now OSX needs to be accommodated by IT as well as all of the other stupid toys Apple produces. Windows is no longer the default option and MSFT has nobody to blame but themselves. Ironically the only place MSFT enjoys complete market dominance is China where most of it is pirated.

In my not so humble highly paid professional opinion, while Linux DOES have a number of uses, it is best to always remember that it is only free if your time has no value. My time most assuredly has great value, thus I run Windows as my desktop. MacOS is pretty, but manages to be no more stable on the handful of hardware configurations that Apple supports than Windows on a nearly infinite variety of configurations.

As to what Jack was saying, THAT is why I stay firmly on the hardware side of the computer world. It is a lot harder to outsource hardware implementations to 3rd world countries. Though of course, hardware and software are ultimately conjoined twins.

Hardware was my first love but depending on what you do with it jobs are generally less plentiful and sometimes tough to come by (such as chipset designer). Just ask my three friends all of whom are Penn State Computer Engineering graduates. One develops .NET, one Java, and the third does third level support for robotics. In your profession KRhodes, do you simply “run” Windows on a PC imaged by IT or purchased by a vendor, or do you interact with its underlying technologies?

I kind of am “IT” – nobody images anything for me but me. :-) Actually, my company’s IT folks would if I wanted them to, but we Professional Services guys are given a lot more latitude than the rank and file. I would rather DIY.

I’m primarily a SAN engineer, though I also work extensively with NAS, virtualization technologies, and backup (though I try to stay away from backup projects).

This week I will be in San Francisco installing a new SAN, replacing a number of ESXi hosts with new hardware and upgrading to ESXi 5.5, migrating the VMs to the new SAN and the new hosts, and implementing new backup hardware and software. A fairly typical project for me.

I will freely admit to having absolutely no programming skills beyond the ability to write scripts. My real primary skill set is herding cats, which is the greatest asset you can have as an implementation engineer/consultant.

>> while Linux DOES have a number of uses, it is best to always remember that it is only free if your time has no value. My time most assuredly has great value, thus I run Windows as my desktop.

I use Mac OS X at home, Linux at work, and Microsoft Windows when I have to. I find Microsoft Windows wastes more of my time with virus updates (and the subsequent reboots) and poorly written software (e.g., the ribbon toolbar in MS Office).

Windows can be *made* as secure as Mac OS X and Linux, but out of the box the admin account is enabled. In contrast, the root user in Mac OS X is not. And for Linux, you have to explicitly enable it.

Plenty of online resources can help you look into it. Plus, through the proliferation of virtualization technologies, you can download pre-configured Linux images to run on your Windows-based PC. Use your fav search engine and look into it. You don’t have to program anything to be a basic user of a Linux distro, just as you don’t have to interact with the shell in OSX (unless they removed it since 10.3?).

“I like Windows 7 because it works”

Gave me a hearty laugh, as it barely “works” (it works in the same way NT 5.1 with ten years of patches works). Sure, I like the 64-bit aspect of NT 6.1 (Win7), but when my new Precision uses 2GB at idle after a cold boot on an optimized corporate image, something is amiss. I learned all of the 5.x Windows platforms and still run them for personal use, but professionally, seeing how screwed up the MSFT products have become makes me want to stop using them. If I ever get into a leadership role in either my company or my next job, I’m going to recommend shifting development away from their platforms; SQL Server is about the only thing left which is attractive in its segment. Windows 7 will probably be the last mainstream version of Windows, and it will become what Windows XP has become, probably soldiering on longer than intended because its replacements are so radically different/terrible. I think the proles have had enough of 8 dot infinity, and by the time the desktop hack, er, fix comes out in 2015 they will have moved on. IT depts are (and always have been) what’s keeping MSFT in business; depending on the trend in 14/15, we may start to see a new direction. So much is changing on the user level; it’s just not a “Windows” world anymore.

@28: It works for me, but I don’t expect much from it. If I was doing more intensive stuff with my computer, then yeah, I’d probably actually need something different, but so far Windows 7 has given me no serious problems.

It’s horses for courses. If you are trying to make a very secure, rather single-purpose system, Linux/Unix may well be the bee’s knees for the job. I would not want to run a warship on Windows. But I will not run my desktop or the majority of my company’s infrastructure on Linux either. But a webserver? Sure! Linux has its place, but for the most part it is not on the desktop. And there is a big difference between Windows as a desktop client OS and the various flavors of Windows as a server OS, despite their more or less shared kernels.

Windows rules the enterprise world because it is “good enough” and Microsoft has really GREAT management tools. There is simply no equivalent to Active Directory in the Linux world. And since Windows has overwhelming market share, the majority of applications are written for Windows. Companies that develop in-house are more likely to have other OS around the place though. But again, specific purposes. Windows is the minivan of the software world. No one loves it or thinks it is stylish, but it gets the job done in a very efficient workaday manner.

You can’t reasonably denigrate Win7 for patches when Linux distro patches have proven just as necessary. Tally up the number and unpredictability of updates required to keep a functioning Linux distro up and safe, and it is a wash.

Agree with you that Bring Your Own Device and distributed platforms is the foreseeable future. MSFT and Apple won’t control nearly the CPU cycles they did in their heyday ever again. Linux, Android, Microsoft, and Apple will all carve out niches until someone else comes along and tips over the balance the way DOS did to mainframes and the way Android is doing to Microsoft.

I’ll concede your point on patches, but there is other stuff about the “new Windows” which irks me. I’ll give you an example from today. I have a corporate image on my Precision, and the hibernate function occasionally doesn’t pick up when the lid is shut (I’m not sure why). When I brought it home last night, it ran in my bag until it drained the battery. I realized what happened when I docked it at work today; no biggie. But when I booted it, IE11 would not connect to the internet or our intranet, nor could Visual Studio reach its source control server. After some troubleshooting I determined the PC could connect to the network and I could get online with Firefox, but none of the MSFT-specific products were properly functional. I talked to IT, who were also perplexed; we theorized something happened with the local user profile when it died, and eventually they suggested hitting the “reset” button in IE options. This button and a reboot automagically fixed everything: IE11, TFS, Outlook, the whole shebang. Granted, this level of MS eccentricity doesn’t surprise me, but I do know that in a previous life, running XP on my M90 and M6500, I never experienced issues which prevented me from doing my job. I’m going to look into a good minimalist distro and run Windows sessions in VirtualBox.

To make it simple, Ryoku, OS X and even Apple’s previous MacOS were far more stable than Windows has ever been. I get the impression that Windows is now an example of “Xtreme Programming” while OS X is still created by “wizards.”

I have 20-plus years of mixed IT experience. At best, MacOS was more stable than Windows back in the days of Windows 3.1 and Windows 95. For a good while once Windows went 32-bit with XP, Windows was MORE stable than OS8 or OS9; OS9 in particular was a hot mess. Apple did not even have proper multitasking until OSX was released. Now I would say it is dead even. Neither OS actually crashes very often. The apps running on them crash all the time. Typically, if you have a crashy Windows system, it is due to a piece of faulty hardware or a poorly written hardware driver.

But here is the thing: Apple only supports a VERY small and limited set of hardware with each release of OSX, and they are not afraid to dump support for relatively recent systems. In contrast, Microsoft will typically support ANYTHING. Any random collection of new or ancient hardware, and if you can get the drivers, Windows will run on it. Maybe not very fast, but you can at least try. And it will run with acceptable stability. I have Windows 8.1 running on a machine in my garage that was built in the previous CENTURY! OSX 10.9 won’t run on some machines that were built 5 years ago.

So until Apple can produce an OS that can run acceptably on any random hardware, color me unimpressed by their accomplishment.

As for the hardware, it’s nice, but it is very much premium priced. And they make a very limited selection of machines. Personally, I have a 2013 MacBook Air as my work travel machine. But I run Windows 8.1 on it, as the majority of the software applications I need for work do not exist for OSX. I can’t be bothered to run TWO OS.

At Vul: My comment was directed to 28Days; this site’s weird reply system simply misplaced it.

As far as I know, Macs were more stable mainly because they never had to put up with the many programs that a Windows machine might; it’s a mix of owner demographic, product availability/adaptability, and the general market.

I will grant some of your points, but you patently blow off the REASONS why Apple supports that “very small and limited set of hardware with each release of OS X.” The biggest reason for this is *stability*. By limiting that hardware, OS X simply doesn’t have to battle conflicting drivers and other instabilities. The automotive business should be the same–offer only a very limited set of *controller* hardware so that the system doesn’t have to battle incompatible devices. As such, they will be relatively stable and relatively easy to update.

Hey! Cars do have limited hardware! In most cases the controlling computers are designed and built either by the manufacturer themselves OR to their specs. The code for these devices is typically written BY the manufacturer’s engineers and simply should be well enough designed to minimize or eliminate conflicts. This also means that issues of the types described above simply should not be happening. More interesting yet, it seems that Chrysler, once noted to have the best electronics division in the industry, is doing its best to regain that crown by willingly holding a new model off the market until it got the software right. Sure, it created some bad press from people who already dislike Chrysler/Fiat, but people who really pay attention to what the different companies do lauded Chrysler for its unselfish act.

No. Apple should never even consider releasing its OS to “any random hardware” because quite simply that would open Apple to the exact same instabilities Windows has to battle on a daily basis. Sure, you’ll get some good components and drivers, but the few pieces of junk hardware or junk code will crash OS X just as readily as it would crash any iteration of Windows.
Meanwhile, it is exactly that reason why OS X and Apple products are encroaching on Windows’ enterprise market. Its stability, its relative security, and its hardware quality make the Mac no more expensive in the long run than any other PC. Since Macs usually operate almost trouble-free for five years or more, the enterprise customer does not need to replace the hardware as frequently, either; the Macs typically outlast any TWO generic PCs performing the same tasks, AND this includes running Windows itself rather than the Mac’s native OS X.

But reality is that OSX is not more stable than any modern flavor of Windows when run on decent hardware. There is just nothing in it. The only real inroads Apple is making into the enterprise is iPhones and iPads; they effectively abandoned their server and storage offerings several years ago. For every Apple computer in the graphics department, there are 20,000 boring little Windows boxes (or VDI thin clients) on cubicle warriors’ desktops, and that is not changing anytime soon.

I have nothing against Apple’s offerings at all; if you can get the job done with them and like them, have at it. But I am also not under any illusions that they are fundamentally better.

However, unless you’re willing to state that most Windows software was pretty much junk, your instability argument doesn’t make a lot of sense. Rather, it was the junk controller software (drivers) for equally junk hardware that caused 99% of Windows’ stability problems, and still does. Limit the hardware to the best or near-best available and you eliminate the issues. The problem with that is you end up having to jack up the price to pay for that better hardware.

One of the biggest arguments against Apple since they adopted Intel processors is that they’re now using “generic, off-the-shelf hardware”. This assumption is wrong. Sure, it looks the same, but then resistors, capacitors, inductors, and processors all look the same unless you can read the codes on their surfaces. Can you tell at a glance which resistor is a 1% tolerance compared to a 10% tolerance? Most non-technicians can’t. Apple demands, and has demanded for decades, that the tolerances of their discrete components be far tighter than most users can imagine. In fact, having worked for one of Apple’s suppliers, I’ve had to re-calibrate testing hardware just for Apple so we could test 100% of the devices we were shipping out by the truckload: tiny inductors smaller than a pencil eraser that had to measure within ±5% of the printed value. On devices measuring in the millihenries and smaller, the permissible variance was less than the testers’ operational sensitivity prior to recalibration.

I think you’ll find that OS X is encroaching more than you think. Sure, it’s a slow process, but while every other brand of PC is seeing a DECREASE in sales, Macs are increasing and are moving onto the desks of those cubicle warriors, clerks, and secretaries.

But you are absolutely right on one point, one point that almost nobody else has bothered to note: “But reality is that OSX is not more stable than any modern flavor of Windows WHEN RUN ON DECENT HARDWARE” (emphasis mine). When corporations were run by accountants, the almighty “bottom line” meant everything, which meant hardware was purchased from the “lowest bidder” (sound familiar?). Even now, more often than not, hardware is purchased at the lowest possible price, and I’ve seen one corporation have to replace twenty thousand laptops after only one year in operation because nearly 75% of them had already failed! Where is the cost savings when you have to put out as much money OR MORE for replacements after only one year? Macs have developed a reputation for reliability that outweighs any up-front expense when measured over the course of years.

As to the much vaunted Apple hardware – I’ve owned a variety of Macs over the past 20 years. Almost every one of them has had expensive hardware failures. Motherboards, power supplies, RAM chips, a graphics board. I have had failures of PC components as well of course, but at least they are cheap and easy to fix. The very best machines I have owned by far have been IBM then Lenovo Thinkpads.

Most recently the SSD in my MacBook Air took a dirtnap. It took Apple TWO WEEKS to get me a replacement, which is completely unacceptable. Since they use an oddball nearly proprietary part, I could not just buy one elsewhere. If I had the company standard Dell laptop, Dell would have sent a tech to my hotel to fix it, or I could have bought a drive at BestBuy and stuck it in myself. Lesson learned, but in mid-2012 there was no other machine as light and capable as the Air. There are other alternatives now. But I made my bed and I get to live with it for four years, which is our replacement cycle.

@wheelmcoy
Oddly enough, the only virus I have ever had was on a Mac SE30 of all things. I just use the built-in windows AV software in 8.1 and don’t worry about it overmuch. Of course I also don’t go clicking on every random e-mail attachment and poking in the darker corners of the internet.

I’m dating myself here, but we considered it a joke at the time because the software and media were not interchangeable with what was termed as “IBM and compatible”. Since about 98% of the computer world fell under this purview by the late 90s, you could comfortably ignore the Macs and simply dismiss their user/fanboy concerns. Now OSX and other Apple toys have real market share and can no longer be completely ignored, much to the chagrin of people like me. I remember in 2011 when I first took my current job the QA manager at the time wanted to know if our circa 2001 legacy ASP based product could support Safari (which it was not designed for). My rough words to her were something along the lines of “Who cares about Safari, simply mandate FF or Chrome”. I personally still don’t care about Safari because two real browsers exist for OSX (the aforementioned FF and Chrome), but I still had to make enhancements to the legacy product (to support IE10) and we still ended up modifying our support contracts for it to include Safari and began testing against it. The lesson I learned is you can’t always let your bias rule over you (and the inclination to ignore the fanboy sh*t), especially if it actually gains market share.

@KRhodes/Vulpine

You describe the OS X vs. Windows situation well. I was always told Apple was a hardware company whose software was purpose-designed for said hardware (although I may have been misinformed). Personally, I like cheap used hardware, and although I can see the value of purpose-designed software, I am not in the camp that wishes to buy new hardware every so often just to keep up with the latest software release. Just like my cars, I want to run my hardware into the ground, and I would prefer my software be written to a looser standard in order to accommodate many types of hardware (i.e., Linux). Does Apple software written for a specific hardware model work better? Perhaps; I suppose it depends on what you need it to do.

“The very best machines I have owned by far have been IBM then Lenovo Thinkpads”

I’m typing on one; they’re the cat’s tats for the money.

@sgeffe

A Microsoft Windows built on OS/2? That might have been a game-changer.

Incidentally, OS/2 powered ATMs for years. I’m not sure what those machines are running now, but I’d be curious to know.

At 28: I remember my main reason for skipping Macs being that they couldn’t run many games or much other software, but things have certainly changed now.

Now it’s my lack of knowledge about them and general displeasure with fanboy nonsense. Then again, “fanism” put me off Volvos until I got one, so I’ll consider a Mac in the future if the hardware’s good.

I’ll admit that I can never really get past my biases, but I can “overwrite” them when needed.

At Vul:

The Windows OS shipped on a number of different computers with different grades of hardware; lumping them all together is a bit naive, since the hardware quality varies widely with the intended market.

I can say this much, though: I’ve toyed with an old cheap Mac and booted up my ancient Windows 95 machine a few times.

The 95 will start and work just as it did back in its day. It’s a slow piece of junk by today’s standards, but the hardware’s pretty solid.

The equivalent Mac had a discolored monitor and ran even slower; I’m pretty certain it was a few years newer, too.

Given how much computers have changed, this experience doesn’t hold much weight, but it still bears on your argument that “Apple good, PC BAD.”

And the fabulous GM doesn’t do the same? Man you’ve swallowed so much pure nonsense and regurgitated so many fantasies about Saab and GM, they must nudge each other at work, and look pityingly your way.

I don’t believe Toyota or GM knowingly sets out to unleash crap software on their customers, but apropos of this article, both may be guilty of not implementing and maintaining standards for their software development.

“11 – 01 – 2014

General Motors Co. will recall 370,000 full-size pickups for fire risks after eight reported fires. The automaker is urging owners not to leave their trucks unattended while idling. GM said Friday it will recall 2014 Chevrolet Silverados and GMC Sierras to reprogram software that could lead to overheating of exhaust components, potentially causing engine compartment fires. The 2014 Silverado is one of three finalists for North American Truck of the Year, which will be awarded Monday at the North American International Auto Show.

When the truck idles, it should use two cylinders. But because of a software glitch, the recalled trucks idle with most of the cylinders. That causes the vehicles to overheat and leads to the fires. All of the vehicles that have reported fires are trucks with V-8 engines.

The potential hazard often is signaled by a continuous yellow “check engine light” and an “engine power reduced” message in the driver information center.

The recalled trucks have 4.3-liter V-6 and 5.3-liter V-8 engines and include about 303,000 in the U.S. and 67,000 in Canada and Mexico. Trucks with 6.2-liter V-8 engines are not part of the recall.

GM confirmed eight fires caused by the software flaw, three of which were on customer-owned vehicles, but said nobody was hurt. All incidents were in areas with very cold weather, the Detroit-based automaker said, and four of the fires were in trucks still at dealerships.

One of the fires was on an employee’s company-owned vehicle, whose garage was damaged. The problem was discovered in part because more trucks were being left to idle for longer periods during the recent extreme cold. All of the fires were reported in December and January.”

It definitely still exists, but it’s rare. I used to work on telco systems – not safety critical, but still required to provide five-nines availability – and what we were doing was definitely software engineering. It’s a culture as much as a way of working, and with schools focused on producing lowest-common-denominator graduates to hack apps and NodeJS, I fear it will die out sooner rather than later.

I’m pretty sure it exists for FAA-certified electronics (you have to be able to justify every line of code; if testing finds a bug, it is a major scandal). Of course, telecom is disappearing and being replaced with VoIP (even at the behest of the regional Bells), so don’t look there much longer.

Except that there are so many programming jobs you can’t find enough wizards in the basement (I’m willing to bet real wizards can find DSP or GPU programming work today), and they want real money (and can get it for those jobs).

You can also get real engineers to do real engineering in software. The FAA demands bug-free software, and Boeing (and other aircraft makers) deliver (probably outsource/buy) it. As far as I know, medical devices have similar requirements. Basically, it comes down to defect-free requirements documents (the engineers won’t touch a requirements document that has *any* ambiguous parts; just imagine the groups Jack is describing getting away with that). Once that requirements document is *frozen* (more features? Start an entire new schedule. This code doesn’t have those features. Deal.), code can be written and tested. Note that testing should be a formality (except it is taken extremely seriously), and any bug caught there is a major scandal (you are expected to send bug-free code to testing).

This type of code doesn’t take wizardry, but it doesn’t inspire “surprise and delight” either. It pretty much involves taking requirements that are practically pseudocode, turning them into a proper language, and testing the result. Don’t expect it in consumer goods for any software unlikely to lead to megabuck lawsuits that can’t be avoided through careful incorporation and strategic bankruptcy (note that the huge payouts you see in the news are typically set aside by the judge or an appeals judge; we can’t have corporations bound by any responsibility).

So take your pick: low-cost, high-feature, high-bug code, or high-cost, low-feature, bug-free code (that locks in a ton of design choices several model years ahead). I know which system I’d like for my braking system, and I expect it will be made to the same high standards as an Aston Martin brake pedal if a gun isn’t put to the head of a corporate executive.

It isn’t just software. In my company, the enlightened upper management of the ivory tower has delivered XP to us, the ‘real’ engineers of purely mechanical systems. As such people are wont, they worship the process and believe a trained monkey can execute high quality designs. So far, our data says they can’t, and they’ve been several years behind in delivering their mediocre product, too.

I worked for a large company that had a slavish obsession with processes, to the point that the process began to supersede the results. The financial engineers in upper-level management emphasized the process over the individual, or even the results, in an effort to devalue the contributions of creative, and well compensated, technical workers.

Who was the astronaut back in the ’60s who got in a load of crap for saying during an interview, “I’m riding on an incredibly complex piece of equipment, every part of which was built by the lowest bidder”?

It’s not like anything has changed. And having dealt with more than a few of those ‘programming gods’ in my lifetime, I prefer the current system. Yeah, when the programming god is working well, everything is fine, dandy and wonderful. However, if he’s having a pissed-off day . . . .

Yes — they were going for a record launch pace. (I don’t recall if the ISS construction was underway at that point, or if it was just for NASA bragging rights.)

(Columbia, OTOH, was indeed all hardware–too many tiles lost, allowing the superheated plasma into the wing structure on re-entry, which led to catastrophic structural failure of the vehicle, IIRC. From the beginning of the program, those tiles were considered a weak link.)

I remember cartridge video games… on pre-internet consoles. The game either WORKED, or you got sh*t like E.T. on the Atari 2600 that was unplayable. There were no patches, no fixes… if your code wasn’t perfect and the game had killer glitches, word spread and sales died.

Now we’re all beta testers, doing the final QC for companies only interested in increasing dividends.

My Sega Genesis (original 1988 model that came packaged with Altered Beast) simply won’t die. It must be either too simple or too well-made to die from normal gameplay. I’ve had the thing since I was 5 and it’s always worked, even after sitting around for a few years gathering dust.

If I had instead been given a NES back then, I’d probably have had to replace the cartridge pins inside the thing a few times…stupid flawed insertion mechanism.

The 2600 was even more extreme. It didn’t even have a *frame buffer*, so the code had to be doing the right thing on every cycle; if the timing slipped, the scanline count got messed up and nothing usable appeared on the screen.

Even PS2 discs couldn’t be easily patched, and I suspect the same is still true of handheld devices.
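That “right thing every cycle” constraint can be sketched in miniature. This is an illustrative Python toy, not real 2600 code; the 76-cycle budget is what the 2600’s CPU actually had per NTSC scanline, but the kernel functions here are invented:

```python
# Toy model of "racing the beam": with no frame buffer, the CPU must feed the
# video chip fresh data on every scanline, inside a hard cycle budget.
CYCLES_PER_SCANLINE = 76   # CPU cycles available per NTSC scanline on the 2600
VISIBLE_LINES = 192        # visible scanlines in a typical NTSC display kernel

def frame_is_stable(cycles_for_line):
    """A frame only displays correctly if EVERY line's work fits the budget."""
    for line in range(VISIBLE_LINES):
        if cycles_for_line(line) > CYCLES_PER_SCANLINE:
            return False   # missed the beam; the count is off, the frame is garbage
    return True

# A kernel that always fits draws a stable picture...
assert frame_is_stable(lambda line: 70)
# ...but blowing the budget on even one line ruins the whole frame.
assert not frame_is_stable(lambda line: 70 if line != 100 else 80)
```

There is no “mostly works” with this architecture, which is exactly why sloppy code was impossible to ship on it.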

As I was going to post in response to a comment last night (before my login auth. cookie decided to expire while I was writing it), the “wizards” (as I would call 90% of the coders in my govt. IT shop) just need enough guidance to see the river and navigational buoys! Don’t throw bunches of stupid stuff, or “rules for rules’ sake” in our way, and we’ll do fine! If we’ve been around the place longer than you, and you start getting pushback, there is likely a good reason! Listen to our input!

Again, give us a chart to stay off the rocks, then stay out of the way and let us prove our worth!

When I was the sales VP of a small IT firm, I was also put in charge of hiring programmers for various projects. The owner was a programmer himself, and he thought it was a complete waste of time hiring greenies who only had 2-5 years of experience. The work was shoddy, and they lacked the algorithms for planning that only comes with experience.

I got fed up myself looking at resumes, to the point where I lost my temper and e-mailed everyone who sent me a resume that week to stop sending language experience in years, and start using hours instead. 10,000 hours of experience in C++ is much more meaningful to human resources than 5 years. You would think they mean the same thing, but they don’t.
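The gap the commenter is pointing at is easy to make concrete with arithmetic. The 2,000-hour work year and the 25% utilization figure below are illustrative assumptions, not numbers from the comment:

```python
# Why "5 years of C++" and "10,000 hours of C++" are not the same claim.
HOURS_PER_WORK_YEAR = 2000          # roughly 40 hours/week * 50 weeks

# Five years at a job where C++ was only a quarter of the actual work:
part_time_hours = 5 * HOURS_PER_WORK_YEAR * 0.25
# Five years spent almost entirely writing C++:
full_time_hours = 5 * HOURS_PER_WORK_YEAR * 1.0

assert part_time_hours == 2500.0    # a quarter of the implied experience
assert full_time_hours == 10000.0   # only this case matches "10,000 hours"
```

Two resumes can both say “5 years of C++” while representing a fourfold difference in actual practice, which is the distinction hours capture and years hide.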

TTAC has yet to address the effect of “Bookout and Schwarz v. Toyota.” This goes much further than what Jack brings up here.
http://www.safetyresearch.net/2013/11/07/toyota-unintended-acceleration-and-the-big-bowl-of-spaghetti-code/
Toyota lost this case, and as a result is settling these cases everywhere.
ABS and stability control can fail without being missed in most driving situations.
This brings to mind Wayne Knight as Dennis Nedry in Jurassic Park.

Toyota settles because settling is less costly and less brand-damaging.
Fact and what a jury believes may not be the same.
Better article:
http://www.businessweek.com/articles/2013-11-07/toyota-accelerator-lawsuits-keep-coming

The difference is I don’t know anyone likely to really push for a flying car. Old people and parents of non-driving kids have a real need for an electronic chauffeur.

Also, the whole “flying car” idea is stupid on its face. If it flies, why would you bother with roads? My guess is that someone will realize just how many rich people live in less-regulated countries and start building STOL (Short Take-Off and Landing) aircraft that can take off and land in a parking lot.

“It works like this: Instead of hiring five guys who really know their job at seventy bucks an hour each, you hire a team of fifty drooling morons at seven bucks an hour each.”

You forgot to add the “outsourced to India” part, but then it is surely less than $7 (or even $5) an hour for them. Indian outsourcers are famous for throwing bodies at programming rather than actual talent. The end result is bug-ridden code so monstrous, and so poorly documented in very broken English, that it becomes impossible to ever move the code to another vendor or take support in-house again, binding you to continued support and development with Tata or Wipro unless you want to write off the millions you already spent and start fresh with a different vendor. In the end, any cost savings are marginal: the low price of the quote isn’t adhered to at all, timeliness is non-existent (the longer they work, the more they can bill), and once they have you cornered with no options, the offshore support savings disappear as they claim double the support staff is needed.

Maybe that’s not how it happens in the automotive industry, but it does in mine. Even IBM workers are mostly offshore now, or in Canada, where it is easier to get work permits to bring people in from India. I won’t get into the L-1 and B-1 visa abuses by the groups operating in the US; you can visit techinsurgent.com for that.

I was hired by a private equity firm that was having problems with one of its portfolio companies. It turns out the company had outsourced its development to an Indian firm that thought it could maintain one set of code for all of its clients. They put in craploads of conditionals (and not just #ifdefs; we’re talking “if” statements with like 10 levels of nesting). It was a train wreck. I more or less got them fixed up and made them promise not to work with the outsourcing company again. They didn’t pay attention and got in trouble again. When they called, suddenly my schedule was full; I wasn’t going to try to clean up that mess twice.
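The anti-pattern described above can be sketched in a few lines. Everything here is invented for illustration (hypothetical client names and fee rules); the point is the structure, not the business logic:

```python
# One shared codebase branching on every client: the nesting deepens per client.
def shipping_fee_nested(client, weight):
    if client == "acme":
        if weight > 10:
            return 12.0
        else:
            return 5.0
    else:
        if client == "globex":
            if weight > 10:
                return 15.0
            else:
                return 6.0
        else:
            return 8.0   # default rate for everyone else

# One way out: isolate each client's rules so no cross-client conditionals exist.
CLIENT_RULES = {
    "acme":   lambda w: 12.0 if w > 10 else 5.0,
    "globex": lambda w: 15.0 if w > 10 else 6.0,
}

def shipping_fee(client, weight):
    return CLIENT_RULES.get(client, lambda w: 8.0)(weight)

# Same answers, but adding client #50 no longer deepens the nesting.
assert shipping_fee("acme", 20) == shipping_fee_nested("acme", 20) == 12.0
assert shipping_fee("nobody", 1) == shipping_fee_nested("nobody", 1) == 8.0
```

With ten levels of real-world nesting, every new client multiplies the paths you must re-test; isolating per-client rules keeps each change local.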

My current employer does not use contractors or outsourcing, but my previous one (a small division of a healthcare giant) did. The code coming back could typically be described as a train wreck. Pch101 may have a point, but I would think even difficult code has a pattern its authors could follow for maintenance. Most of the code I saw suggested either that its authors never went beyond 101-level coding in their educations, or that their attitude was: if the finished product was not “good enough,” they would get to start over with a new project and new billing codes (to ding you again).

No knock on India, per se, but one might care to read the N.Y. Times lead article today on all the fake and improperly compounded (often in unsterile conditions) drugs manufactured in India (and China!) that make up a goodly part of the U.S. drug supply. Unsafe drugs are a much greater problem than unsafe cars. I prefer as few pills as possible and as little vehicle tech as is reasonable. (All this is not to mention the fundamental design flaw of requiring a person to take his eyes off the road to perform a standard operation.)

This post is so true; you can read this kind of stuff on Alliance@IBM. When the business consultants turn your workplace upside down with lean and Six Sigma, you end up with outsourcing to India at 1/3 the cost (meaning you can hire three times the number of morons, who will simply omit the test cases their code cannot pass).

A gentleman who certifies to the ISO 9000 standard was talking to my Dad and me as we were cleaning fish at my Dad’s boat club one day several years back, and I’ve never forgotten his words: basically, ISO 9000, Six Sigma, etc., come down to standardized ways of shuffling paper! Nothing more!

This is true not only of programming, but of all other services. We have drawings, analysis, and other work done in India and China, and despite their low labor rates, they burn through the budget like a CA wildfire.

One thing in particular I’ve noticed: they refuse to learn. It is the polar opposite of Kaizen. We should learn something from every task we perform, and then use that learning to improve each new task. That’s how ‘wizards’ are made. None of the outsourced workers we use will ever become wizards, and precious few of the younger engineers in my office will, either, because they’ve drunk the Kool-Aid and their knowledge/skills plateau in their first few years on the job.

I sat in on a design review recently where a couple of these young guys listed objectives of a test. They then showed the design of the apparatus, and I instantly recognized it was physically impossible to extract what they wanted to measure. Instead of even doing a single calculation or free-body diagram to check, they just replied: “No, no, no, no no no nonono. It’s okay because …”

We’ve had openings in my group go unfilled for years now because we can’t find any candidates who are even suitably qualified for any sort of original work. They all expect to look it up in a book or procedure.

@redav: Being able to Google how to use a programming concept or function in a given language, or to see examples of how a design has been used in other situations or contexts, is OK; but it seems that what you’re saying is that they can’t APPLY what they glean from that Interwebz search in a wider context.

I can certainly see where that’s a problem!! But even younger workers still get it: a recent hire to my department will KICK ** MY ** A$$ in this regard eventually. Twenty-six years old, and he is constantly LEARNING as he goes (more like SOAKING IT UP like a sponge), and is able to apply the concepts instantly! A “wizard” in the making!

They’re starting to do this where I work – essentially, it’s a “What’s the BOTTOM LINE TODAY?” attitude. Broken code from half a world (and 10 or so time zones) having to be repaired by our local folks – who are going to take the hit for all of the miscommunication and simply bad product that WILL cause schedules to slip…
Basically, the “A Million Monkeys on a Million Typewriters will eventually turn out Shakespeare” method. (This is not a racist implication, BTW; it’s just the best analogy I can think of.)
It’s just the way to throw the cheapest resources at a problem so that the stock price is attractive to investors in the short term, and to hell with tomorrow. As long as EVERYONE ELSE is doing the same thing, you’re “competitive”.
It doesn’t change the fact that the “wizards” (who are ultimately responsible), are now on the hook to fix this stuff – so they fear for their jobs as well.

I witnessed one inter-departmental struggle. Team A said they could complete the software in two months. Team B quoted two weeks. Two weeks came and went, and Team B produced incomplete, crappy software.

But the lesson here is not that management should have gone with Team A. Management’s response was that at least they got something in two weeks. Had they gone with Team A, they would have gotten nothing.

So as long as crap remains better than nothing, this will be the state of affairs. But for me, nothing is better than crap! <— hope you all see what I did there. :)

It sounds like you have a bone to pick and use a recall to do it. The irony is that it contradicts your last complaint about the cost and features of automotive electronics.

This recall, in fact, answers your question about why car electronics cost much more than consumer electronics. The level of liability and the requirement of reliability in automobiles is much greater than that of a pocket GPS. That costs money. Google doesn’t have to pay to recall your phone because you cannot sue them for crashing your car (yet).

Cost aside, this article downplays the complexity of modern software. The software and electronics systems in these cars are often as complex as the car itself. In the case of this recall, we have a complex series of interacting software and hardware components that cause a problem under a very small set of circumstances. Do you expect the old software whiz to foresee every situation like that? What evidence do you present to back up your case that this is due to cheap programming? Well before Windows, I remember getting so used to Ctrl+Alt+Delete that I could do it in my sleep.

Regardless, the old days of ROMs involved the merest fraction of the code that newer systems require. That meant programmers could troubleshoot in much greater detail. Complex systems are never going to be entirely bug-free, especially when consumers (and TTAC reviewers) constantly demand consumer features at a consumer price.

Sounds like a pre-emptive recall rather than something that has actually happened. I’m sure the same could happen in most controllers, except Toyota got burned by UA.

Unintended acceleration, unintended stopping… if you believed these articles, you’d think Toyotas just erratically start and stop without their drivers being in control at all. The reality is, 99.9999999999% of all Toyotas just drive from A to B as intended.

This overheating issue may only happen after 25 years in the Arizona sun, when the Moon is aligned with Venus and Mars; it seems Toyota is just being overcautious.

It is just a software upgrade. I wish Mazda could upgrade their metal software to prevent corrosion, or Chrysler could upgrade their wheel software to prevent wheels falling off, or Ford could upgrade their color software to stop the paint from fading out…

I realize it is a hassle, but at least it gets fixed. It probably takes 15 minutes, with free coffee and cookies…

The ability to fix things with just a simple software update is really quite amazing. And really not a big deal at all. Soon the updates will happen over the cell data network and you won’t even have to take the car in. Perfectly fine with me.

Realistically, we COULD make cars with the intended (and never achieved) reliability of the Space Shuttle. But then they would cost $1B each. I’d rather have a car I can afford that has to be updated or repaired occasionally.

For how long do you expect it to run right, first time every time? Let’s be reasonable here: even axes and anvils wear out eventually. I’m sure someone could design a car that will require nothing at all for 20 years and 500K miles, but only Bill Gates could afford one. The reality is that it is an exceedingly rare event for a modern car to fail to start or to strand you on the side of the road. Today, a horribly unreliable car is one that lights up a warning light once in a while.

Back in the day, if you wanted the latest revision of a transmission, you got out your wrench set. Today, you upload software. I prefer to keep my wrenches in the toolbox whenever I can.

As for the end of support: the overwhelming majority of issues are going to be found in the first few years of a car’s life. No matter how well you test, you will not find every issue before release. Jack certainly has a point that software gets released before it is ready, but isn’t it better to at least have the ability to fix it after the fact? Otherwise, out come those wrenches again. Electronic support is no different from parts support; do you realistically expect to buy parts off the dealer shelf for your car for more than 10-15 years?

I have some pretty strong Luddite tendencies, but compared to some of you I feel like the man from the future.

krhodes1, you completely miss the point. I do not expect a car to never need service or for parts to never wear out.

The reality is we are already at a point where minimal maintenance keeps a car working fine through 200k miles or more. That’s great. The problem is we are adding stuff that isn’t as reliable, actually decreasing cars’ longevity.

New cars have technology ‘features’ that don’t work right. My 13-yr old car’s stereo has never frozen, crashed, needed rebooting, or glitched in any way. No, it doesn’t support internet connectivity, gracenote, or firmware ‘upgrades.’ Given a choice of that old stereo or the new ones that aren’t bulletproof, I choose the old one.

I don’t need complex infotainment systems, camera-controlled self-parking, steering assists to compensate for wind, automated wipers/headlights, self-adjusting seats, etc. I would gladly not have those features if they bring down the reliability of the car.

Not a big deal until the software update “fixes” a hardware problem by changing or reducing functionality in order to avoid replacing hardware, like when Honda decided to “fix” the hybrid battery failures in their Civic and Accord hybrids by reprogramming the software to use less of the battery’s charge capacity, preventing failures under warranty that would cost Honda money (and leaving owners with significantly reduced MPG as a result).

I sold mine (’06) before Honda came out with the secret firmware flash. Sorry to hear they screwed you with it.

Electronic controllers can also compensate for a poorly designed mechanical system. For example: what’s almost as good as electronic ABS? Taking the time to properly engineer the mechanical components of the brake system (brakes that are easy for the driver to modulate, a balanced front/rear bias, and a proportioning valve that actually does the job).

And yet electronics are the reason why modern cars start and run perfectly no matter what the weather. And why issues are far easier to diagnose than ever before. And they save your @ss when you have a brain fart and run into something, or someone else has a brain fart and runs into you. But if you want, you can always pick up a nice mid-’70s car with nothing electronic in it but the AM radio and enjoy. Enjoy the 15 mpg, 0-60 in 18 seconds, and the mystic dance around it to get it to start in the cold. And the tune-ups every 5K. And of course it will pollute the environment many orders of magnitude more. But enjoy that!

Electrical Engineers develop electronics. Regular programmers and computer engineers develop embedded systems. Understand that “the bar” for being an electrical engineer is significantly higher than for being a programmer.

EE/CE guys are usually much more skilled at being close to the machine, and they command a premium price, though competition from China/Korea is putting pressure on their profession just like Eastern Europe, India, and Asia-Pacific nations do for CS.

This is near and dear to my heart because (A) I love cars and (B) I write software for a living, though not embedded car software (or as we call it “firmware”).

Embedded systems work is hard, thankless, and very close to the metal. Students these days are taught way above the metal, in languages like Java and with resources that seem infinite in comparison to the old days. Most embedded systems guys remember scrounging for words of memory; new hires think nothing of malloc’ing multi-megabyte buffers.

Also, people get seduced by the ability to “push to production” all the time. Many of the sites you use every day follow a continuous deployment paradigm whereby a dev will code up a patch and push it to a staging system; a gazillion automatic tests are run against it, and if it passes all of those, it goes straight into production without a human seeing it. In that domain, it’s not a bad thing, but it’s challenging to work that way with embedded systems. You have to run lots of “simulations” of the hardware against your code, and your tests are only as good as your simulations. Manufacturers are building fewer and fewer prototypes and running fewer and fewer test cycles prior to release, because, as you mention, “we can always patch it in the field.”
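That “tests are only as good as your simulations” point is the crux, and it can be shown in a minimal sketch. The sensor classes and the None-on-glitch failure mode below are invented for illustration, not taken from any real automotive stack:

```python
# A CI gate that exercises firmware only against a hardware *simulation*.

class SimulatedTempSensor:
    """What the test rig models: readings are always a valid number."""
    def read(self):
        return 42.0

def to_celsius(raw):
    """Firmware under test: assumes the raw reading is never None."""
    return (raw - 32.0) * 5.0 / 9.0

def pipeline_passes(sensor):
    """The automated gate: green means 'ship it'."""
    try:
        to_celsius(sensor.read())
        return True
    except TypeError:
        return False

assert pipeline_passes(SimulatedTempSensor())    # green build, straight to production

class GlitchyRealSensor:
    """Real hardware that occasionally returns None on a bus glitch,
    a case the simulation never modeled."""
    def read(self):
        return None

assert not pipeline_passes(GlitchyRealSensor())  # the bug CI could never see
```

The gate is only as honest as the simulated sensor: every behavior the model omits is a behavior the pipeline silently certifies.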

Regarding the “morons” now working, there’s more nuance to it than that. The “Wizards” are notoriously bad at knowledge transfer, and hellishly defensive of the design decisions they made years ago. They also don’t scale: when you’re making one car with telematics, it’s one thing; when you’re making EVERY car with telematics, you need more devs, because you still have the same five-year product schedules. Companies react by hiring or sub-contracting conglomerates, who are, by definition, not Wizards, because they’re generalists rather than specialists.

Last, corporations view embedded systems as a cost center, not as a profit center, so they’re managed as such. Senior devs are backfilled by contractors, jobs are outsourced, etc. What looks good on paper generally yields poor results.

Jack,
I also have to confess that I once used to do assembler software development for Z80 based embedded systems and I agree that the agile “1000 monkeys banging away on 1000 typewriters” theory of software production is a step in the wrong direction. But there are good reasons why software engineering has changed:

1. Projects have become much more complex.
The complexity of some software projects simply rules out the use of solitary wizards. From increasingly complex functional specs to compliance with third-party governance (PCI DSS, HIPAA, SOX, etc.), projects have simply gotten too big for just a few people to hold everything in their heads.

2. Single points of failure
Why would a company let its $50 million development project depend on the work of one (or even four) individual(s)? That’s a risk only a startup would take.

3. IP leaks
“Stuff only the wizard knows” is not really company intellectual property. Forcing everything to be documented does create more work (and usually less efficient code), but it also allows for better management of company IP. So when the wizard leaves because the foosball room got converted to meeting space, not much is lost.

I have absolutely nothing but respect for people who code against the metal. The closest I’ve ever come was some C development on a Palm Pilot, which is radically different from C development on any modern system.

I’ve had the misfortune of managing a team of overseas developers. I’m not sure if it’s universally so, but all of the coders I worked with were… not very good. I saw a lot of resume inflation, and absolutely no imagination. Good coding is like art, it’s not something everyone can pick up. You can’t just code through a spec, you have to understand everything that’s going to go into a product, how it’s going to be used, etc. Most people just don’t have that ability.

People love to bash our educational system, but I dunno… it seems to me that we end up with much better coders than the rest of the world. Raises for teachers all round, IMO.

I have the utmost respect for engineers, along with the stuff they have to put up with: incompetent stylists, idiot bean counters. It ain’t easy seeing something you’ve worked on for a good while get ruined, or dealing with garbage designs like the Juke.

However, I have no respect for programmers. Far too often I’ve played games, used phones, or used other computer-enabled tech, and they’ll bug up. It’s gotten to the point where I wouldn’t mind giving today’s programmers a piece of my mind about their irresponsibility.

That being said, this article’s helped me understand what’s going on better: companies hiring far too many overly eager pin-heads and making them work in harsh conditions.

Gimme the arrogant, moody wizards. They may not be fun to hang around with on a bad day, but they’ll have far more skill and care than a naive, spineless student.

Trust me, we know there’s a problem, and the problem is cyclical usually correlated with the size of codebases. This happened before: Structured programming, OOP, Convention-over-configuration frameworks, etc. The wetware (us) hasn’t progressed much as far as holding things in our heads short and long-term, so we’ve developed methods to modularize. However, like scaffolding, those methods collapse under their own weight eventually, and we build new, better scaffolds.

Right now, the “agile” stuff Jack’s harping on is a reaction to how requirements and constraints will change throughout a project. We’re building tooling that can keep up, but it’s not consistent across the industry. Most of the money’s in Web and Mobile, so those get the attention, and embedded is still using runtime-efficient techniques that don’t do terribly well with huge codebases.

Re: “Right now, the “agile” stuff Jack’s harping on is a reaction to how requirements and constraints will change throughout a project. We’re building tooling that can keep up, but it’s not consistent across the industry. ”

This. With the waterfall system of the “glory days” it can be over a year from spec completion to product ship. That may have been acceptable when tech moved slower but it isn’t now.

Also, relying on wizards is unsustainable. The wizard whose work Jack admires, the bulk of which he developed in the 1990s and is still lovingly caring for and improving, is approaching retirement age. Oh, and wizards aren’t very good at skills transfer.

I work in the kind of wizard-rich shop Jack reminisces about. In the last three years, half of them have retired, and most of the remaining ones are or soon will be eligible to. My product is responsible for hundreds of millions in revenue and is well on its way up the creek due to past reliance on wizards. We hire newer guys to replace them, but even with willing mentoring from the wizards (who know they will be retiring soon, so have no territoriality), the newer guys don’t catch on and don’t last. They always find a way to get assigned to newer products that use more in-vogue technology, if they don’t quit the company outright to go work for a web or app startup.

The other side of the coin is that “when you hit the brakes, the brakes work” spec can be controlled, written, and tested in engineering-based development systems, and then work *every* time the driver hits the brakes. Using agile and similar fad-based development means that after you notice a bunch of dead drivers, you have two guys trying to figure out why the brakes aren’t working.

I can only hope that ECU and similar controls are run by engineering, and have absolutely no connection (either in the manufacturing organisation or electrically with the exception of a highly isolated bus to send information back to the driver) to the infotainment division. This really won’t happen, but maybe someone (Honda? Tesla? Hyundai?) will get it right and make cars that work.

You’d better believe people are trying to adapt manufacturing processes to “manufacturing” software. Lean/Six-Sigma, etc. Looks great on a slide, but doesn’t work in practice. “Operational Excellence” (read:efficiency) *can* work if you’re doing v2, 3, …, N of the same codebase, but that’s rarely what’s asked.

Fascinating article and comments, with no overall agreement between folks who look at projects from opposite stances. I don’t know whether to be happy or concerned.

Nothing the average man can do about errant code, so I think I’ll just try and forget about the possibilities of tragedy and trust to luck, just like we all did before collapsible steering columns and seat belts.

Recalls usually represent an effort to reduce the consequences of the worst-case scenario of failure, not the actual risk or rate of failure.

We have a lot more recalls today, largely because automakers can’t afford to ignore these things. Now there are governments across continents constantly monitoring this stuff, as well as consumer watchdogs who demand it.

As one example of this, I would suggest that TTAC compare the number and amount of penalties handed out by NHTSA during the last Bush administration to the sanctions imposed during the Obama administration. A defect as blatant as the Ford Pinto fuel tanks could not happen today.

Agreed. I would guess that recalls have gone up since the unintended acceleration debacle, especially at Toyota. The increasing rate of recalls likely has less to do with an increasing rate of problems, and much more to do with the fact that manufacturers, especially Toyota, are more concerned than ever about massive lawsuits and hyperventilating news anchors.

Jack, you went down the software (firmware) path as an explanation for this problem, but I’ll offer another:

Maybe it’s really a hardware problem caused by some components not meeting spec. It’s a lot cheaper to re-flash the ECUs to run differently than it is to replace the hardware. Personally, I think a firmware problem would have appeared sooner than four years in (in the case of the Prius). Sounds like component aging to me, or corner cases in the thermal environment coupled with aging.

That’s an excellent point. The software is only as good as the hardware. Given the pressure on suppliers to meet price points and achieve minimum spec, I cringe at the thought that many otherwise durable vehicles will be too expensive to repair by replacing degraded electronics, long before their useful life has been reached. There were companies that rejuvenated older cars with buggy electricals by producing entire replacement wiring harnesses for popular models, particularly British sports cars. It’s much more complicated now, but the payoff for a company doing something similar could be huge.

I am a 15+ year software developer/architect (mostly enterprise Java) who is in the middle of a career change from consultant to business owner. I concur with this article’s rant, and have watched software geeks evolve from revered gods to interchangeable resources. The truly gifted technical people have mostly retired, and those that didn’t get tracked into an “architect” role where they guide resource allocation but lack real power.

These days it is project managers in charge, and they are mostly about power, not technology. Many decisions are made based on which vendor/outsourcing firm gives the greatest kickback. I have seen this both as a vendor and as a client. Corruption runs rampant; I have even seen a manager in charge of the database of a publicly traded company get caught skimming money with the full complicity of the DBA team. This was never made public.

While I may sound cynical and disillusioned, I am very bullish on the future for the wizards. The complexity of business systems is growing exponentially, and only the very smart people, with the help of automation, will understand it. Mark my words: the wizard’s day shall come again.

I’m afraid that even wizards may reach their limits in many cases. In fact, most common products (such as tablets, cellphones, and desktops) are not debuggable by anyone. The only thing one can do is configuration control and finding the change that broke the box.

Wow, I didn’t realize there were so many software geeks on TTAC. Jack writes on a subject close to my heart, and I agree with him. Yes, some wizards aren’t good at knowledge transfer and some are prickly. But some are really nice guys — Steve Wozniak comes to mind.

Those wizards have provided decades of reliable software, and that in itself is worth a lot. Their retirement presents the problem of knowledge transfer, but are the alternatives mentioned by Jack and in the comments (pair programming, eXtreme programming, scrum) an answer? Those techniques just make the manager’s job easier; they won’t improve software.

What will improve software appears near the end of the post:

“Will they resurrect the wizards? Bring the programming in-house? Restore pride to the profession? Hell no.”

In other words, bring in people who really care. Sadly, things have to get worse before those options will ever be considered.

>> Does anyone have evidence that software has gotten less reliable over time?

I’d be interested in a formal study too. Right now, evidence is only indirect. That agile, XP, pair, scrum have appeared tells me there’s a problem. And then there’s personal experience we all can relate to.

In addition to reliability, we should also be interested in usability. That’s harder to measure since it’s more subjective, but usable software would lead to fewer human errors, and I’d argue fewer machine errors, if only because the programmer cared enough to make the software usable.

Back in 2002, MIT Technology Review published “Why Software Is So Bad”:

An honest effort, but they missed the mark. Their tagline was “For years we’ve tolerated buggy, bloated, badly organized computer programs. But soon, we’ll innovate, litigate and regulate them into reliability.” It is now 2014 and we are no closer to reliable software.

Ask how many lines of code it takes to do anything these days. If a modern computer crashed after executing the same number of lines of code that a Win3.1 (or earlier) machine would run before crashing, you would barely notice any time pass between power-on and the crash.

The problems with software creation have been carefully documented since 1975 (the publication of the Mythical Man Month) and the answers haven’t changed significantly since then: smaller teams, longer schedules, fixed requirements. Since management (and to a certain extent, customers) aren’t willing to pay for working software, they use their own methods which deliver software faster, cheaper, and more buggy.

The Mythical Man-Month offers “no silver bullet” as a conclusion. I find that false; there has been a cornucopia of silver bullets. The thing is that each “silver bullet” allows programmers to create n times more code with the same number of bugs, and then management pushes for more features in less time with the same number of bugs. Not so bad for the infotainment module; absolutely insane for the ECU/braking/airbags.

Part of the problem is Moore’s Law. Computing power has gone up enough that programmers don’t have to write concise, elegant code the way they had to in the early days.

The Apollo Program’s guidance computer had roughly 2K words of RAM, 36K words of ROM, and a clock speed of 1.024 MHz. It had just four 16-bit general-purpose registers.

That works out to about a quarter or half of the computing power of a base IBM PC when it first came out. I doubt many of today’s production programmers could write code for an IBM XT, let alone for the Apollo mission.

Also, when real punch cards had to be punched and collated, then compiled and run, a small error could waste a large amount of time.

FWIW, I also think old time doctors had better diagnostic skills because they were less reliant on technology and tests.

In some areas like computer vision and image recognition, every bit of machine power is needed and there are still programmers handcrafting critical code.

It’s also much, much more complicated than the old 8088 days, even with higher-level languages. Now we have parallelism to deal with: thousands of CPU cores processing data simultaneously. There’s multi-threading as well. It’s a much more complex environment if you’re at the bleeding edge.

I’ve done assembler on Z-80s and 8088s, and now work with GPUs (NVIDIA & Adreno) and Snapdragon 80x SoC-based systems. Even using a higher-level language like Julia, it’s a far more difficult programming environment than the days when we were dealing with memory and computing resource restrictions.

Funny thing: with your own assembler code you almost always know *exactly* what is going on in the CPU (well, with those old 8088s; not so much with an i7). Higher-level programming, with all those APIs (or device registers when low-level coding, which work but are poorly documented), basically gives you something that may or may not work as documented, if documented at all.

There’s a huge difference. Chips are *engineered*. A verification engineer made sure every single transistor does what it is supposed to do. When something goes wrong (think the Intel Pentium FDIV debacle of 1994) it is international news; that one even made Dave Letterman’s Top Ten (note for today’s kids: that was similar to what “going viral” would be today). For an API to go wrong* would be business as usual. You just don’t expect software to be bug-free.

Can’t argue the issues of multithreading/multiprocessing. Of course, my first experience with it was writing my own multithreading program in assembler (and writing the preemptive task switcher in assembler for it, natch). Later I learned about the wonders of queues, and now understand that if you have a wizard properly design the queueing system and build your multiprocessing around producer/consumer queues, you shouldn’t have significant problems. If you have to share memory, may the FSM have mercy on your soul.
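The producer/consumer arrangement described above can be sketched in a few lines of modern Python (an illustrative sketch using the standard `queue` and `threading` modules, not the commenter’s original assembler; the squaring step is a stand-in for real work):

```python
import queue
import threading

def producer(q, items):
    """Push work onto the shared queue, then a sentinel to signal completion."""
    for item in items:
        q.put(item)
    q.put(None)  # sentinel: tells the consumer there is no more work

def consumer(q, results):
    """Pull items until the sentinel arrives; the queue is the only shared state."""
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item * item)  # stand-in for real work

q = queue.Queue()
results = []
t_prod = threading.Thread(target=producer, args=(q, range(5)))
t_cons = threading.Thread(target=consumer, args=(q, results))
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()
print(results)  # [0, 1, 4, 9, 16]
```

The point of the pattern is exactly what the comment says: the threads never touch each other’s memory directly, so the queue’s internal locking is the only synchronization anyone has to get right.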

* you can’t imagine how happy I was to find out that Python’s [2.x’s] SHA hash module returned SHA-0 [insecure] instead of SHA-1 [more or less secure, a bit better than MD5]. Not documented at all, either.
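A cheap defense against that kind of surprise is to check a hash module against a published test vector before trusting it. A minimal sketch in modern Python (the vector below is the well-known FIPS 180 SHA-1 digest of "abc"; the old 2.x `sha` module in question is long gone):

```python
import hashlib

# Published FIPS 180 test vector: SHA-1("abc")
EXPECTED_SHA1_ABC = "a9993e364706816aba3e25717850c26c9cd0d89d"

digest = hashlib.sha1(b"abc").hexdigest()
if digest == EXPECTED_SHA1_ABC:
    print("matches the published SHA-1 vector")
else:
    print("does NOT match -- this is not SHA-1:", digest)
```

A mislabeled implementation (SHA-0 posing as SHA-1, say) fails this check immediately, regardless of what the documentation claims.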

The ECU in the Saab 900 APC had multiples of the computing power of the Apollo Guidance Computer.

Either IBM or the Watson family donated a mainframe computer to Brown University in the early 1960s, and the University built a building custom-designed to house it. The building occupied one corner of a city block. It was spacious and airy inside, with a large double-height lobby. Inside, there were rows of key-punch card machines and readers and tape machines, and the actual computer occupied the rear of the building.

That computer, about which a huge fuss was made, had less RAM than a first-generation Apple Macintosh.

About which I always wondered: why did Apple think it had to license and then buy the McIntosh hi-fi company’s trade name, which is spelled differently? The mere sound of it? To me it looked like an abundance of caution, and I did help litigate a few IP cases involving confusable trademarks. (The trade name they should have been more leery of was “Apple.” When they brokered a deal with the Beatles, they did not foresee that a computer company would eventually sell music by download.)

They are. What, you wanted someone who paid for both the supercomputer and the programming to do something other than predict the next tick of a stock and buy or sell milliseconds ahead of it? Too bad.

What the Saab doesn’t have, that NASA’s systems have had, is layers of redundancy. The automakers seem to be following the Christmas tree lights strategy: when one goes out, they all go out, and you have to find the one bad light. If the re-flashes are adjusting to component deterioration over time, as SCE to AUX suspects, then some redundancy might help. But even NASA has had to transmit workarounds for failed components, the equivalent to re-flashes, so continuous updates might be the best way to go anyway, at least until somebody comes up with a solution for component degradation.

“Part of the Voyager probes’ longevity comes from the fact that they were robustly built and included plenty of redundant components. Even having two machines to accomplish the same task is something that NASA doesn’t do much of these days (Imagine if we had two Curiosity rovers on Mars).”

The dynamic of wizard vs. manager is as old as time itself. It’s merely run into a recent uptick in relevance due to the surge in managers being standardized by business schools, and in companies’ willingness to trust those schools to produce managers instead of doing that training and selection themselves. Young hotshots don’t work well with established hotshots/wizards, especially since most companies with resources don’t have owners in their day-to-day to remind managers that they aren’t the primary producers of product, and to remind the wizards that playing nice is also something they get paid for. None of this is IT-specific.

The rise of HR as a financially crucial sub-fiefdom of management has probably contributed as well.

tedward> The dynamic of wizard vs. manager is as old as time itself. It’s merely run into a recent uptick in relevance due to the surge in managers being standardized by business schools and in companies’ willingness to trust those schools to produce managers instead of doing that training and selection themselves.

The obvious repercussion of this is that this new crop of managers lack much domain expertise, which is rather critical when making decisions about highly technical product development.

—
In the larger picture, all of this is the product of an economic system which prioritizes short term financial success over long term consequences. Bean counting will make the next few quarterly results better for wall st (you know, the stuff that matters) than r&d. That means money guys at the top who naturally hire those like them as subordinates and it’s downhill from there.

For a while inertia will keep it going as marketing budgets and various gimmicks supersede product development, but eventually everyone is left wondering how the new upstart came to surpass them as the wheels fall off the wagon and the cycle starts anew.

So to all those above wondering how the alphabet soup of business BS came to pass at their current job, consider it part of the natural circle of life for capitalist finance.

Jack: I didn’t know you had programming chops, but I am not surprised. I’ve been in software for 16 years. Great developers, I mean really, really good developers, the best grade-A talented ones, the ones I buy the Macallan 18 and Edradour 1993 cask-strength grey-market Scotch for because when the shit hits the fan they are the ones who can fix it: those devs all have one thing in common.

They are all, to a man, also musicians. Talent levels notwithstanding, they all play instruments, are in bands, make pilgrimages to the Blues Hall of Fame in Memphis, and write some of the smartest code I’ve ever seen.

So not surprised, but it confirms my own theory of the correlation between reading and playing music and coding.