Posted
by
Soulskill
on Tuesday February 07, 2012 @02:25PM
from the to-infinite-loops-and-beyond dept.

astroengine writes "So it turns out U.S. radars weren't to blame for the unfortunate demise of Russia's Phobos-Grunt Mars sample return mission — it was a computer programming error that doomed the probe, a government board investigating the accident has determined."
According to the Planetary Society Blog's unofficial translation and paraphrasing of the incident report, "The spacecraft computer failed when two of the chips in the electronics suffered radiation damage. (The Russians say that radiation damage is the most likely cause, but the spacecraft was still in low Earth orbit beneath the radiation belts.) Whatever triggered the chip failure, the ultimate cause was the use of non-space-qualified electronic components. When the chips failed, the on-board computer program crashed."

Not even necessarily low level. I once had a weird intermittent problem in a PHP-driven web system. After a couple of weeks of diagnosing (largely trying to find a case that could more-or-less reliably tickle the bug), it turned out to be an interaction between a bug in the Red Hat release of the day (2001) and a bug in the particular CPU we were using. PHP code just happened to trigger it under certain conditions. Since the box was at Level 3, we had to drive an hour down there and replace the machine.

And long ago I worked on Perq workstations, which had a stack-machine CPU (the CPU was a 15x15 inch board filled with TTL). The expression stack was four chips. The system was designed around the chip spec - NEVER DO THAT!!! Chips cannot be depended on to run at exactly the design spec - some are slow, some are fast. As a result, every CPU had to be tested at installation with those four chips inserted in different locations, essentially in order of speed. If a fast one came after a slow one in the slots, the CPU would barf. Basically someone just kept swapping chips around until it worked.

We were just discussing some of the remarkable repairs done in software to accommodate problems in various interplanetary probes - truly amazing stuff.

But on the chance you were serious, depending on where that chip was, it may have been beyond something manageable by software.

A chip in a power controller could take down any or all of the processor components, or render access to control circuits impossible.

The linked article also states

Everything was working well with the spacecraft immediately after launch, including deployment of the solar panels, until the command to start the engines was issued. When that did not happen, the spacecraft went into a safe mode, keeping the solar panels pointed to the Sun to maintain power.

How many times do you suppose they actually tested engine start IN THE SPACECRAFT? I'm guessing ZERO.

non-space qualified parts being used in some of the electronics circuits. This is a design failure by the spacecraft engineers that might have been caught had they performed adequate component and system testing prior to flight. But they did not.

So design failure, due to radiation, prior to the craft getting near the strongest radiation belts. Unbelievable. Occam would be skeptical.

This sounds to me like some on-board internal source of radiation, or induction, or simple overload, fried a chip somewhere in some un-specified circuitry, most probably in the engine controls. This seems far more likely than an external radiation source given the shielding the physical design would provide.

I doubt space qualification made any difference at all. The window for space radiation in the brief time it was operational was small. Rather, I suspect under-spec parts, overvoltage or high current draw, or internal shielding oversights.

How many times do you suppose they actually tested engine start IN THE SPACECRAFT? I'm guessing ZERO.

I'm sure they tested the engine multiple times. I'd figure the stress of the launch (vibrations, etc.) caused something to fail, either due to shoddy construction or small debris falling onto something.

I doubt space qualification made any difference at all. The window for space radiation in the brief time it was operational was small.

Exactly. I doubt all those laptops on the ISS are radiation hardened, but they last quite a while anyway.

How many times do you suppose they actually tested engine start IN THE SPACECRAFT? I'm guessing ZERO.

I'm sure they tested the engine multiple times. I'd figure the stress of the launch (vibrations, etc.) caused something to fail, either due to shoddy construction or small debris falling onto something.

I'm sure they tested the engines too. It's probably a tried and true engine. The Russians tend to make very good motors.

But I seriously doubt they tested it in the space craft using the space craft's wiring harness. They used the harness on the test bed platform.

There are many aspects to radiation hardness. Radiation can flip one or more bits, resulting in bad data or program crash. Radiation can cause latchup, which will last until power is cycled; if the design is bad, latchup can fry a part. Rad hard parts are designed to be resistant to latchup. Really bad radiation can damage a part that isn't even powered.

A laptop can live through bit flips, and with luck it can live through latchup and be functional after power cycling. Spacecraft control generally has to be always on; power cycling is not an option. Thus the design requirements for spacecraft control must be much stricter.

Actually, darwin is kind of right. The difference between 120nm transistors and 45nm transistors is quite substantial. Between random radiation, natural wear due to thermal cycling, and periodic electrostatic discharges from handling and plugging in connectors, it is not surprising that the older chips are sturdier in general.

But he may have just invoked the "They don't make them like they used to" logical fallacy, because sure there are some 20-year-old SNES machines, but how many of them died 2 years after production? Compare that percentage to the figure for PS3's and you have your answer.

As another EE with experience in rad hard space qualified design, he's not being self-contradictory. He's spot on.

If your CMOS structures are prone to latchup in the presence of single high energy events, then shielding does you no good. The amount of shielding necessary would more than consume the entire payload mass budget. Adding insufficient shielding just creates showers of secondary particles, each with more than enough energy to cause latchup alone, therefore rendering you at a statistical loss compared to no shielding whatsoever.

With this in mind, the answer is to design the CMOS structure so that shielding is unnecessary. For example, build your circuits on bulk insulators instead of bulk semiconductor.

Just because you can't understand it doesn't mean he's self contradictory. You just missed his point. And then attacked him.

100 times smaller in area per bit? Which makes it 100 times more susceptible,

Or 100 times less susceptible, assuming a random dispersal of cosmic rays. Smaller targets. Depends on the density of the rays, I suppose.

But in any case, that number of errors WOULD be noticed if it were in fact occurring and going undetected and uncorrected by the hardware. Just about zero memory goes unused in a modern computer. They strive to use it all in one way or another. Unused memory is wasted memory.

Computers correct for these errors: parity checking, either in hardware or software. You can compare
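For the curious, single-bit detection via a parity bit is about as simple as error checking gets. This is a minimal illustrative sketch in C (the bit layout here is made up for the example, not any particular memory controller's):

```c
#include <stdint.h>

/* XOR-fold a byte down to one bit: returns 1 if the number of set
 * bits is odd (i.e. a parity bit is needed to make the total even). */
static uint8_t parity_bit(uint8_t b) {
    b ^= b >> 4;
    b ^= b >> 2;
    b ^= b >> 1;
    return b & 1u;
}

/* Store a 7-bit payload with its even-parity bit in the MSB. */
static uint8_t parity_encode(uint8_t payload7) {
    uint8_t p = parity_bit(payload7 & 0x7Fu);
    return (uint8_t)((payload7 & 0x7Fu) | (p << 7));
}

/* Returns 1 if the stored byte passes the parity check: any single
 * flipped bit makes the total bit count odd and fails the check. */
static int parity_ok(uint8_t stored) {
    return parity_bit(stored) == 0;
}
```

Note the limitation the thread hints at: one parity bit only detects an odd number of flips and cannot tell you which bit to fix, which is why ECC goes further.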

Which makes me think of something I've been wondering for a while: now that Intel has quit making the 386, are we gonna be seeing more failures like this in the future? Because from what I understand, Intel kept making the 386 rev for so damned long (the last chip rolled out in '09, IIRC) because its large die area and primitive but functional design made it trivial to harden for military and aerospace use. Again, from what I've been told, due to die shrinks a modern chip, even something as old as the P3 or P4, would be hell to harden, simply because its smaller dies and tighter tolerances would make it hell to protect from bit flips caused by cosmic rays, not to mention from radiation exposure outright frying the chip.

So are there any modern chips that would be easy to harden without being insanely expensive? Atom? AMD Geode? I'm sure with its GPU and dual cores Bobcat would be right out; maybe Via C3s? While ARM would be a good guess, its die shrinks to fit in mobile phones would probably make it insanely expensive to harden, yes? So while I'm sure the military probably bought a warehouse full of 386s before Intel shut down, what happens when they are gone? Do we have a viable modern chip that can withstand the rigors of space without costing insane amounts of money?

I asked one of the main AVR designers from Norway if it was OK to set a configuration, or a constant in RAM, during initialization and trust with 100% certainty that it would not change during operation. He said that even on the world's cleanest power supply, and absent any EMI, he would still NOT recommend it.

If you run 10 AVRs for 1000 hours you will see bits flipped. Many times it only affects a RAM variable that is constantly being recalculated anyway, so it causes little if any disruption to the operation of the device.

It really sucks when it's something critical like a timer counter control register.

If anyone would like to duplicate my testing, I'd be glad to send code, but all you have to do is set everything to a known value, and then read it over and over until it changes. It doesn't take as long as you think (or hoped) it would! It also gives you a good idea of how well your PCB takes care of your micro.

Always check, and if necessary, reset your hardware configs during runtime! Those "all of a sudden it started acting up, so I turned it off and back on again and it was fine" problems just disappear!

I still remember the time my CON_0 register read 8! Although I'm sure it'll happen again, you'll never notice it!
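The test described above is easy to sketch. This is a hypothetical reconstruction in C, not the poster's actual code; on a real AVR the buffer would be spare SRAM and the "register" an actual device register address rather than a plain pointer:

```c
#include <stdint.h>
#include <stddef.h>

#define PATTERN 0xA5u

/* Fill a buffer with a known pattern at startup. */
static void pattern_fill(volatile uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len; i++) buf[i] = PATTERN;
}

/* Scan the buffer from the main loop and return the index of the
 * first corrupted byte, or -1 if everything still matches. */
static long pattern_scan(const volatile uint8_t *buf, size_t len) {
    for (size_t i = 0; i < len; i++)
        if (buf[i] != PATTERN) return (long)i;
    return -1;
}

/* Defensive config refresh: if a control register has drifted from
 * the value set at init, write it back and report the flip. */
static int config_refresh(volatile uint8_t *reg, uint8_t expected) {
    if (*reg != expected) {
        *reg = expected;
        return 1;   /* a flip was caught and repaired */
    }
    return 0;
}
```

Calling `config_refresh` on the critical registers each pass through the main loop is exactly the "always check, and if necessary, reset" discipline the poster recommends.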

Well... if you read TFA (or actually the first TFA linked), it is clearly written: "In a report to be presented to Russian Deputy Prime Minister Dmitry Rogozin on Tuesday, investigators concluded that the primary cause of the failure was 'a programming error which led to a simultaneous reboot of two working channels of an onboard computer' [...] Likewise, cosmic rays and/or defective electronics are not the leading suspects behind Phobos-Grunt's demise."
The summary is clearly bolting together two contradictory reports.

In a report to be presented to Russian Deputy Prime Minister Dmitry Rogozin on Tuesday, investigators concluded that the primary cause of the failure was "a programming error which led to a simultaneous reboot of two working channels of an onboard computer," the Russian state-owned news agency RIA Novosti reported.

However, the third link says nothing of the sort. It sounds like TFS is just a mishmash of conflicting theories from different articles.

To follow up, the article saying that it was a chip failure is dated yesterday, while the article claiming it was a programming failure is dated today. Presumably, this is new information to shoot down the previous claims, but TFS (in typical Slashdot "editorial" style) fails to actually make that distinction, and puts both claims together as part of a single summary.

Chip failure, but it was a software error that led to not handling the chip failure gracefully. Space-qualified stuff has to be much more redundant and capable of handling failures of multiple components.

A while back I read some interesting discussions between satellite engineers about the tradeoffs between space qualified and not space qualified chips. From what I remember you gain resistance to radiation, but lose in other areas such as resistance to physical damage (e.g. a solder joint coming loose due to launch vibrations) because they're so far behind the state of the art that you may have to put a lot more chips on the same circuit board.

I'm not a satellite engineer, but wouldn't it be easy enough to just install a lead shield around the PCB to protect from most radiation? As long as the shield's not too thick, it shouldn't add too much weight, especially compared to using older-technology chips that'll take up more board space.

I'm not a satellite engineer, but wouldn't it be easy enough to just install a lead shield around the PCB to protect from most radiation? As long as the shield's not too thick, it shouldn't add too much weight, especially compared to using older-technology chips that'll take up more board space.

Well, that depends. Even on Earth's surface, we have to use ECC in more demanding applications. In LEO, you lose the protection of the atmosphere, but you still have Earth's rather strong and large magnetosphere. But this was an interplanetary probe. Once you get out of the radiation belts, interstellar and intergalactic particles start hitting you. You can't protect from those with a lead shield of any reasonable size. Pretty much the only way is simply to make the chip simple and rugged and design it with components (transistors) large enough that a particle flying through won't bother you much. Or add redundancy. Or both, if possible (that's the usual case).
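To make the ECC mention concrete: a Hamming(7,4) code corrects any single flipped bit in a 7-bit codeword. Real ECC memory uses a wider SECDED variant of the same idea; this is just an illustrative sketch in C:

```c
#include <stdint.h>

/* Encode a 4-bit nibble into a 7-bit Hamming(7,4) codeword.
 * Bit layout (1-indexed positions): p1 p2 d1 p3 d2 d3 d4. */
static uint8_t hamming_encode(uint8_t nibble) {
    uint8_t d1 = nibble & 1, d2 = (nibble >> 1) & 1,
            d3 = (nibble >> 2) & 1, d4 = (nibble >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* covers positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* covers positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* covers positions 4,5,6,7 */
    return (uint8_t)(p1 | (p2 << 1) | (d1 << 2) | (p3 << 3) |
                     (d2 << 4) | (d3 << 5) | (d4 << 6));
}

/* Decode a codeword, correcting a single-bit error if present.
 * The syndrome value IS the 1-indexed position of the bad bit. */
static uint8_t hamming_decode(uint8_t cw) {
    uint8_t b[8];
    for (int i = 1; i <= 7; i++) b[i] = (cw >> (i - 1)) & 1;
    uint8_t s1 = b[1] ^ b[3] ^ b[5] ^ b[7];
    uint8_t s2 = b[2] ^ b[3] ^ b[6] ^ b[7];
    uint8_t s3 = b[4] ^ b[5] ^ b[6] ^ b[7];
    uint8_t syndrome = (uint8_t)(s1 | (s2 << 1) | (s3 << 2));
    if (syndrome) b[syndrome] ^= 1;      /* flip the bad bit back */
    return (uint8_t)(b[3] | (b[5] << 1) | (b[6] << 2) | (b[7] << 3));
}
```

The trade-off discussed in the thread is visible here: 3 check bits per 4 data bits is a lot of overhead, and it only buys single-bit correction, which is why "big transistors plus redundancy" stays attractive for spacecraft.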

Many chips are never designed to meet military or space specifications: the extra certification is very, very expensive and there are design compromises between performance and ruggedness. Furthermore, the testing you suggest for space qualification, if failed, results not in a mil-spec component but a component that has been destroyed by the test. In some cases, samples of a given batch are heavily tested to verify the batch, but those devices are considered damaged and not sold.

Some rad hard type devices are of no interest to consumer design due to the poor performance caused by the compromises involved in achieving hardness. Rad hard devices aren't designed as often due to the small market, and the design is more difficult and takes longer, and certification takes time, too. Thus, the devices are older technology. Additionally, rad-hard parts (the actual transistors inside the ICs) are bigger physically than conventional devices, which also means they can be fabricated on older technology equipment. Thus, with respect to current commercial technology, space-qualified devices are often older technology.

The second link in the summary leads to an article that is internally contradictory. That page from Discovery News is all over the place. Which is not surprising, given the bio of the author [discovery.com]:

Klotz came to Brevard County, Fla. (aka The Space Coast) as a copy editor for the local paper 24 years ago. She switched to writing because it was obvious the reporters were having way more fun than the editors for the same money. After a year or so of writing for the business section, Journalism major trying to wear the big girl shoes.

To my knowledge, only the Apollo Guidance Computer has ever truly achieved hardware failure tolerance. The Apollo 11 LM radar fault overloaded the computer, but it was able to continue due to restart logic built into the AGC that could pick up critical tasks from where they were when the computer restarted and drop non-critical tasks, all with a very small fraction of the capabilities of current technology (although I think, from memory, they were able to fit 2 transistors on a single chip!). The AGC is really a marvel of (past) engineering and computer science. The reliability problem alone would be insurmountable with today's garbage. Probably part of the reason why we haven't been back there since.

Mil spec isn't proofed against hard radiation; it handles some soft radiation and EM, though not quite up to an airburst-strength pulse. Space spec has to withstand high-energy radiation such as cosmic, X- and gamma rays, way beyond what you'd encounter 5 miles below a thermonuclear burst; otherwise it'll get outside the Van Allen belts and simply die.

In an embedded system - particularly a critical system - you usually have software aware of the state of interfacing hardware. Additionally, you should have some redundant systems so you can handle a hardware fault on one of them. The article says "two chips failed," with no further details. I'd assume the guys calling it a software error are doing so for a reason - likely those chips were part of some databus interface, D/A or A/D converter, or something that the software *talks* to (as opposed to runs on)

If only. The reason ICs cost so little is that the cost is spread out over millions of parts. As my analog circuits prof would say: "Your very first IC off the line is going to cost a million dollars. Everything else after that is free." So buying one or two ICs that are radiation hardened is probably going to cost that much, since they will most likely be custom. Now that's not to say they can't reuse some of the masks for an existing IC to make it cheaper, but it won't be that much cheaper. My guess is that they would want to redesign the part anyway if it is going to be in a radiation-intense environment. The radiation could cause some weird quantum effects in the IC, which might mean they want the transistors to be larger for reliability purposes. But that last part is just a guess, since I am not an IC designer and thought my electronic materials class was nothing short of voodoo.

Long story short, they probably saved more than $5 for using a COTS part, but they probably lost the probe by the part not being radiation hardened.

I dunno, seems to me it'd be quicker just to order your parts from Digikey instead of going to Radio Shack, buying a cell phone and contract, then dismantling the phone to desolder the part you need (and hope you didn't bust the part in the process)... Sure, Radio Shack is convenient for a lot of things, as long as all of those things are cell phones and expensive Ethernet cables.

When I worked in the test equipment industry, we had a term for the lowest grade of parts that still worked when binning components: The radio shack bin. I once built part of an emergency prototype for a test equipment cooling system with radio shack parts. The prototype was sent to Taiwan where it failed prematurely due to the marginal components. Never Again!

How much did they save by using Radio Shack parts in a Mars probe? $5.00 even?

This is not the first time something like this happened to the Russians. In the 1970's, the Soviet Mars 4 [wikipedia.org] probe failed in flight. The reason? Due to cost savings, the transistors used had had their gold parts replaced with aluminium ones, which were prone to chemical degradation (a.k.a. corrosion). The Soviets then realized that they had manufactured three more probes of the same series using the same (unfit) transistors. Now what did they do? Of course they launched them! Guess what happened? Mars 5 failed

Space Micro [spacemicro.com] doesn't list the prices of their components or systems, nor can I find any from anyone else. Honeywell [honeywellm...ronics.com] don't list their prices either. Atmel seem to have dropped out of the field. Linear [linear.com] don't list the prices for their space-hardened stuff. Don't see any for BAE [baesystems.com] either, or Intersil [intersil.com]. Empire Magnetics [empiremagnetics.com] require a lot of personal data before they give you access to even the price classification information. Not the prices, just how they're classified.

You've got to allow for a year's worth of traveling outside of an atmosphere and then operating on Mars for the duration of the mission. This analysis of radiation for manned missions [esa.int] suggests you're looking at 3.5 mSv per day, then 20 rems per year [solarstorms.org] in most of the places of interest.

I'm going to figure that the top-line components will cost 100x that of their conventional counterparts, due to the higher-level of precision and QA that are required. It might well be a good deal more. In Russia, you've also got to pay for smuggling decent-grade hardware out of the US, as all of this stuff will be under massive amounts of regulation.

My guess is that the cuts would have saved enough that those doing the cost-cutting could buy second homes in Switzerland.

In my experience... hardware problems are acceptable if there's a software work-around. Special acknowledgement isn't given to software for fixing hardware bugs... it's just expected since hardware is arguably more expensive to change.

The summary is so contradictory because it quotes from 2 articles, and each of them is completely different. One says that the parts were space-tested and fine, and the other says they were never space-certified and were definitely bad. The first one says instead that a software bug caused parts of the system to reboot. The second doesn't know what happened and just blames faulty hardware.

In other news, U.S. radars were not responsible for the highly confusing and contradictory summary posted this morning to a Slashdot story about Russia's Phobos-Grunt probe. A thorough investigation has determined that the story's chips should have been able to withstand the radiation received when the story was transmitted through the intertubes and routed over northern Alaska. Instead, investigators blamed a typing failure on the story editors. "A series of tests showed that the editing was lousy and sloppy, and disciplinary action will be taken on those responsible," a spokesman said.

Fun to read the comments here. I've done embedded stuff, and you need to be defensive. You can see at a glance who here has never done defensive programming before, or embedded or safety-critical programming: all blaming the hardware. There are 3 states, so you've got 2 bits of input and a disallowed state comes in. Deal with it; don't just curl up and die and blame the hardware designer. There's a 12-bit A/D conversion result stored in two bytes, and there's a 14-bit number found there. Deal with it; don't just curl up and die and blame the... There's a cycle-start button and an emergency-stop button and both are simultaneously on. Deal with it. You reboot a mission-critical (or safety-critical!) CPU and a minor auxiliary input A/D doesn't initialize. Do you burn the plant down in a woe-is-me pity party because one out of 237 sensors isn't coming online, or do you deal with it?

Finally, radiation is a statistical phenomenon. There is no such thing as radiation-free. If they used non-rad-hardened parts, it's gonna crash maybe 10,000 times more often. That's OK; you program around that, assuming you know what you're doing. Radiation hardened does not equal radiation-proof. If there was a single bit error, or a latchup, on a rad-hardened unit with a poorly programmed control system, it would have failed just as well; it's just that a rad-hardened chip would have made it a couple of orders of magnitude less likely. A shitty design that has a 1-in-20,000 failure rate due to better hardware instead of 1 in 2 is still a shitty programming design, even if the odds are "good enough" that it makes it most of the time with the better hardware.
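The "deal with it" cases above boil down to sanitizing inputs and resolving disallowed states in the safe direction. A toy sketch in C (the field widths, flag convention, and enum are illustrative, not from any real plant controller):

```c
#include <stdint.h>

/* A 12-bit A/D result stored in two bytes: if stray upper bits show
 * up (the "14-bit number" case above), mask them off and flag the
 * fault instead of propagating garbage or halting. */
static uint16_t adc_sanitize(uint16_t raw, int *fault) {
    if (raw > 0x0FFFu) {          /* out of range for 12 bits */
        *fault = 1;
        return raw & 0x0FFFu;     /* best-effort value, fault logged */
    }
    *fault = 0;
    return raw;
}

/* Cycle-start and emergency-stop both asserted is a disallowed
 * state: resolve it in the safe direction (stop always wins). */
typedef enum { CMD_IDLE, CMD_RUN, CMD_ESTOP } command_t;

static command_t resolve_buttons(int start, int estop) {
    if (estop) return CMD_ESTOP;  /* safety dominates any conflict */
    return start ? CMD_RUN : CMD_IDLE;
}
```

The point is structural: every input has a defined behavior for every possible value, including the "impossible" ones, so a flipped bit degrades the system instead of crashing it.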

The editors no longer write or read anything. They just cut and paste. Submitters no longer write anything, they just copy the first paragraph or two of an article. I swear that some days all of the articles are probably just submitted by a very short perl script.

10. "Mars probe? What Mars probe?"
9. Forgot to use The Club
8. Those lying weasels at Radio Shack
7. Too much Tang
6. Made by G.E.
5. Them Martians musta shot it down with a ray gun
4. Heh, heh, heh... Our space probe sucks -- heh, heh, heh
3. At least we didn't blow all our money on some dork screwing around with a car phone
2. Remember Watergate? Well, Nixon's up to his old tricks again!
1. Space monkeys

The Planetary Society entry says that two modules failed and then the main computer crashed. It's probably irrelevant whether the computer crashed or not if there were significant failures in the electronics. Perhaps if the computer had kept going there would have been some communication of what had gone wrong.

One of the commenters wrote "It is rather unlikely radiation caused the failure. Russians said the failure was due to an SRAM WS512K32V20G24M from White Electronics. This part is a module containing 4 CY7C1049 chips from Cypress and is actually screened. While the Cypress part is very susceptible to Latchup," No idea if this is true or not.

It's worth noting that the Space Shuttle's navigation system had four identical computers that all 'voted' on the result, and if one disagreed it took itself out of the system. And there was a fifth computer running backup software developed by a different company, in a different programming language, that monitored the others and could take over. In retrospect, I think that's a pretty good idea. Having two independently developed implementations makes having the same programming error occur in both systems very unlikely.
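The voting idea can be sketched in a few lines of C. This illustrates the general 2-of-3 majority technique, not the Shuttle's actual implementation (which voted among more channels at the output level):

```c
#include <stdint.h>

/* Bitwise 2-of-3 majority vote across three redundant results:
 * any single corrupted channel is outvoted bit by bit. */
static uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (a & c) | (b & c);
}

/* Identify a dissenting channel (0, 1, or 2) so it can be taken
 * offline, or -1 if all three agree. */
static int tmr_dissenter(uint32_t a, uint32_t b, uint32_t c) {
    if (a == b && b == c) return -1;
    if (a == b) return 2;
    if (a == c) return 1;
    if (b == c) return 0;
    return -2;   /* no majority at all: escalate to higher-level recovery */
}
```

Note the limitation the comment implies: voting among identical implementations only masks hardware faults; a common software bug produces the same wrong answer on all channels, which is exactly why the dissimilar backup existed.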

There's hardware to deal with that - a watchdog timer can reboot the system quickly.

Assuming the system comes back up with a working CPU and RAM, the main computer should be able to work around bad peripherals or components on the bus. I think that's what the article is getting at.
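A watchdog of the kind mentioned above can be modeled in a few lines; this is a toy software model, since on real hardware the "kick" is a register write and expiry forces a hard reset rather than returning a flag:

```c
#include <stdint.h>

/* Minimal model of a watchdog timer: the main loop must "kick" it
 * before the counter reaches the timeout, otherwise the tick check
 * reports that a reset is required. */
typedef struct {
    uint32_t counter;
    uint32_t timeout;
} watchdog_t;

static void wdt_init(watchdog_t *w, uint32_t timeout_ticks) {
    w->counter = 0;
    w->timeout = timeout_ticks;
}

/* Called from the main loop to prove the software is still alive. */
static void wdt_kick(watchdog_t *w) { w->counter = 0; }

/* Called once per timer tick; returns 1 when the deadline was missed
 * (i.e. the software hung and a reboot should be forced). */
static int wdt_tick(watchdog_t *w) {
    return ++w->counter >= w->timeout;
}
```

The design point matters for spacecraft: the watchdog catches a hung CPU, but as the thread notes, the software still has to come back up sanely afterward and cope with peripherals that didn't reinitialize.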

On military aircraft, they use VMs to run the OS and software. Communication between systems is passed synchronously and requires that each module know the state of the other modules. There is never an assumption that the other system will just work: all messages require acknowledgement and verification of results.