Posted
by
Soulskill
on Saturday August 30, 2014 @10:54AM
from the red-rover-red-rover-send-updates-right-over dept.

An anonymous reader writes: NASA's Opportunity rover has been rolling around the surface of Mars for over 10 years. It's still performing scientific observations, but the mission team has been dealing with a problem: the rover keeps rebooting. It's happened a dozen times this month, and the process is a bit more involved than rebooting a typical computer. It takes a day or two to get back into operation every time. To try and fix this, the Opportunity team is planning a tricky operation: reformatting the flash memory from 125 million miles away. "Preparations include downloading to Earth all useful data remaining in the flash memory and switching the rover to an operating mode that does not use flash memory. Also, the team is restructuring the rover's communication sessions to use a slower data rate, which may add resilience in case of a reset during these preparations." The team suspects some of the flash memory cells are simply wearing out. The reformat operation is scheduled for some time in September.

It's running on solar power; that's how it lasts 10 years. Though the rechargeable battery must be tough to take so many rechargings.

Ideally, you have redundant systems for such a situation, where you can take one of them down and use the other to do the booting, formatting, and programming, as if there were a user sitting right next to it. They say it has a flashless mode of operation, but the way I think of it, as on a regular PC with a BIOS, you can reformat the hard drive without booting off of the drive you're formatting.

tl;dr on the whole post BUT... I've had my iPod nano in daily use for the past 8 years and it's still going strong. True, it doesn't need to power any motors - but the design specs probably also allocate a lot less weight to the battery.

Sometimes when I sound mocking, ironic and sarcastic, I'm actually serious, as in ironic-ironic, or sarcastic-sarcastic. A lot of Americans simply smack the phone down on Indian tech support, saying gimme somebody who speaks English. I patiently listen to them struggle through it.

As it happens, for flash, read errors are often transient. A better model than DRAM-style ECC is to treat it more like a disk drive, with checksums on each block. If you get an error, reread the block. And if you have a problem writing a block (e.g. the readback is wrong), just use a new block. Surely you've noticed that your USB thumbdrive gradually gets smaller over time as blocks wear out. (In space hardware, back in 2000, wear leveling was done manually... still is as far as I know... there's no nice rad-hard flash controller to do it for you.)
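
The disk-style model described above (a checksum per block, reread on transient errors, remap on a failed write-back) fits in a few lines. Everything here is illustrative: the dict standing in for raw flash, the retry count, and the spare-block pool are all made up for the sketch.

```python
import zlib

MAX_REREADS = 3

flash = {}            # stand-in for raw flash: block address -> stored bytes
bad_blocks = set()    # blocks retired after a failed write-verify
_next_spare = [1000]  # hypothetical spare-block pool pointer

def _fresh_block():
    _next_spare[0] += 1
    return _next_spare[0]

def _verify(addr):
    record = flash.get(addr)
    if record is None or len(record) < 4:
        return False
    data, crc = record[:-4], record[-4:]
    return zlib.crc32(data).to_bytes(4, "big") == crc

def write_block(addr, data):
    """Append a CRC32 to the payload, write, then read back to verify.
    If the readback is wrong, retire the block and use a fresh one."""
    record = data + zlib.crc32(data).to_bytes(4, "big")
    flash[addr] = record
    if not _verify(addr):
        bad_blocks.add(addr)
        addr = _fresh_block()
        flash[addr] = record
    return addr  # caller records where the data actually landed

def read_block(addr):
    """Flash read errors are often transient, so reread before giving up."""
    for _ in range(MAX_REREADS):
        if _verify(addr):
            return flash[addr][:-4]
    raise IOError("uncorrectable block at %d" % addr)
```

The point of the sketch is the policy, not the storage: reads retry because flash errors are frequently transient, while writes that fail verification retire the block permanently, which is exactly why a worn thumbdrive shrinks.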

ECC use is standard with all flash storage. Flash is so unreliable that it can't be used without it, and it has nothing to do with the hard radiation environment on Mars.
As for wear leveling, it's been standard since at least 1990 with the first attempts at flash storage. Why the rovers don't do it, I don't know. Maybe because it requires too many cycles of an already limited processor, plus dedicated storage space to keep "use counts" of all the flash blocks.

This would make an interesting movie plot where they have to recall all the older, laid off rocket scientists working at McDonald's and bagging groceries at the supermarket to reboot an idle probe on a far away planet because it's the only one that can be repurposed to save the earth from an asteroid impact. But only the old guys know the hardware and can reprogram the firmware.

If you're so smart, why aren't you advocating BCH codes, Reed-Solomon codes, or some other form of forward error correction over the code and data stored in flash, so random bit errors won't corrupt them? What is your super clever alternative?
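
For anyone who hasn't seen FEC up close: the smallest classic example is a Hamming(7,4) code, sketched below as a toy (this is not claimed to be what any rover uses). It stores 4 data bits in 7, and the syndrome computed on decode directly names the position of any single flipped bit.

```python
def hamming74_encode(nibble):
    """Encode 4 data bits into a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d = [(nibble >> i) & 1 for i in range(4)]
    p1 = d[0] ^ d[1] ^ d[3]   # covers codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]   # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]   # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return c[2] | (c[4] << 1) | (c[5] << 2) | (c[6] << 3)
```

BCH and Reed-Solomon generalize this idea to multi-bit and multi-symbol errors, at the cost of more redundant bits and more decode computation, which is the trade-off the thread is arguing about.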

You're a poster child for Dunning-Kruger [wikipedia.org]: some random on the Internet who thinks he's smarter than the folks who designed a Mars rover that lasted over 10 years past its 90-day expected life.

There's also the matter that better ECCs cost more overhead. You can detect single-bit errors with a simple parity bit, but double errors will go undetected. And even something like Reed-Solomon can't correct all the errors it can detect. Spacecraft going to Mars have very limited mass budgets; there are often better places to spend the extra mass than on an additional redundant flash chip (and associated circuitry).

You can detect (2^32-1)/(2^32) of all possible failure patterns with a 32-bit CRC. Combine that with a multiple-bit error correction code (with most correction schemes, n bits can be corrected with 2n redundant bits), and the CRC can then tell you whether the correction actually recovered the data.
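
The detection claim is easy to demonstrate with an ordinary CRC-32 (Python's zlib is used here purely as a stand-in; nobody is suggesting the rover runs zlib). At this short a message length, the CRC-32 polynomial detects all errors of up to 4 flipped bits, so none of the corruptions below slip through.

```python
import random
import zlib

payload = bytearray(b"opportunity rover flash block payload")
good_crc = zlib.crc32(payload)

random.seed(42)
undetected = 0
checked = 0
for _ in range(1000):
    corrupted = bytearray(payload)
    # Flip 1-4 random bits somewhere in the payload.
    for _ in range(random.randint(1, 4)):
        bit = random.randrange(len(corrupted) * 8)
        corrupted[bit // 8] ^= 1 << (bit % 8)
    if bytes(corrupted) == bytes(payload):
        continue  # flips cancelled each other out; nothing to detect
    checked += 1
    if zlib.crc32(corrupted) == good_crc:
        undetected += 1
```

In the scheme described, the ECC does the correcting and the CRC acts as the referee: if the CRC still fails after correction, you know the "corrected" data is wrong, e.g. because more bits flipped than the code could handle.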

Most of the hardware cost is the launch vehicle, not the rover. Most of the people (salary) cost is the people working on the data generated (all across the universities around the world who analyze the data and write papers), not the designers.

Underspeccing it wouldn't have saved much.

There's one that breaks this rule: the JWST. The endless redesigns alone have gobbled up so much money, I don't believe there will be enough science generated by it to cover the build & launch costs.

You're a poster child for Dunning-Kruger: some random on the Internet who thinks he's smarter than the folks who designed a Mars rover that lasted over 10 years past its 90-day expected life.

Not too often but occasionally the stupid get lucky and in some perverted way lack of knowledge and consideration of detail can lead to better outcomes.

After a while one has to admit that having to be careful every time you transmit, for fear commands could be misinterpreted, or designing something that knowingly writes continually to flash memory using a DOS-era FAT filesystem, is not a winning play, no matter how many reliability arguments you throw at the wall and expect to stick.

Actually I am an engineer who has designed many error correction circuits for communication and storage systems. I think I know how much I know about error correction systems, which is plenty for this conversation.

While the statement was made in Slashdot-jackass style, the question is legitimate. Why didn't they do any, or more, ECC on the flash that is failing? There is probably a perfectly fine answer like "We knew the expected error rate, and it was designed to last the planned mission."

On the other hand, it is all still working. It reboots occasionally. My computer does that. By reformatting, they will map out any bad sectors, which is probably the issue, and it'll run for another 10 years. Sounds like a smart technology tradeoff to me. Use cheap, off the shelf hardware, and KISS it to death. Write a special driver, or build special hardware to do ECC, and you end up with a bug that causes the system to freeze in an unrecoverable way.

The chances are that "reformat" isn't what we think and includes one or more of:

1) Rewriting cells and allowing wear-levelling and sector replacement to take place, and marking bad sectors as bad.
2) Write-testing and manually avoiding those sectors that don't perform as expected.
3) Rewriting all the critical storage functions to avoid the already-known bad sectors.

It's the kind of thing that anyone can play with. Not saying it's not risky on a remote device, but BadRAM etc. patches have been in place for years, and that's a way to run Linux on machines with faulty *RAM*, not just long-term storage.

Many years ago, a bad sector on your hard drive was something you found out with scandisk (or previous tools) and then it was marked as bad and that was the end of that. Your PC wouldn't use it and so long as it wasn't the boot sector, that was the end of that. It was only the "creeping" bad sectors, where you got more bad sectors over time, that would really worry anyone.

I imagine that it's not at all difficult to make sure that multiple boot sectors were in place if you really wanted to but why bother? The chances are billions to one. Chances are this hardware has MUCH better fault tolerance and multiple hardware watchdogs, firmware, and boot attempts to make sure it eventually gets back up SOMEHOW.

There's a reason that even FAT stores two copies of the allocation table, why Linux ext filesystems store multiple copies of the superblock, etc. They come from a legacy where the occasional bad sector wasn't a problem and where 20Mb of hard drive cost more than the computer did so it was better to cope with the fault than just tell people to buy a new one. And their predecessors were (and still are) mainframes with hardware that's just that fault-tolerant in the first place anyway.

It's not at all hard to write a filesystem that can cope with not only damage, but even recurring damage. You've seen PAR files presumably? The same could easily be done on a filesystem-level basis (and I imagine, somewhere, already is for some specialist niche).

It's not that big a deal once they KNOW that's the problem. The biggest problem is that they only "suspect" that's the problem.

Hah, I remember running the DOS debugger, poking into a certain address in the memory to access the MFM BIOS, then you could do a low level format where you could enter the sectors to mark as bad. Those were the days...

I always thought that the disk controller should do idle scrubbing. Are there any modern SATA disks that do this?

No, the drives themselves don't do this because it pulls the head away from where the host wants/expects it to be. This would result in a lot of unexpected thrashing. If scrubbing is to be done, it is best done by the OS as a background task.

You've seen PAR files presumably? The same could easily be done on a filesystem-level basis (and I imagine, somewhere, already is for some specialist niche).

While all hard drives now do their own Hamming error correction (or something better), RAID2 is the same idea for "raw" storage that doesn't: you write explicit ECCs to redundant volumes to allow recovery from both drive loss and bad sectors.

RAID5 with modern drives gives all the same resiliency, as the drives do the block-level ECC themselves, so you never see RAID2. But for a pile of flash memory, that's the filesystem-level equivalent of PAR files.
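
The RAID-5/PAR idea described above is just XOR parity, small enough to show inline (the "drives" here are toy byte strings, obviously):

```python
def xor_blocks(blocks):
    """XOR equal-sized blocks together byte by byte."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # three "drives" worth of data
parity = xor_blocks(data)            # the redundant parity "drive"

# Lose any one block; XOR of the survivors plus the parity rebuilds it,
# because x ^ x = 0 cancels every surviving block out of the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
```

This is also why a parity scheme alone can't tell you *which* block is bad; it can only rebuild a block you already know is lost, which is where per-block checksums (or PAR2's Reed-Solomon) come in.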

It's not at all hard to write a filesystem that can cope with not only damage, but even recurring damage. You've seen PAR files presumably? The same could easily be done on a filesystem-level basis (and I imagine, somewhere, already is for some specialist niche).

You mean like RAID-5? Because RAID-5 was part of the inspiration for the PAR2 format.

Not a modem reset. The filesystem on Spirit had a bunch of temp files and other stuff from the Earth-Mars flight, and apparently it just ran out of inodes. So basically they had to remote into whatever constitutes a bootloader, with 20 minutes of latency, and remove some of the no-longer-needed files.

I dunno so much these days. It's 10 years old and has a few miles on the clock, plus collection for the new owner would be an issue. On the plus side, vandalism won't be a worry. For a few centuries, anyway.

Why didn't they plan ahead for this sort of operation in the beginning, making it as painless and 'reliable' as possible?

That's a joke, right? We are talking about one of the two rovers [xkcd.com] that was sent to Mars on a mission planned to only last 90 days. They didn't see "flash memory wearing out from use" as a contingency they needed to plan for.

You're a poster child for Dunning-Kruger [wikipedia.org]: some random on the Internet who thinks he's smarter than the folks who designed a Mars rover that lasted over 10 years past its 90-day expected life.

I believe NASA is operating under the assumption that the rover's on board flash memory is still serviceable. 10 years ago flash memory was still in its relative infancy. A reformat and reload risks bricking the rover completely.

I believe you're assuming that the flash used on a rover that went to mars, and encounters all kinds of crazy radiation, is in some way similar to the crappy OCZ thing you stuck in your PC 10 years ago.

You're a poster child for Dunning-Kruger [wikipedia.org]: some random on the Internet who thinks he's smarter than the folks who designed a Mars rover that lasted over 10 years past its 90-day expected life.

Don't forget, we don't hear what the techies are talking about. What we're hearing is what the techies told the PR guy, distilled down by a journo, summarized in The Register (!) and some other soft-tech sites, and finally turned into an inaccurate summary on the front page of Slashdot.

I wouldn't be surprised if it were just a "fsck.ext4 -cc" (I know it's not ext4; it wasn't even released when Opportunity soft-crashed and bounced around on Mars, nor does the rover run Linux).

We commanded a shutdown, which terminated the current communication window, and the loss of signal occurred at the predicted time. Fifty minutes later, we commanded a beep at 7.8125 bps to alert us if the shutdown command did not work, and much to our disappointment, the beep was received!

Really a fun read... I'm guessing they'll be doing a lot of similar stuff.

Flash memory isn't the Rover's problem. It's still running XP and there are no more hotfixes. At this point the Rover's system has massive "bit rot," not to mention that it's been hacked countless times by the Chinese. Undeterred by this seemingly insurmountable problem, Microsoft has donated a Windows Phone for communications back to Earth and a Surface Pro to power the Rover "because it's just like a computer." They didn't say just who's going to operate their touch-only interfaces. It all makes perfect sense.