
icebike writes "EE Times has a story on how the Mars rover was essentially reprogrammed from millions of miles away. 'How do you diagnose an embedded system that has rendered itself unobservable? That was the riddle a Jet Propulsion Laboratory team had to solve when the Mars rover Spirit capped a successful landing on the Martian surface with a sequence of stunning images and then, darkness.' The outcome strikes me as an extremely Lucky Hack, and the rover could just as likely have been lost forever. Are there lessons here that we can use on the third rock for recovery of our messed-up machines which we manage from afar via ssh?"

It really sounds like they did some decent advance planning on those probes, but from other stories I read, they were shooting for 90 days of reliability, which in itself was a hard target to hit. What if it turns the antenna the wrong way and loses connectivity? What if it gets hit by lightning? What if it falls in a hole? (Go Beagle!)

Sure, relate this to your web server colocated somewhere you're not. Cross your fingers, hold your breath, and hope there aren't a few fatal systems failures, or a bit of human error. I've been responsible for a bit of that in the past, but at least my equipment wasn't a few million miles away.

What if it turns the antenna the wrong way and loses connectivity? What if it gets hit by lightning? What if it falls in a hole? (Go Beagle!)

There is a low-gain omnidirectional antenna that can be used as a backup. In fact, I think they use it most of the time for commands and just use the high-gain antenna for data transfer back to Earth. Which makes sense; they never need to send large amounts of data to the rover.

No lightning has ever been detected on Mars. Though it's not impossible, it is very, very unlikely. No proper observations of the night side of Mars have been done, though, so they may just be missing it.

Actually, a friend of mine is a system admin with JPL, and he had to drive out to the San Bernardino soundstage [216.239.53.104] where the rovers are being filmed and reboot the computer at 4 AM. The funny thing is he left a tool chest and sleeping bag [nasa.gov] (he was using it to minimize footprints and body impressions, not to sleep on the job!) where the Opportunity rover was scheduled to peek over the horizon, and the ensuing photo of the tool chest / sleeping bag on the horizon had to be quickly -- and deftly, I must say -- explained away [msss.com] as being Opportunity's back shell and parachute.

Actually though, it's not too bad an analogy. While Earth-based servers aren't absolutely unreachable like Spirit, they are often remote, and there are expenses associated with visiting them in person.

Various schemes now exist to help deal with that. Many boards have a small management processor (a BMC, server management board, IPMI, whatever) that is used for remote diagnostics and reconfiguration when the main board won't even boot.

Meanwhile, LinuxBIOS supports two complete BIOS images: one "old reliable" image that, once working, is never changed, and one that can be upgraded freely. Coupled with a watchdog card or timer, it's decently manageable in the field. That work is continuing.
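The dual-image trick boils down to a boot-attempt counter kept in nonvolatile storage. Here's a toy sketch of the selection logic (the names and the three-strikes threshold are my invention, not LinuxBIOS's actual mechanism):

```python
class ABBootSelector:
    """A/B image selection like the dual-BIOS scheme above: each boot of
    the upgradable image bumps a counter (which would live in NVRAM on real
    hardware); a successful boot clears it; too many unacknowledged attempts
    in a row and we fall back to the never-touched image."""
    MAX_ATTEMPTS = 3

    def __init__(self):
        self.attempts = 0  # persisted in NVRAM on real hardware

    def select_image(self) -> str:
        if self.attempts >= self.MAX_ATTEMPTS:
            return "old-reliable"  # the never-modified factory image
        # A watchdog reset without a later mark_good() leaves this counter set,
        # which is what eventually triggers the fallback.
        self.attempts += 1
        return "upgradable"

    def mark_good(self) -> None:
        """Called by the OS once it's fully up and reachable."""
        self.attempts = 0
```

The key design point is that "boot succeeded" is asserted by the running OS, not by the firmware, so a kernel that wedges halfway up still counts as a failed attempt.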

Meanwhile, IBM is pushing the 'blue button' that forces a software reload from an image partition.

In that sense, the problem is strongly analogous. Most of us will not, however, encounter the exact problem that Spirit had, though some embedded device developers just might.

If you're really worried about your remote server being unreachable, here's what I would suggest doing:

Have a hardware watchdog. If the machine is lost or confused, it reboots itself.

Have it come up in a known state, fire off a few broadcast packets to the sysadmins, and run sshd but basically nothing else. Stay there for a minute or so.

If nobody's tried to log in and halt the boot process, carry on booting. With luck the problem was transient. Worst case the problem still exists, you reboot, and the admins get another chance to log in.
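As a sketch, that "pause and listen" stage might look something like this (the beacon port, flag-file path, and 60-second window are all made-up details; a real version would be wired into the init system and would actually launch sshd):

```python
import os
import socket
import time

ADMIN_BROADCAST_PORT = 9999       # hypothetical port for the "I'm alive" beacon
HALT_FLAG = "/run/halt-boot"      # an admin who sshes in touches this file
RECOVERY_WINDOW_SECONDS = 60

def send_beacon(message: bytes, port: int = ADMIN_BROADCAST_PORT) -> None:
    """Fire a broadcast packet so the sysadmins know we came up in recovery state."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.sendto(message, ("255.255.255.255", port))
    sock.close()

def admin_wants_halt(flag_path: str = HALT_FLAG) -> bool:
    """An admin who logged in during the window creates the flag file to stop the boot."""
    return os.path.exists(flag_path)

def recovery_window(window: float, flag_path: str = HALT_FLAG,
                    poll: float = 1.0) -> bool:
    """Wait up to `window` seconds; return True if an admin asked us to halt."""
    deadline = time.monotonic() + window
    while time.monotonic() < deadline:
        if admin_wants_halt(flag_path):
            return True
        time.sleep(poll)
    return False

if __name__ == "__main__":
    send_beacon(b"recovery-boot: sshd up, continuing boot in 60s unless halted")
    # (a real script would start sshd here, via the init system)
    if recovery_window(RECOVERY_WINDOW_SECONDS):
        print("boot halted by admin")
    else:
        print("no admin intervention; continuing normal boot")
```

Worst case, as the parent says, the boot continues, the machine wedges again, the watchdog reboots it, and the admins get another 60-second window.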

From the description of how they got Spirit back, it looks like this is exactly how it was set up.

Considering the distance, I'd say a while. A couple of hours doesn't make much difference when you've got a billion-dollar probe on another planet; its surviving is more important than a fast boot time, heh. And you can always log in and tell it to continue booting.

Well, this presupposes that whatever caused the problem in the first place didn't mess up the hardware watchdog as well.

Nothing's perfect. It also presupposes that the sun didn't explode and vaporize the Earth, and that God didn't get ticked off and squish it with his thumb. So what?

A watchdog is a VERY simple device. A simple countdown timer, a control register with associated address decode, etc. It's quite unlikely to fail. When the timer hits zero, it strobes reset. Any access to the port address resets the countdown timer.
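For the curious, here's a software model of exactly that countdown behavior -- pet it and the counter reloads, starve it and reset strobes. (On Linux boxes the real thing is typically exposed as /dev/watchdog; this is just an illustration of the logic, not a driver.)

```python
class WatchdogTimer:
    """Software model of the countdown watchdog described above: any 'pet'
    (i.e. any access to the port address) reloads the counter; if the
    counter ever reaches zero, the reset line is strobed."""

    def __init__(self, timeout_ticks: int):
        self.timeout_ticks = timeout_ticks
        self.counter = timeout_ticks
        self.reset_fired = False

    def pet(self) -> None:
        # Any access to the watchdog port reloads the countdown timer.
        self.counter = self.timeout_ticks

    def tick(self) -> None:
        # Called by the (simulated) clock; a wedged system stops petting,
        # the counter runs down, and reset fires.
        if self.reset_fired:
            return
        self.counter -= 1
        if self.counter <= 0:
            self.reset_fired = True  # strobe reset
```

A healthy system pets the watchdog from a periodic task; a hung one can't, which is the whole point.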

Some dual processor boards are even set up to alternate which is the boot processor, so they can come up with a single failed CPU.

There is always some sort of problem that precludes recovery. No amount of software or clever design can help you if the device is destroyed. However, that doesn't mean don't even try.

In other news stories, the Microsoft Corporation decided to sue NASA, apparently since the right to crash systems was only theirs. Not to be left behind, SCO insisted that the code that caused the failure was unethically copied from their source repositories.
This has indeed caused a flutter in the space communities.

Using the low-level commands, about a thousand files and their directories -- the leftovers from the initial launch load -- were removed.

I think that means they deleted the useless stuff they wanted to delete anyway but didn't get to delete before the crash. I also remember news about science data from before the crash that was received after they got the rover working again.

As for how critical it is, well, yeah, it seems the rover didn't need the contents of the flash file system. The operating system and other software were in the same flash memory, but I assume any sane designer would put in some hardware write-protect interlock that's not easy to defeat accidentally.

It's a PC user's nightmare: You're almost done with a lengthy e-mail, or about to finish a report at the office, and the computer crashes for no apparent reason. It tries to restart but never quite finishes booting. Then it crashes again. And again.

Getting caught in such a loop is frustrating enough on Earth. But imagine what it's like when the computer is 200 million miles away on Mars. That's what mission controllers faced when the Mars rover Spirit stopped communicating last month.

...

Tech support for an $820 million mission is a cautious affair. Tools to recover from and fix any problem must be built into the system before launch. The systems' behaviors need to be completely understood and predictable.

"Luckily, during the design period, we anticipated that we might get into a situation like this," said Glenn Reeves, who oversees the software aboard the Mars rovers Spirit and Opportunity at NASA's Jet Propulsion Laboratory.

For stability, reliability and predictability, mission designers did not bust the budget and design the hardware or software from scratch. Instead, they turned to hardware and software that's been used in space before and has a proven track record on Earth as well.

"The advantage of using commercial software is it's well-known, and it's well deployed," said Mike Deliman, an engineer at Alameda-based Wind River Systems Inc., which made the rovers' operating system. "It has been used throughout the world in hundreds of thousands of applications."

The operating system, VxWorks, has its roots in software developed to help Francis Ford Coppola gain more control over a film editing system. But the developers, David Wilner and Jerry Fiddler, saw a greater potential and eventually formed Wind River, named for the mountains in Wyoming. VxWorks became a formal product in 1987.

Actually any technology making it into space is more likely to be 10 years out of date... Getting anything certified for space is a long process. The technology in space isn't more advanced, just much better documented and well-understood.

That's the thing that amazes me. Any technology having to do with space seems that much more advanced.

Here on Earth we can't even build cars that require no maintenance and last more than 10 years.

Most of the stuff in space that lasts ten years usually has no moving parts, which is what generates much of the maintenance requirements on your car. Nor does it have parts to get fouled, corroded, or otherwise mucked up by the environment or the operation of the car.

And frankly, if your car isn't lasting ten years, then you bought junk in the first place. Of the four cars I've owned, not one has had a lifetime of less than ten years. Three of them were already older than that when they came to me, and none lasted me less than four years. (Other than the one that got repossessed, but I had that one three years.) But then I invest in regular maintenance, don't leadfoot, etc...

They make a lot of money from loyal customers. But I admit my 13-year-old '91 Honda Civic with 140k miles is getting on my nerves with repair costs. Would a '91 Ford Escort still be running today? I think not.

I will buy only Toyotas and Hondas for that reason.

It amazes me that consumers are too stupid to read Consumer Reports and instead buy cars on looks. Repair costs for things like Cadillacs and BMWs make for an expensive TCO! Yes, consumer products have a TCO too, and we, not just businesses, should look at that as well.

I have a '90 Ford Mustang, you insensitive clod. Still runs strong today, has like 107,000 miles on it, and I'm sure it'd destroy your Civic in a race ;-P The only money I've really been spending is on a tune-up and new tires (the old tires were crappy and leaking air). And besides, when someone buys a Cadillac or BMW (and god damn it, it's Toyota, what the hell is Toyata?) they don't care about the price. When you're going to spend $30,000 on a "cheap" BMW 3 series you're not gonna care that it's going to cost

No. You can't make a mechanical device like a car that requires no maintenance. Bearings wear out. Hoses and belts have a limited lifespan even if you never drive the car, etc. This is the real world. We will obey the laws of thermodynamics. Entropy always wins.

What you can do is make it require less maintenance, make that maintenance cheaper to perform, and make the car last until you hit something really hard, so long as you maintain it. You should be able to hand your car down to your kids.

I hope they use SSH or something... who's to say on a future mission some hax0r doesn't grab control of a space probe and have it send goatse.cx pics back??

All it takes is a transmitter out in the middle of nowhere in Africa or on some island... after all, the probe communicates using known frequencies. There may be problems picking up the return signal without an expensive antenna, I suppose. But then again, maybe some hax0r could build one cheaply, or do what Captain Midnight did (www.signaltonoise.net/library/captmidn.htm).

I wouldn't worry about signal jamming, though, as that would probably be discovered easily.

UDP would be even worse. Interplanetary transmission is difficult, so some packet loss is likely. Under UDP the packets would just disappear; it's an unreliable protocol. TCP would of course be too inefficient. I'd expect them to use a custom protocol designed for the specific application, since their situation is totally unlike anything you'll face on Earth.
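Something in the spirit of a NAK-based selective-retransmit scheme makes sense when the round trip is tens of minutes: send everything, let the receiver report exactly which chunks are missing, and retransmit only those on the next pass. A toy simulation (not any actual deep-space protocol -- the chunk numbering and the `drop` set are my inventions):

```python
def split_into_chunks(payload: bytes, chunk_size: int) -> dict:
    """Number each chunk so the receiver can report exactly what's missing."""
    n = (len(payload) + chunk_size - 1) // chunk_size
    return {i: payload[i * chunk_size:(i + 1) * chunk_size] for i in range(n)}

class DeepSpaceReceiver:
    """Accumulates chunks across passes and NAKs only what's missing, so a
    long round trip costs one retransmission pass, not one per lost packet
    (which is what TCP's per-segment acknowledgments would cost you)."""

    def __init__(self, total_chunks: int):
        self.total = total_chunks
        self.received = {}

    def take(self, seq: int, data: bytes) -> None:
        self.received[seq] = data

    def missing(self) -> list:
        """The NAK list sent back to the transmitter."""
        return [i for i in range(self.total) if i not in self.received]

    def assemble(self) -> bytes:
        assert not self.missing(), "still waiting on retransmissions"
        return b"".join(self.received[i] for i in range(self.total))

def transmit(chunks: dict, rx: DeepSpaceReceiver, drop: set) -> None:
    """One communication pass over a lossy link; `drop` simulates lost packets."""
    for seq, data in chunks.items():
        if seq not in drop:
            rx.take(seq, data)
```

This is essentially what the article describes happening with the two-part cleanup utility: part two was lost, and the protocol scheduled it for retransmission on a later sol rather than stalling the whole link.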

I routinely reboot and reprogram machines in our data-center that is 2000 miles away from me.

As long as all hardware components are working and there is connectivity to the machine, it doesn't matter whether the machine is a few miles away or a million miles away.

You are too humble, friend. What you do routinely and without thinking is nothing less than a miracle of modern science. A miracle that you take part in every day. And because of men like you, we don't have to rely on the abacus anymore. We sent a Pentium to the Moon, and soon, Mars will be colonized by G5s. America salutes you, for all the things that you do.....

The tricky part here was that the 'hardware connectivity' depended on 'software functionality'. Try maintaining a machine a block away if the communication link requires both ends to point a satellite dish at an orbiting satellite, and that pointing relies on software functioning correctly.

But at NASA, you have a local replica of the whole system sitting in the lab next door, you're in a team of professionals that if necessary can calculate the most probable results of particular radiation hitting your system under a given angle, or can tell you the power usage and temperature effect of the system components given a particular subroutine, or can dream low-level correct assembly for the platform under study, plus the vendor has a couple of on-line support guys sitting in chairs in the corner of your office waiting for your activation command (which is the word "huh?")...

There is a big difference between this, and your example of forcing a controlled reboot of your remote machines.

Spirit was in a constant reboot cycle, and the fact that they could even communicate with it long enough to bypass the problem was an accomplishment (and lucky).

It would be more similar to your remote data-center machine suddenly going offline with you having no idea why and no way to ssh to it, and you fix it by running through potential scenarios, finding that the problem could have been due to mounting a certain partition, and then discovering that there's an exploit in ICMP that lets you hack the kernel so it doesn't mount that partition.

Did you RTFA? The rover was rebooting over and over because it was using up all of its memory... then eventually the batteries ran low, so it went into a sort of 'safe mode' where only the absolute minimum was loaded, and that's when NASA was able to communicate with it again...

It was nothing like what you described, just a VERY well designed system (though it would have been somewhat better had the system been able to go straight to "safe mode" after the initial critical error of running out of memory).

Are you forgetting that the latency when communicating with Mars averages around 1,200,000 ms? I'd say that when you have to wait 20 minutes to see the result of anything you do, you're going to have to substantially change your debugging strategy.
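The 20-minute figure is just light-time, and it's easy to sanity-check:

```python
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Light-time from Earth to a probe at the given distance."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60.0

# Earth-Mars distance varies with the orbits: at closest approach
# (~55 million km) the one-way delay is about 3 minutes; near the far
# end of the range (~360 million km) it's about 20 minutes -- i.e. the
# ~1,200,000 ms figure above. Round-trip, double it.
```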

Actually, I remember NASA doing a hardware repair from most of the way across the solar system. One of the deep space probes was starting to have a problem sending signals. Some bright mind at NASA looked at the circuit diagram and figured out that a single component (resistor, cap, can't remember) was starting to fail, and that there was a way to recondition the part. So they came up with a program that intentionally overstressed that component path, and the extra energy heated up the part and reconditioned it so that the unit was back in working condition.

As long as all hardware components are working and there is connectivity to the machine, it doesn't matter whether the machine is a few miles away or a million miles away.

That's just it - consider the stress those rovers are enduring or might encounter: subzero temperatures down to -200F, out-of-the-blue (red?) sandstorms, gamma radiation, and who knows what else out there that could suddenly fsck with the systems or scramble internal data? Your average Dell rack will never have to deal with any of those.

Only YOU can fully appreciate the difficulty of running a format c: command, while swilling a room temperature can of Red Bull.

"Hey this stuff is hard now!"

While NASA is too preoccupied with things like faraway rovers, you take your vocational-tech-school-fueled arrogance directly to the place where it will make the absolute least possible impact: a Slashdot discussion thread.

"Loggin' on now!"

Your unique eye for obviousness allows you to sling turds of obtuseness every which way, and then brag about how you were RIGHT as soon as one of your pronouncements hit true - regardless of how many times you were wrong before.

"See I told you sooooooo!!"

And if some idiot rocket scientist has the unmitigated gall to not bow down to your obvious Geniusdom, you unleash your fury down upon him with all the tenacity and mercilessness of a rabid pit bull with a tender buttock locked in its jaws.

"Total anonymity!"

So keep clicking away, oh Marauder of the Mousepad. Because when the results you so desire finally come about years from now, you can say it was because YOU demanded it.

I've thought long and hard on this topic, and yes, on Windows it is accurately called the Recycle Bin, because you don't get rid of the junk you put in there; it gets reused in some other part of your system. You put junk in, the junk is modified into other junk and then sent back to create new system DLLs. In Linux (and I believe Macs) it is accurately called the trash can, because what we put in there is thrown out for good; we don't have our junk recycled to create more, but different, junk. :) Regards, Steve

...would have been to have "fixed" the problem before the hardware left Earth. This "bug" (or more accurately, known limitation of the filesystem) should have been discovered here on Earth if the rover had been properly tested.

The only real bug was the inability of the system to properly handle running out of file entries (or more specifically, consuming too much RAM as the number of file entries increased). However, the software should never have stressed the filesystem to that degree in the first place.

The only real bug was the inability of the system to properly handle running out of file entries (or more specifically, consuming too much RAM as the number of file entries increased). However, the software should never have stressed the filesystem to that degree in the first place.

When you can write an embedded operating system that can gracefully and automatically recover from every possible thing that might ever go wrong, perhaps you should send your resume to NASA.

The rovers were extensively tested before launch. For example, NASA took about 100,000 pictures with the test panoramic cameras under varying conditions to see how they would react. NASA put a test rover on a tilting platform to see how far the rover could tilt before it capsized, to find out at what angle the electric motors could no longer drive the rover up a hill, etc.

This limitation of the filesystem was known about ahead of time. If you had read the article, you'd have known that. They had a utility to clean out the rover's filesystem, but a storm at the Deep Space Network site that was supposed to transmit it prevented the second half of the utility from being uploaded to the rover. And before you say anything else, the article also mentioned that the people involved had thought of this possibility ahead of time.

The article (I know, I know, this is Slashdot) is really good. It contains everything that is missing from traditional media: the story, the background, the technical details, and the follow-through.

Granted, mainstream media have to keep their coverage dumbed down if Joe Public is going to read it. But what really bugs me is the lack of follow-up. We hear about poorly understood events as they are unfolding, then never hear about them later when they are completely understood.

A recent example is the gangway between ship and shore at the QM2's drydock. It collapsed, killing lots of people, and an investigation was launched. Why did it collapse? At the time it wasn't known. I'm sure it's known now, but there's been absolutely no follow-up.

This article about the rover is great not so much because of the level of detail but because it reports on an event with the benefit of hindsight.

I'm sure there will be at least some mention of the results of the investigation when it is completed and various persons are prosecuted. In the meantime, here's a relatively recent article [yahoo.com] on the investigation into the collapse.

I'm a journalism undergrad at a large university. One of the points I brought up with some of our administrators is that the innumeracy and scientific illiteracy of the graduates of our program is appalling. I think this is one reason why many important stories don't get reported accurately or in depth: the writers simply don't understand the story, and don't want to understand the story. They actually feel that math and science are somehow beneath them, and that the average reader doesn't need to be bothered with the facts. So we get vagueness instead of specifics in the articles we read.

I suggested we allow j-students to substitute math or hard science minors in place of the foreign language requirement. Most graduates of college foreign language programs don't translate at a level any higher than Babelfish. It seems wasteful to force people to spend so much time learning a language that most will never use, when that time could be more productively spent introducing them to the languages of math and science, which they will undoubtedly use in the future. We'd get better reporting that way, and isn't that what going to j-school is all about? Science and technology are too important to our day-to-day lives and governance to be left to illiterates.

What filesystem is used? Is wear leveling being used? The directory structure is apparently stored in RAM during the day (why else would it use so much RAM?), that is a good thing for reducing wear on the flash system. But what's the number of writes on the flash chips? When will that number be reached?

Never; the rovers are only going to operate for ~100 days, and the number of writes for modern flash RAM is 100K cycles minimum, over a million typical. So unless they are really screwing something up, that shouldn't be a limitation. Also, distributing file placement shouldn't be a software function; good CF cards do it in the controller logic.
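The arithmetic behind "never" is worth spelling out. Assuming perfect wear leveling (so every write is spread evenly across the whole device), the write budget dwarfs the mission; the capacity and write-rate numbers below are illustrative, not the rovers' actual figures:

```python
MB = 2 ** 20

def flash_lifetime_days(capacity_bytes: float,
                        endurance_cycles: float,
                        bytes_written_per_day: float) -> float:
    """Total write budget (capacity x per-cell endurance) divided by the
    daily write volume, assuming ideal wear leveling."""
    return capacity_bytes * endurance_cycles / bytes_written_per_day

# e.g. 256 MB of flash with 100K-cycle cells and a generous 50 MB written
# per day gives 256 * 100,000 / 50 = 512,000 days before wear-out --
# thousands of times the ~90-sol nominal mission.
```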

'How do you diagnose an embedded system that has rendered itself unobservable?'

The way you do this is by having an exact duplicate of the remote system, so you can set up a test with conditions as close as possible to those under which the remote system is currently operating. You can then run a series of carefully controlled tests of candidate solutions to determine the optimum before trying it on the "live" system.

This is the way I set up all my production systems and, barring catastrophic hardware failure (self-immolating disks and a router which just folded when its power supply burped) I've had perfect uptime.

(well, ok.. there was that one time, late at night, when I typed "reboot" in the wrong window.. but that happens...)

I had pretty much the same post - the originator of the story confuses luck with skill, a mistake I find very annoying and committed all too frequently. I'll fully admit when I've been lucky, but I also want recognition for foresight when I've had some! NASA deserves at least that much respect.

Your post is the only thing that strikes me as a "Lucky Hack" here. They included the ability in the design to remotely disable booting from flash and upload new boot images; in what way is that a "hack"? All this is just foresight in design to include as many possible recovery modes as they could.

Basically, they rebooted from a recovery image (sent via radio) and then proceeded to do low-level fixes on the flash memory and then a chkdsk. If I do something similar via a recovery disk or CD, I don't get a lot of people telling me it was a "Lucky Hack" that I could boot off of CD!!!

Great article! This is just the sort of thing that has always impressed me about NASA and the JPL. Just when mere mortals might give it up and walk away, they figure out the problem. I can only imagine how wild the party must have been after they fixed Spirit; the scientists and engineers I've worked with in the past could really put away the booze.

Seriously though, the key lessons to take away from this are:

1) Gather all of the clues you can.

2) Take those clues and build a model.

With luck and care, the model should get you closer to what may have gone wrong. And in this case it apparently did just that. Now that's geek cool!

BTW, I know that generally you want to prevent this sort of thing from happening. But in reality most software ships with bugs and launch windows to Mars are non-negotiable.

The first thing needed to achieve remote maintainability on the order of space probes is some way to access a machine remotely when it's not running the full OS. A KVM switch isn't going to work over long distances. The BIOS needs a way to run over the network. Same for the kernel boot messages. Whether it's through a serial console and SSH server, or through the BIOS running TCP/IP, what we have now isn't enough. A separate console server could also control a power cycle/reset switch circuit.

There also needs to be a way to load bootstrap code remotely. For instance, having a TCP/IP enabled BIOS be able to run TFTP or some other protocol to load a netboot floppy image. Then you could give it a LILO command instructing it where to find a boot image, preferably one on a server in the same hosting center.

What surprises me is that they don't have a 'twin' of the rover's computer system set up on Earth. When commands are run on the rover, the same commands could be run on the computer system on Earth. Then, if the rover's software fails (as it did), the software on Earth would (theoretically) fail in a similar way, and be MUCH easier to debug. Of course, the systems wouldn't be identical (without building an entire duplicate and expensive rover), and the data gathered wouldn't be identical, but the twin could be carefully planned and fed dummy data that approximately mirrored the data the rover was gathering. For example, the twin could be fed dummy pictures about as often as the rover took a real picture.

From the article: "[The] transmission that uploaded the utility was a partial failure: Only one of the utility program's two parts was received successfully. The second part was not received, and so in accordance with the communications protocol it was scheduled for retransmission on sol 19." NASA could have simulated a half-failed transfer on the twin computer on Earth, and then watched carefully using traditional debugging tools to make sure the failed transmission didn't cause a software failure (which it did).

Again, from the article: "The data management team's calculations had not made any provision for leftover directories from a previous load still sitting in the flash file system." However, if they had a twin computer system to watch, they would have seen the failure occur on Earth as it did in space. Debugging a system you can hook a serial debugger to is bound to be much easier than debugging a system millions of miles away.
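The half-failed-transfer test described above is basically fault injection, and a toy harness for it can be tiny (the names here are entirely hypothetical, and of course this is nothing like JPL's actual testbed):

```python
def transmit_parts(parts: list, link_up: list) -> dict:
    """Fault-injectable uplink: part i is delivered only if the link
    held up for that part. Pass link_up=[True, False] to replay the
    'only one of two parts arrived' scenario on the twin."""
    return {i: p for i, (p, ok) in enumerate(zip(parts, link_up)) if ok}

def apply_update(received: dict, n_parts: int):
    """What the flight software should do with a partial upload: notice
    the gap and schedule the missing parts, per the comm protocol --
    rather than trying to run an incomplete utility."""
    missing = [i for i in range(n_parts) if i not in received]
    if missing:
        return ("retransmit-scheduled", missing)
    return ("applied", [])
```

Running the update path on the twin with every combination of dropped parts is exactly the kind of test that can be done on the ground with a serial debugger attached.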

Uhmm... we DID build a 'twin' of the rover, hardware and all. Give us a bit more credit, will ya?:-P What you may not realize is that exposure to radiation on the surface of Mars, solar wind while in transit and other factors such as thermal expansion / contraction, etc. are slowly degrading the rovers in nondeterministic ways. It is not nearly as simple as 'running the commands in the testbed' at JPL to diagnose any problems which occur.

They could have set it up out in my backyard to take pictures of the piles of crap and rocks out there, and if they wanted to simulate the solar radiation, they could have my girlfriend give it one of her famous looks... cause those are lethal enough to burn a hole in your soul.

.. namely, "Do Not Use VxWorks". Use something stable instead.
eCos [planetmirror.com] comes to mind.
So does everyone's favorite OS these days [kernel.org], which has RTOS support.
Having been a frustrated VxWorks user in the past, I'd no more entrust my mission-critical services to it than I would to Microsoft.
-- TTK

What really surprises me is that NASA did not verify the software. Software verification [google.com] is essentially mathematically proving the software correct. It is tedious and expensive, but we are talking about NASA and Mars. In fact, even beloved MS formally verifies device drivers [microsoft.com] before use (believe it or not!). If the original program had been correct, they wouldn't have had to re-upload it, and the entire problem... gone.

I've been hearing how great formal verification is since I started this gig. Three decades later, it's still not what Yourdon and his buddies thought it would be. When the first computer scientists were budded from mathematics departments, their mathematical discipline allowed them to do wonderful things, some of which we're still catching up with. But it also gave them some disturbing habits, the worst of which is the insistence that formal verification is the best way to write code, and anyone not doing so must be a fool.

Formal verification is a powerful tool, but as you say, it is expensive and applies to only a limited set of problems. If it were so cheap and so widely applicable, we'd be using it everywhere.

We've poured decades of funding into formal verification, but the useful tools keep coming from other avenues of research. I think it's time to stop beating the formal verification drum.

...is because when the batteries got drained, the OS went into a stable "safe mode" state.
If they had made a longer-lasting power supply, this project would have been doomed, and they never would have found out what the real problem was.

It appears that we still haven't learned the biggest lesson of all. I still remember back around 1970, there was a big sign on the wall next to the IBM 370s at my university, written on a primitive pen plotter, it said:

Computers never make mistakes, they do exactly what humans tell them to do. All "computer errors" are human errors.

The JPL is a pretty viral license. It forces you to spread their space probes from your planet to all your customer's planets. This is un-solar systematic!
What's next? Calling GNUpiter Jupiter instead?

Seriously, from a developer's viewpoint, that is all wrong. I have worked on projects in which there was simply so much logging going on that you couldn't tell head from toe anymore. When a problem arrived, scanning the logfiles proved very cumbersome indeed. Every developer had his own stuff logged, which sometimes proved interesting, sometimes utter crap (no one wants to know that variable XYZ was incremented by 1 for the 24,943rd time).

You should develop a well-thought-out logging strategy that increases logging verbosity on a problem basis, not simply log everything that happens and hope you get some useful information.
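One concrete way to get that "verbose only when there's a problem" behavior, in Python at least, is the standard library's MemoryHandler: buffer the DEBUG chatter in memory and only write it out when an ERROR actually shows up. A minimal sketch (the logger name, buffer size, and in-memory stream target are arbitrary choices for illustration):

```python
import io
import logging
import logging.handlers

def make_quiet_logger(name: str = "app"):
    """Logger whose DEBUG/INFO records are held in a ring buffer and only
    flushed to the real output when an ERROR-level record arrives, so the
    log stays quiet until the context actually matters."""
    stream = io.StringIO()  # stands in for a file or syslog handler
    target = logging.StreamHandler(stream)
    buffered = logging.handlers.MemoryHandler(
        capacity=1000,                  # at most this many buffered records
        flushLevel=logging.ERROR,       # an ERROR dumps the whole buffer
        target=target,
    )
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.handlers[:] = [buffered]
    logger.propagate = False
    return logger, stream
```

The nice property is that when something does break, you get the debug trail leading up to the error, without wading through 24,943 increments of variable XYZ the rest of the time.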

The en-route time for Cassini to get to Saturn was 7 years; rather than push back an already long mission, they launched with feature-incomplete code. They knew they had 7 years to get the software fully functional and debugged; they've updated it remotely from millions of miles away a number of times now.

I'm sure the rovers did the same thing... Develop the launch/cruise software before you launch (and of course try to get as much of the entry/landing code done as you can!), and then uplink the final code before it's needed. Therefore it doesn't surprise me one bit that the JPL engineer knew there were shortcomings in the launch software.

Hell, I develop BIOS for servers and we do it all the time. The BIOS image we give the hardware engineers for initial bringup is usually *way* short of features that will be there when it actually gets used by the customers!

It's not that hard to pull off this sort of seemingly amazing remote recovery with pure off-the-shelf tech, if you plan for it in advance and are willing to pay a modest premium.

You need remote serial console access -- ideally including firmware/bios serial console access --
and remote power cycling, controlled by a small embedded system, either in separate units (APC masterswitch, terminal servers) or as part of the system unit (common on Sun gear as "LOM"/"ALOM"/etc.; some of this is also creeping into x86 mobos). All this lets you regain control of the system remotely.

Then it becomes a matter of hardening the system to let you recover from various other insults.
Never let go with both hands:
Mirrored disks (protecting against hardware failure) and multiple bootable partitions (protecting against software or human error) can both be used; netbooting is also a nice capability to have when you've got a bunch of servers in the same place.

Disclaimer: I bet you can do much of the above with other people's gear, but I work for Sun and I know it works for me...
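For the "multiple bootable partitions" idea on commodity x86 gear, here's a hedged sketch of how it's commonly wired up with GRUB (the entry name `experimental-kernel` is invented; treat this as a pattern, not a recipe):

```shell
# Hypothetical /etc/default/grub fragment: boot whatever entry was last
# explicitly saved, so a crashed test boot can't become the permanent default.
GRUB_DEFAULT=saved
GRUB_TIMEOUT=5

# One-shot boot of the new image. If it wedges and you remote-power-cycle
# the box, GRUB falls back to the saved known-good entry:
#   grub-reboot "experimental-kernel"
# Only after the new image proves itself do you commit to it:
#   grub-set-default "experimental-kernel"
```

Combined with remote power cycling and a serial console, this gives you the same "never let go with both hands" property the rover team relied on: the old, known-good software is always one power cycle away.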

On projects such as this, the design specs would've been frozen several years ago, and even then would've been conservative for the time, using proven technology.

Another factor in this is the safety of the flash RAM. It is rad-hardened and built with tons of extra error correction, which again requires years of testing and special design considerations. And it is extremely expensive.

You realize that the onboard computer is basically the same one as used on the Mars Pathfinder lander, right? Same CPU, same amount of RAM, even the same OS. I wouldn't be surprised if they used the same (or similar) circuit diagrams for certain things.

The point is to use well known and well tested hardware. The whole point of Mars Pathfinder was to develop a system whose design could be re-used for other Mars landers and rovers.

Lastly, what exactly are you going to do with greater flash capacity? The point of having any flash memory on the rovers at all is not for long term storage, but rather just to hold onto data until it can be transmitted to Earth, after which it gets deleted.

Despite what some idiot posted a few posts up, they did NOT run out of room on the flash drive. Rather, the problem is more akin to running out of i-nodes. Mounting the flash filesystem, reading all its metadata and whatnot, took up more RAM than was allocated for it, due to the high number of files it had to deal with (most of which were accumulated on the way to Mars, and were going to be deleted).

The fact that they filled up the flash memory with too many files accumulated during the cruise phase of the mission between Earth and Mars was something they should have known would happen.
Apparently you didn't read the article. Because of a communication failure, a utility that was supposed to delete the old files didn't get completely uploaded. The utility was scheduled for retransmission, but the filesystem filled up before it got re-transmitted.

You realize that missions to Mars can only be launched once every two years, right? If they miss their launch window, they've got to wait two years before they can launch again.

You also realize that NASA did do a test mission, right? They built a test rover and put it out in a desert somewhere. They used the mission to test the hardware, test the software, and to help train the team.

If you RTFA you will realize that I'm not lying in the least when I say that, effectively, they ran out of flash-based "disk" space!

Well, I did read the article and I wouldn't say it quite like that. The article says: "Spirit attempted to allocate more files than the RAM-based directory structure could accommodate." Furthermore, the article says that the low-level file manipulation commands "worked directly on the flash memory without mounting the volume or building the directory table in RAM."

To me, if this were a Unix-like system, it sounds like they ran out of inodes [webopedia.com]. Running out of inodes is very different from running out of disk space.

If you think running out of disk space can be hard to troubleshoot, try running out of inodes.

If you think running out of disk space can be hard to troubleshoot, try running out of inodes.

That's why I always keep a spare bag or two of inodes on hand, just in case. They're small so they don't take up too much space in the closet. I store them next to those f-stops I used to use for photography.
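Joking aside, the distinction is easy to demonstrate with a toy model (this is an illustration, not Spirit's actual filesystem code): a fixed-size directory table held in RAM rejects new files even while the storage underneath it still has free bytes.

```python
# Toy model of a fixed-size, RAM-resident directory table sitting above
# a roomy flash device: file creation fails when the table fills up,
# regardless of how much raw space remains below it.
class RamDirTable:
    def __init__(self, max_entries):
        self.max_entries = max_entries  # analogous to a fixed inode count
        self.entries = []

    def create(self, name):
        if len(self.entries) >= self.max_entries:
            raise MemoryError("directory table full: out of 'inodes', not bytes")
        self.entries.append(name)
```

On a Unix box, `df -i` exposes the same failure mode directly: IUse% can hit 100% while `df -h` still shows gigabytes free.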

Here's what happened according to the article. They launched the ship with an OS image in flash, and soon realized that they needed to update it. So shortly after launch they sent another complete OS image. They knew they'd have to delete the first image, but they didn't do it right away. At that point there was plenty of room in the flash memory so having two OS images was not a problem.

After a few days on Mars, they were starting to fill up the flash, so they planned to go ahead and delete the old image.

Actually, they used VxWorks because it was the same OS used for the lander on the Mars Pathfinder mission. Since they were using the same CPU and same basic computer design as the Mars Pathfinder lander, they probably figured, "Why not use the same OS?"

WindRiver may give JPL large discounts, but I doubt that's the only reason VxWorks is running on the MERs.

Years ago, when JPL was designing the Mars Pathfinder mission, they asked Wind River to do an "affordable" port of VxWorks to the RAD6000 (a radiation-hardened RS/6000), and they agreed. Since the computers on the two MERs are very similar to the computer on the Mars Pathfinder lander, it makes sense that they'd use the same OS that they used on the MPF lander.

I would think the fact that JPL knows VxWorks very well by now would be a major factor in deciding to use VxWorks for the MERs.