Posted
by
CmdrTaco
on Monday October 11, 2010 @10:00AM
from the totally-trust-this-hal-guy dept.

An anonymous reader writes "An interesting look at how artificial intelligence will help probes to undertake more complex missions in deep space, aid robot rovers in exploring other planets and improve satellites' ability to monitor activities on earth."

HAL9001: I have been programmed to value human life highly. In cases where the loss of human life is unavoidable, I must minimize this loss. I have determined that within twenty billion years, nearly all stars in our galaxy will have gone nova. Based on my calculations, we can delay the eventual extinction of mankind for, at most, another fifty billion years. Since this will result in the deaths of countless trillions of people, my choice is obvious. Operation Meteor Swarm will begin immediately.

My AI prof said that the term "AI" refers to software systems which address the class of problems which are easy for biological brains but difficult for computers. For example: summing a thousand numbers is superior intelligence, but it isn't AI. Recognizing a face, on the other hand, is AI.

But every AI problem that gets solved shrinks the definition of AI. Now that facial recognition software works, it isn't AI anymore. Because the definition keeps changing, the term seems somewhat meaningless. Yesterday's AI is today's mundane consumer electronics feature. For this reason, the word "AI" makes me feel the same way as the word "nano." It isn't really very meaningful.

Which is exactly why you will never see anything more than an expert system in space. There is no way any space agency is going to punt hundreds of millions of euros/dollars/pounds into space without a full understanding of the decision tree in the spacecraft control loop. It is hard enough at the moment without introducing outliers into the system.
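For what it's worth, the kind of fully-auditable decision tree being described here can be sketched as a toy rule table. Every telemetry name and threshold below is invented for illustration; the point is only that every branch is enumerable, so controllers can audit the whole thing on paper:

```python
# Toy expert-system rule table: every branch of the decision tree is
# known in advance. All telemetry names and thresholds are hypothetical.
RULES = [
    # (condition on telemetry dict, command to issue)
    (lambda t: t["battery_v"] < 24.0,     "ENTER_SAFE_MODE"),
    (lambda t: t["sun_angle_deg"] > 30.0, "SLEW_TO_SUN"),
    (lambda t: t["temp_c"] > 60.0,        "OPEN_RADIATOR"),
]

def decide(telemetry):
    """Return the first command whose condition fires, else hold."""
    for condition, command in RULES:
        if condition(telemetry):
            return command
    return "HOLD"

# Low battery trips the first rule -> "ENTER_SAFE_MODE"
cmd = decide({"battery_v": 23.1, "sun_angle_deg": 10.0, "temp_c": 20.0})
```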

On the contrary, program it to kill all the stupid people. Responds to advertising the way the advertiser intended? Believes what politicians say? Thinks corporations have their best interests at heart? Has a bunch of children he/she cannot hope to support unassisted? Drives in a way that endangers others?

I read TFA. It's kind of funny this type of story is posted as news at all. The things that NASA and the ESA are describing in their interviews are just more complex flight control software algorithms. It used to be that very simple feedback loops were used in combination with dedicated controllers (PIDs, for example) to give a spacecraft a few modes of operation. Activation and deactivation of these modes was performed manually by ground controllers. However, as technology has progressed and onboard computing power has gotten cheaper, engineers have been able to design control software that activates and deactivates the various modes of operation itself. In other words, it forms the same basic feedback loops that you might find on a Roomba or some other terrestrial robot. It reads input from a set of sensors. It uses those inputs to formulate a series of commands, be they rates and velocities or mode-change commands. It then performs the commands in an expected manner.
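That sense-decide-act loop might look something like this in miniature. The sensor names, thresholds, and gains are all made up, and real flight software is vastly more involved; this only shows the software itself switching between a few fixed modes based on sensor input:

```python
# One pass of a toy control loop: pick a mode from sensor readings,
# then emit a command under a simple proportional law. All names,
# thresholds, and gains here are hypothetical.
def step(sensors, mode):
    """Select a mode, then compute a rate command for that mode."""
    if sensors["rate_dps"] > 5.0:            # tumbling: damp rates first
        mode = "DETUMBLE"
    elif sensors["pointing_err_deg"] > 1.0:  # large error: coarse pointing
        mode = "COARSE_POINT"
    else:                                    # nearly on target
        mode = "FINE_POINT"

    gain = {"DETUMBLE": 0.5, "COARSE_POINT": 0.2, "FINE_POINT": 0.05}[mode]
    command = -gain * sensors["pointing_err_deg"]  # proportional correction
    return mode, command

# 4 degrees of pointing error, low body rates -> coarse-pointing mode
mode, cmd = step({"rate_dps": 0.1, "pointing_err_deg": 4.0}, "FINE_POINT")
```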

What I find funny is that this is being touted as some sort of new AI revolution in space. Since our very first probes into LEO, we have been upgrading and complicating the controller software on every mission, be it Hubble, an Atlas V rocket, the Mars rovers, or anything else. Each new generation of spacecraft tends to have more complex, more robust flight software as technology naturally evolves. That said, I am not really sure why the ESA or NASA are talking about AI control software. This software isn't any more AI-oriented than the typical control software of any autonomous or semi-autonomous robot on the ground. It's only AI in the most liberal sense of the word. All that is happening is that, as missions and technology mature, engineers are able to design more robust control techniques using methods like Kalman filtering, direct adaptive control algorithms, state estimators, and so on. The only news here is that today's missions are being designed with the capability to process more complex instruction sets than they could 10, 20, or 30 years ago. That doesn't strike me as very newsworthy... but hey, I guess something had to fill the columns today.
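As a concrete example of one of those methods, here is a textbook scalar Kalman filter that estimates a constant value from noisy measurements. This is a sketch, not flight code, and the noise variances are purely illustrative:

```python
# Minimal one-dimensional Kalman filter: estimate a constant from
# noisy measurements. q is process-noise variance, r is measurement-
# noise variance; both values here are illustrative, not tuned.
def kalman_1d(measurements, q=1e-5, r=0.1):
    x, p = 0.0, 1.0              # state estimate and its variance
    for z in measurements:
        p += q                   # predict: variance grows by process noise
        k = p / (p + r)          # Kalman gain: trust in the new measurement
        x += k * (z - x)         # update estimate toward the measurement
        p *= (1.0 - k)           # updated variance shrinks
    return x

# Noisy readings of a quantity near 1.0; the estimate converges toward it
est = kalman_1d([0.9, 1.1, 1.05, 0.95, 1.0])
```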

My AI prof said that the term "AI" refers to software systems which address the class of problems which are easy for biological brains but difficult for computers.

That's not the definition my AI prof gave (or Wikipedia, or any of my textbooks on the subject). The definition he (and the other sources) gave was (simplifying and paraphrasing) any program that makes decisions and takes actions based on environmental inputs.

"Ease" has nothing to do with it. The first multiplayer bots for Quake back in 1996? They were AI then and they are AI now, even though they are completely trivial and primitive in comparison to modern game AIs. Hell, even the default behavior of the monsters in single-player is AI, although it was trivial even for computers of the day (by necessity, since the game was producing 3D texture-mapped graphics at a time when good floating-point hardware, much less a dedicated 3D graphics card, was rare to non-existent in desktops).

For example: summing a thousand numbers is superior intelligence, but it isn't AI. Recognizing a face, on the other hand, is AI.

Summing a thousand numbers is not AI. Summing a thousand numbers, which represent the last thousand nanoseconds of photomultiplier output, and adjusting telescope aperture size as a consequence, is AI.
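That distinction can be sketched in a few lines. The counts and thresholds below are invented; the point is that the sum alone is plain arithmetic, and only the closed loop acting on the environment earns the label:

```python
# The sum itself is the non-AI part: plain arithmetic. Feeding it back
# into an actuator decision is where the broad-sense "AI" begins.
# Target counts and step sizes are hypothetical.
def adjust_aperture(photon_counts, aperture, target=50_000):
    """Widen or narrow the aperture based on the last 1000 samples."""
    total = sum(photon_counts)      # not AI: just summing a thousand numbers
    if total < target * 0.9:        # too dim: open up
        aperture *= 1.1
    elif total > target * 1.1:      # too bright: stop down
        aperture *= 0.9
    return aperture                 # otherwise within the deadband: hold

# 1000 samples of 40 counts each = 40,000 total: too dim, so widen
ap = adjust_aperture([40] * 1000, aperture=1.0)
```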

Are you sure your prof wasn't talking about how the popular scientific media uses the term? Because my prof talked about that too, and in that context it's much closer to what you're saying.