Interesting talk at Google by Jeff Hawkins of Numenta. They’ve built a scalable implementation of machine intelligence based on some of the latest research into how the brain works. What I love about their approach is that they’re clear that building machine intelligence doesn’t mean duplicating the brain in silicon… there are significant differences, and there’s no need to create a copy of the brain in a computer to get machine intelligence. If I lived in Silicon Valley, I’d be very interested in working for them (not that I have a Ph.D. in computer science)… very cool stuff.

The funny part for me was that as I was watching this, I was wondering (especially now that Ray Kurzweil works at Google), “What would Ray think of this?” And then the first questioner at the Q&A at the end of the talk was… Ray Kurzweil.

And while smart machines are already very much a part of modern warfare, the Army and its contractors are eager to add more. New robots — none of them particularly human-looking — are being designed to handle a broader range of tasks, from picking off snipers to serving as indefatigable night sentries.

…

Three backpack-clad technicians, standing out of the line of fire, operate the three robots with wireless video-game-style controllers. One swivels the video camera on the armed robot until it spots a sniper on a rooftop. The machine gun pirouettes, points and fires in two rapid bursts. Had the bullets been real, the target would have been destroyed.

…

“One of the great arguments for armed robots is they can fire second,” said Joseph W. Dyer, a former vice admiral and the chief operating officer of iRobot, which makes robots that clear explosives as well as the Roomba robot vacuum cleaner. When a robot looks around a battlefield, he said, the remote technician who is seeing through its eyes can take time to assess a scene without firing in haste at an innocent person.

Although sending robots rather than humans to do the fighting risks lowering the barrier to entry for war, our main threat (and therefore our main area of military focus) over the next 40 years is going to be terrorists, not other nations, and most terrorists won’t be able to afford these robots. Of course, they have suicide bombers, which enable some sophisticated kinds of attacks that aren’t otherwise possible.

Better they bomb a bunch of robots, for now.

One thing I wonder about… when the military has a significant number of these kinds of robots, and AI grows up, someone will mix that peanut butter and chocolate and we’ll have some military-specific robots that are capable of performing some sophisticated decision-making. When that happens, will we still be so cavalier about sending them into battle, when we know that they’re exhibiting identifiable signs of intelligence? And will we eventually have to establish a command structure in the military that includes AI-based commanders?

I’m guessing that, at some point in the next 40 years, we’ll have artificial intelligence capable of holding an officer’s rank, probably even General or Admiral.

P.S. Sorry I’ve been away… working on the book! I’ll try to do both now… walk and chew gum. I can do this.

According to Jonas Ekmark, a researcher at Volvo headquarters near Gothenburg, Sweden, this is just the start. He says we are entering an era in which vehicles will also gather real-time information about the weather and highway hazards, using this to improve fuel efficiency and make life less stressful for the driver and safer for all road users. "Our long-term goal is the collision-free traffic system," says Ekmark.

Ultimately, that means bypassing the fallible humans behind the wheel — by building cars that drive themselves. Alan Taub, vice president for research and development at General Motors, expects to see semi-autonomous vehicles on the road by 2015. They will need a driver to handle busy city streets and negotiate complex intersections, but once on the highway they will be able to steer, accelerate and avoid collisions unaided. A few years later, he predicts, drivers will be able to take their hands off the wheel completely: "I see the potential for launching fully autonomous vehicles by 2020."

And maybe I can get a job as the “lead driver”:

The most ambitious of these projects, a collaboration between seven European manufacturers and universities, would also allow up to eight cars, spaced a little more than a yard apart, to drive in convoy, controlled by a lead vehicle operated by a professional driver.

Ordinary drivers would book a place in convoys and hand over control of their car to software on the lead vehicle. Steering, acceleration and braking would be controlled by an on-board computer that uses data sent wirelessly from the lead vehicle, along with information from cameras and radar and laser detectors on the front and rear of the car itself.
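The control scheme described there — wireless commands from the lead vehicle blended with each car’s own radar measurements — can be sketched as a simple proportional controller. This is purely a toy illustration under my own assumptions (the function name, gains, and target gap are all invented, not from any real platooning system):

```python
# Hypothetical sketch of the follower-control loop described above: each car
# in the convoy blends data relayed wirelessly from the lead vehicle with
# its own radar measurement of the gap to the car ahead. A toy proportional
# controller, not any real platooning implementation.

TARGET_GAP_M = 1.0          # "a little more than a yard" between cars
GAP_GAIN = 0.8              # correction strength for gap error
SPEED_GAIN = 0.5            # correction strength for speed mismatch

def follower_acceleration(lead_accel, lead_speed, own_speed, radar_gap):
    """Acceleration command (m/s^2) for a follower car.

    lead_accel, lead_speed: received wirelessly from the lead vehicle
    own_speed:              from the car's own odometry
    radar_gap:              distance to the car ahead, from front radar
    """
    gap_error = radar_gap - TARGET_GAP_M      # positive = too far back
    speed_error = lead_speed - own_speed      # positive = falling behind
    # Feed-forward the lead's acceleration, then correct gap and speed.
    return lead_accel + GAP_GAIN * gap_error + SPEED_GAIN * speed_error

# Example tick: lead cruising steadily at 30 m/s, follower matching speed
# but drifted 0.5 m too far back.
cmd = follower_acceleration(lead_accel=0.0, lead_speed=30.0,
                            own_speed=30.0, radar_gap=1.5)
print(round(cmd, 2))  # 0.4: gently accelerate to close the gap
```

The point of the split is redundancy: even if the wireless link drops, the car’s own radar keeps it from closing on the vehicle ahead.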

Drivers will be able to work, read, watch films or even sleep while their cars are driven for them. "It will be like sitting on a bus or a train," says Ekmark. When the convoy nears an exit at which drivers wish to leave, they can resume control and continue their journey.

As long as I can continue to control my own car, I’m happy to see this. Most people are really lousy drivers, anyway. I’d be happier to see them not controlling their own cars.

Of course, this is just an intermediate step until we have AI-level computing, controlling cars whenever we want. No later than 2025, I’d say.

We’ll have the inevitable system failure with fatalities, and then the inevitable breathless news stories saying, “Are the new cars safe?!?” And then we’ll remember that yes, they’re way safer than humans driving, and we’ll move on, make better and better automated driving systems, and raise a generation of kids who have cars (solar-powered, of course) but have never actually driven. We’ll also have two different kinds of driver’s licenses: one where you’re allowed to conduct your own car, and one where you’re allowed to be in the car, but only if the AI drives.