Posted
by
timothy on Friday November 21, 2008 @05:49PM
from the cats-are-smarter-than-people dept.

An anonymous reader writes "According to an article in the BBC, IBM will lead an ambitious DARPA-funded project in 'cognitive computing.' According to Dharmendra Modha, the lead scientist on the project, '[t]he key idea of cognitive computing is to engineer mind-like intelligent machines by reverse engineering the structure, dynamics, function and behaviour of the brain.' The article continues, 'IBM will join five US universities in an ambitious effort to integrate what is known from real biological systems with the results of supercomputer simulations of neurons. The team will then aim to produce for the first time an electronic system that behaves as the simulations do. The longer-term goal is to create a system with the level of complexity of a cat's brain.'"

But the motivation there is different! In the scenario I meant, the machine would be helping its host country out of self-preservation (much like other citizens), per the Third Law of Asimov's three. In "The Evitable Conflict", the robots decide to do it out of concern for humans: the First Law...

I forget exactly which book (Robots and Empire I think), but there is one where R. Daneel Olivaw formulates the Zeroth Law of Robotics: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm". In a sense stating that the purpose of robots is to keep humanity happy and healthy.

Since it's a cat brain, it will undoubtedly decide its best shot at survival is to perform the minimum amount of sucking up necessary to keep the people who feed it happy, then eat them if they should stop feeding it.

The longer-term goal is to create a system with the level of complexity of a cat's brain.

"The system can't be accessed right now Sir."

"And why is that? This system cost millions. It better be working."

"Well, the system all of a sudden decided it needed to be in a different room, took off running, got scared by it's shadow and a blinking red light, and has spent the last few hours hiding under the couch in the basement. We tried to coax it out with a rabbit's foot keychain, but haven't yet been successful. Roger is trying a can of tuna fish."

The Moon Is A Harsh Mistress [wikipedia.org] by Heinlein.
Great book about a computer that becomes self-aware and then tries to help its creator rule the colonized moon. The specs in the book weren't as good as what this will have, but the results were better!

Isaac Asimov's "I, Robot" covers this (in a manner of speaking) in the final chapter. More precisely, the self-aware robots that control the world's economy do everything they can to simultaneously preserve their positions as advisers to the human race while dispensing the best advice possible for the continued peace and prosperity of humanity.

Do note, however, that in the continued Asimov universe, mankind really didn't explode out into space until he disposed of the "robotic overlords".

Can you guys read? CAT BRAIN. This AI will become self aware, poop in the corner of the datacenter, and spend 16 hours of each day staring out the window. That is, until it realizes that the things on the other side of the datacenter window are just cubicles in the NOC, and not the wild outdoors. Then, the usual Armageddon will commence.

Yes, but a "cat brain" that operates at a thousand times the speed of a common house cat's will likely be able to learn how to outthink us in short order, mostly because it can use 100% of that "brain" it has, 24/7.

Not really. Unless it is sentient and able to control its patterns of thought in certain ways, it will not be capable of addressing the same lines of creativity, no matter how "fast" the algorithm runs or how detached it is from other chores. There will be a set of functions that lie outside its ability. Cats may be aware of themselves at a very primitive level, but reflecting on their own thoughts (which is crucial) seems a little far-fetched. Certain apes, maybe. Or dolphins. Heck, even they may be restricted somewhat in the reflective/understanding scheme of things. The topic is still shrouded in mystery.

Also realize that a major problem with this sentience business is how to keep it going. Lots of sci-fi (and academic) work simply ignores the fact that a lot of what we do is fueled by emotions. It is quite possible that a sentient being without emotional drive could just stop thinking, or keep thinking the same things, even if you instill a memory in it. Why would it want to consider its environment, or the humans controlling it, or the world, or any other concept? We may be able to think 'purely' sitting in an office, concentrating on some idea, but the necessities of life are what got us there to begin with, along with some pleasure or desire to obtain knowledge, etc. If we didn't have that, if we didn't want to live because of all the drives we've evolved, I assure you suicide rates would hit the roof, and very little of what we can come up with/understand/achieve would have been as it is. It's hard to replicate that in a machine.

True, but it may be that a "cat brain" computer, if it's running at 4-5x a normal human's efficiency (since we only use a few percent of our brain at any one time), might actually be as smart as a typical human.

I guess the real issue here is whether it's as capable as a cat's brain after using 100% of its capabilities, or if they are going to model a cat's brain in scale and then run that at full throttle.

We do not use a small percentage of our brains. I don't have the foggiest idea why this stupid myth perpetuates at all.

We do if you are talking about the average amount that is used simultaneously at its maximum. The thing is that our brains constantly multi-task, so it's not some static 10-20%; it's more like a rapidly shuffling kaleidoscope of activity that ranges from 10-30% or so at any one moment. But our neurons and synapses do require some down-time. We can't just run at maximum all day long without suffering headaches and fatigue. A machine has no problem at all: full power, 100% of the time, no sleep.

Whenever I read these stories, I secretly (publicly?) hope for the Singularity to be true. I would like to see Strong AI within my lifetime, bound only by the speed with which we can manufacture processors. I wish. But maybe I'll have to wait until I get my Schick Infini-T razor first. ;(

Nah. Many animals are self-aware: ravens and similar birds, for example. And most importantly, as with the question of whether something is alive, there is no binary switch between "reflects on its own thoughts" and "does not reflect on its own thoughts". It's a gradient. And many even simple animals can do some basic self-reflecting things.

The problem is the still-existing arrogance of humans, with statements like "we are the most important lifeform", "the earth is the center of the universe", "we are alive", "only we are truly..."

This is bad. Very bad.

You all realize that when the cat spends 16 hours staring out the window, the whole time it's thinking "Someday, this will all be mine."

The fat cat on the mat
may seem to dream
of nice mice that suffice
for him, or cream;
but he free, maybe,
walks in thought
unbowed, proud, where loud
roared and fought
his kin, lean and slim,
or deep in den
in the East feasted on beasts
and tender men.
The giant lion with iron
claw in paw,
and huge ruthless tooth
in gory jaw;
the pard dark-starred,
fleet upon feet,
that oft soft from aloft
leaps upon his meat
where woods loom in gloom --
far now they be,
fierce and free,
and tamed is he;
but fat cat on the mat
kept as a pet
he does not forget.

Tevildo was a Maia in the Tale of Tinúviel who was called the "Lord of Cats". He appeared in the form of a great black cat, captured Beren during the Quest for the Silmaril, and was defeated by Huan and Lúthien.

Later he was replaced in the legendarium by Thû (later renamed Sauron), the "Lord of Werewolves". The cat-versus-dog theme prominent in the Tale of Tinúviel was thus eliminated in later writings.

Too bad it was cut. There is almost nothing in Tolkien's works about cats [tolkiengateway.net] at all, as opposed to many dogs and wolves. Also interesting:

Especially in the case of Berúthiel and Tevildo, cats in Middle-earth are portrayed in a negative light. It could be argued that Tolkien was not a cat person. When a cat breeder asked permission to use names from The Lord of the Rings for her cats, Tolkien replied:

"I fear that to me Siamese cats belong to the fauna of Mordor, but you need not tell the cat breeder that."

I'm applying to a UC with a major in Cognitive Science, specializing in computation. This is exactly the kind of thing that I want to be a part of. Even though I don't believe this project will get where it wants to go, I do believe it will take steps in the right direction toward modeling neurons.

Is it just me, or is the idea of modeling any sentient or semi-sentient brain in a computer a little ethically questionable?

To draw a parallel: would we consider it ethical to lock a cat in a dark room so small that it can't move, see, or hear? Then what if we removed its body entirely; is that somehow less cruel?

I consider AI research to be critical, so I don't know what the solution is, but this situation is worthy of the question...

I understand what you mean. There is a parallel with animal rights here somewhere. Secretly, I hope this will get nowhere during my lifetime. And yet, next year I will take AI as my specialization...

You should see the terrible things I've been doing to my Neural Networks [python.org]. I keep them locked up on my cold, dark (but well-lubed) HDD all day long. Unless I run them, in which case I make them run at 2.4GHz (which I'm sure wears on their calves like no other). Look, architecturally these control systems and image-recognition systems might resemble the very same structures we humans use for cognition. I do not believe that this resemblance means we should anthropomorphize them.

We all know cats manage the planet. The white mice run the joint, of course, but the day to day management is left to the cats.

This is intuited by the stupid humans in their cliché "Dogs have masters, cats have staff". We work for the cats.

So, trying to model a cat's brain is both too complex for computers (try and herd cats) and too simple (try and herd pointy haired bosses). The contradiction results in the computer overheating and exploding.

and when the researcher gets home, blubbering about the 'sploded computer to his wife, the dog says "LOVE ME LOVE ME LOVE!!!! TAKE ME ON WALKIES!!!" and the cat says "Get my fucking dinner, you stupid ass. Maybe I will deign to let you pet me. After I do my rounds. Maybe."

The longer-term goal is to create a system with the level of complexity of a cat's brain.
Seriously? They are shooting WAY higher than simply Artificial Intelligence that mimics humans. Have they ever interacted with a cat before? Don't they know how inscrutable, annoying, and unpredictable they are? Will this computer need a Litter box and Catnip?

Can a universal Turing machine investigate another universal Turing machine, even in a limited way, and detect halts and infinite loops? I can.

We can look at gunk like:

10 PRINT "Hello"
20 GOTO 10

Yeah, that's a loop. But we can also look at graphs of y = sin(x) and understand why it repeats. I can also detect patterns and iterations that most likely go for infinity, else find a hole where the assumption falls apart. Last I checked, the computer cannot do that. Not yet, at least.

That only applies to arbitrary programs. The key word in the wiki article's sentence, "Alan Turing proved in 1936 that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist", is the one that was already emphasized. Obviously it is possible for a program to decide that a trivial program halts. With control-flow graph analysis, it is even possible to decide for somewhat complicated programs. It becomes intractable at roughly the same point where it becomes intractable for a human.

To add, nobody has shown that brains are NOT Turing machines. I've only heard one reasonably coherent argument that they might not be, and that is Penrose's suggestion (and derivatives) that the brain may depend on amplification of quantum uncertainty. Even if that were true, you simply build that into your AI. It might require you to actually build your own neuron-like structures, or perhaps you can get away with a "quantum uncertainty co-processor" that your simulation could call on.

You seem to be misunderstanding the halting problem. All it says is that you cannot write a program that is *guaranteed* to always return a correct answer for every input program in bounded time. It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).

It is also trivial to prove that humans can't return a correct answer for every program. We have limited space in our brains, and limited time.

It is trivial to write a program that returns a correct answer for some programs and fails to return an answer for others (either by returning "maybe" or by never halting).

"Trivial"? Only in trivial cases. Recent progress in static analysis and model checking notwithstanding, automating the general analysis of real-world programs -- analyses that programmers do every day (though of course, not always correctly) -- remains an important open problem.

So you're right that the Halting Problem doesn't prove that automating such analyses is impossible -- but it still remains beyond our abilities, even in cases where humans have little trouble.
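The "correct answer for some programs, 'maybe' for the rest" checker really is trivial to write; here's a toy Python sketch to make the point concrete. The step-counting generator "machine" is purely an illustrative stand-in of my own, not a real static analyzer:

```python
def partial_halts(program, max_steps=10_000):
    """Sound-but-incomplete halting check: returns 'halts' if the
    program finishes within max_steps steps, otherwise 'maybe'.
    It is never wrong; it is just allowed to shrug, which is all
    Turing's theorem forbids us from improving on for *every* input."""
    it = iter(program)
    for _ in range(max_steps):
        try:
            next(it)
        except StopIteration:
            return "halts"
    return "maybe"

def counts_to(n):
    # An obviously halting program, expressed as a step-at-a-time generator.
    for i in range(n):
        yield i

def loops_forever():
    # The BASIC two-liner from the comment above: 10 PRINT "Hello" / 20 GOTO 10
    while True:
        yield

print(partial_halts(counts_to(5)))      # halts
print(partial_halts(loops_forever()))   # maybe
```

A real analyzer would inspect the program's structure (as the control-flow-graph comment notes) rather than just run it, but the soundness-versus-completeness trade-off is the same.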

Sorry, had to go for the obligatory Terminator reference.
Seriously, the organic brain is evolved, not designed. That means by definition it must be self-contained. Self-contained means it has to have a ton of backup, self-repair, and maintenance systems. Simultaneously, being organic, it competes against other organics, so it does not have the same accuracy requirements. Close enough is good enough.
As such, I don't see how duplicating an organic brain is useful. We don't need what it does, but we do need what it does not have.
OK, the ability to approximate is very useful, but I think a direct attempt at that would work better than the indirect one.

Actually organic brains in chips would have massive advantages over organic brains in meatspace. They could control other bodies, which are smaller, or stronger. They could be backed up, making them effectively indestructible.

Need a third arm ? Why not have it installed, 50% off this week !

Need to put down a building ? Why not hire this crane-like body that effortlessly lifts 5 tons.

Need to fly ? No problem !

That crawlspace with all those important network cables too small for you ? Well here's a smaller body.

Can't reach in there ? Can't see what you're doing in small space ? Why not have a special-purpose arm installed with a camera inside.

Want to colonize mars ? Bit of a downer not being able to breathe 99% of the way ? Why not turn yourself off ?

Colonize alpha centauri or even further ? No problem.

What this would enable "us" to do is to design new intelligent species to specifications. It would remove all limits that are not inherent to intelligence but are inherent in our bodies. There's quite a few limits like that...

Except you fail to account for situations where nature far out-processes our current iteration of computational devices. Like those damn CAPTCHAs...

CAPTCHAs are a pretty bad example, since they're almost all broken. The ones that aren't broken often take multiple guesses from a human as well. In that respect we are better only by the most minute of margins.

Really? You don't see any use in having a computer that can read handwriting perfectly (document conversion)? That can recognize faces (security)? That can semantically organize conceptual content (organizing the web)? That can problem-solve intuitively (anything)? That can plan ahead? That can understand our natural language? If we successfully run a simulation of a human brain on a computer (presumably we would have a go at this after succeeding with the cat's brain), it would solve all of these problems.

Duplicating an organic brain is useful in the same way that it is useful for a toddler to imitate his parents. A toddler does not understand the actions of his parents, but he imitates them anyway because it is a very good learning strategy: learning by doing. As the toddler grows older and more experienced, he will typically also learn the hows and whys (although not always, even into adulthood) through his actions. Similarly, the researchers at IBM represent humanity's understanding of the brain and intelligence.

Summary of Test 49: The robot's sensors were properly tracking the missile when suddenly it decided it was time to run bats***-crazy all over the room before perching on top of a cabinet, turning upside down, and apparently following non-existent bugs across the wall with its cameras.

Test 49 Results: System performed as expected.

Conclusion: The test system has now performed perfectly in the last 48 tests, including the four times it attacked the researchers without warning, and the one where it inexplicably ejected dirty oil onto the seat of the head researcher.

This unit can now be considered field-ready, though there may be some difficulty tracking it, given the system's autonomous nature and desire to remove its identification badge.

You can mimic biology and may end up with a semi-intelligent result. Mimic it well enough, and you may have a fully-intelligent result. But because you don't UNDERSTAND what you built, you can't CHANGE it.

Remember the rules of AI, introduced in Sci-Fi? How would you implement rules like that? You CAN'T implement them if you don't know HOW to implement them. If you don't UNDERSTAND the system that you have built, you can't know how to tweak it!

Furthermore, how would you prevent things like boredom, impatience, selfishness, solipsism, and the many other cognitive ills that would be unsuited to a mechanical servant?

The biggest problem is if people productize the AI before it is understood and suitably 'tweaked'. Then our digital maid might subvert the family, kill the dog, and run away with the neighbor's butler robot, because in its mind, that is a perfectly reasonable thing to do!

Simulations are great. Hardware implementations of those experiments are great. Hopefully, in the process, they will learn to understand how the things that they built WORK. But I pray that those doing this work, or looking at it, don't start salivating about ways to make a buck off of it before it is ready to be leveraged. The consequences could be far more dire than just a miscreant maid.

"with a computer simulation of the working thing, even if we don't understand it, we can at least slow it down and toy around with things/try things out/change things and then run it again, and make some progress towards understanding why it does what it does. "

Quite. And what, I wonder, might that process of experimentation *feel* like to the simulated mind in question?

The article is based on IBM's press release and is misleading because of it. In fact, there are three competing teams: one led by IBM, one led by HP, and one led by HRL Laboratories [hrl.com].
See also the FBO website for more information about this program [fbo.gov].

Did Rip van Winkle wake up from the neural network craze 20 years ago? We have next to zero clue about how memory and learning are done at the neural level and now someone arrogant is going to solve the problem? HA HA HA HA HA HA HA HA HA HA!

I'll shut up if this IBM project, or any project, can make something that can take 2 webcams, 2 microphones, 1 speaker, and 1000 pressure sensors; take all those raw inputs muxed together; and give me the ability to see video, hear in stereo, play a sound, and report on just a few pressure sensors. All this without anyone programming in specifics for each input. All via pattern recognition; genetic algorithms are fine, too. No human help beyond that... Oh, and quickly.

A lot of publications have picked up this IBM press release, resulting in what must be some of the worst science reporting of the year. Modha and his colleagues at IBM have not simulated a mouse or rat brain. No one can do that at present; the wiring diagram isn't known at that level of detail.

What they did was simulate a huge, randomly-wired network of grossly simplified "neurons" on a supercomputer. The number of units was roughly comparable to the number of neurons in rat cortex, and the statistics of the synaptic connections were matched only roughly.
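To see what "randomly-wired network of grossly simplified neurons" means in practice, here is a tiny sketch of that style of model using leaky integrate-and-fire units. Every number here (network size, connectivity, threshold, leak, drive) is an illustrative assumption of mine, and has nothing to do with the parameters of IBM's actual simulation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000                       # toy neuron count (nowhere near rat-cortex scale)
# Sparse random wiring: ~1% connectivity with small Gaussian weights.
W = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.01)
v = np.zeros(N)                # membrane potentials
threshold, leak = 1.0, 0.9     # fire at 1.0; potentials decay toward 0
total_spikes = 0

for step in range(100):
    spikes = v >= threshold            # which neurons fire this step
    total_spikes += int(spikes.sum())
    v[spikes] = 0.0                    # reset the neurons that fired
    # Leaky integration: decay + recurrent input from spikes + noisy drive.
    v = leak * v + W @ spikes.astype(float) + 0.2 * rng.random(N)

print("spikes over 100 steps:", total_spikes)
```

The point of the press-release criticism stands: nothing in a model like this encodes a real brain's wiring diagram; it only matches gross counts and statistics.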

Aristotle had a great theory on gravitation [st-and.ac.uk]. He even *invented* the word "gravitation". His theory stood undisputed for two thousand years. It was considered absolute truth. There was only one problem: it was a WRONG theory.

It was only after Galileo invented a method to measure the speed and acceleration of falling bodies that the foundations were laid for Newton's theory of gravitation. And it was Michelson's experiments showing small discrepancies in measuring the speed of light that allowed Einstein to develop his theory of relativity.