
schliz writes "IBM's research director John E Kelly III has delivered an academic lecture in Australia, outlining IBM's 'cognitive computing' aims and efforts as the 70-year 'programmable computing' era comes to a close. He spoke about Watson — the 'glimpse' of cognitive computing that beat humans at Jeopardy! in February — and efforts to mimic biological neurons and synapses on 'multi-state' chips. Computers that function like brains, he said, are needed to make sense of the exascale data centers of the future."

Oh come on, let's be fair. If it turns psycho, it has a 25% chance of becoming a CEO. Either way, they sure as shit know how to make a lot of people a ton of money. Isn't that what we hire CEOs for? Even the psycho ones?

Just Google for the PBS Nova episode on Watson. It's not self-aware, but if it were, we would have HAL today! Mind-blowing achievement that I think gets too little attention. If only we could pair the Siri interface with Watson, and have him tie back to Google, Wikipedia, and Wolfram Alpha, the discoveries we could make would happen in weeks if not days.

I'm convinced this will be the case. The logic is almost there, and the hardware just needs a few more generations to shrink into a home PC.

Mind-blowing achievement that I think gets too little attention. If only we could pair the Siri interface with Watson, and have him tie back to Google, Wikipedia, and Wolfram Alpha, the discoveries we could make would happen in weeks if not days.

Oh boy; here we go again. As a cognitive scientist, I'm appalled by /. people buying into the hype.

Which means, in other words, that while they remain "logical", they can't comprehend us.

But computers might one day also be "illogical", in the sense, as Hofstadter put it, of "subcognition as computation": human thought as an emergent property of a highly interacting system. This cowboy in Washington is one query in millions. Here's a paper by Robert M. French that clearly shows some crucial distinctions (for people really interested in this debate, check out his papers on the Turing Test as well).

Frankly, not every biological intelligence wastes neural capacity on politics, let alone political slander-mongering, so yeah, it's reasonable to point out the political bias inherent in the question, and to question its value as some kind of politically correct faux-Turing test.

As a cognitive scientist (if that is indeed true), you really should do a little more research (beyond Hofstadter).

AI (AGI in particular) does not necessarily imply imitating humans. It's a bit of an anthropocentric slant to think that intelligence equates to the human implementation of intelligence. If a machine can exhibit the main features of intelligence (inference, learning, goal seeking, planning, etc, and other factors depending on your definition of intelligence), then it is, by definition, intelligent.

This is the same tired argument I've seen over and over again, but it's simply not true. While we don't have a consensus on a universally accepted definition of intelligence, most researchers agree on what such a definition must, at minimum, include (as I noted above: inference, learning, goal seeking, planning, etc). I don't think AGI will arrive as an announcement from some group that "AGI has been achieved!", but rather will creep into our technology over time and will probably not be accepted as true AGI until long after the fact.

Firstly, they really should have named Watson Multivac. Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have the machine magically write, compile, and debug exactly what we want: not likely. First of all, computers are very dumb. They are like idiot savants: what they do, they do very well and very quickly, but they are still stupid and need to be given very explicit instructions. They have a bad habit of doing exactly what you tell them and not what you mean.

If Watson can extract a reasonably sane relational object model of the information, then yes, it could produce the source code for that model.

I use MSS Code Factory 1.7 Rule Cartridges [sourceforge.net] to instruct my tools how to do it. Not very complex, actually, just non-trivial. It took a while to figure out how to do it, and a while longer to work through a few application architectures to figure out what works best. Now I'm working on the next step -- actually finishing a full iteration with web form prototypes and database integration.

You people really can't let something be on its own, can you? Just like in the 1860s: "The Negro can't take care of himself, so we'll put him to work. Give him four walls, a bed. We'll civilize the heathen."

Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have the machine magically write, compile, and debug exactly what we want: not likely.

If you read the Stantec ZEBRA's programming manual (from 1958), it tells you that there are two instruction sets and recommends that you use the smaller one dubbed 'simple code'. This comes with some limitations, such as not being able to have more than 150 instructions per program. This, it will tell you, is not a serious limitation because no one could possibly write a working program more than 150 instructions long.

Compared to that, a language like C is close to magic, let alone a modern high-level language.

That's a semantic distinction and it could be argued that a query IS a program (i.e. it invokes a set of programmed steps to produce a result).

The following "query" does something (inserts rows into the SaleableItems table) and makes a decision (saleable or not saleable):

INSERT INTO SaleableItems
SELECT CAST(CASE
              WHEN Obsolete = 'N' OR InStock = 'Y' THEN 1
              ELSE 0
            END AS bit) AS Saleable,
       ItemNumber
FROM Product
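As a sketch of the point, the query above really does both branch and act. Here it is run through Python's sqlite3 module; the Product table and its rows are invented for illustration, and the CAST(... AS bit) (which looks like SQL Server syntax) is dropped because SQLite just stores plain integers:

```python
import sqlite3

# In-memory database with a hypothetical Product table; the column names
# mirror the query above, the data itself is made up.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Product (ItemNumber INTEGER, Obsolete TEXT, InStock TEXT)")
con.execute("CREATE TABLE SaleableItems (Saleable INTEGER, ItemNumber INTEGER)")
con.executemany("INSERT INTO Product VALUES (?, ?, ?)",
                [(1, 'N', 'Y'),   # current and in stock   -> saleable
                 (2, 'Y', 'N'),   # obsolete, out of stock -> not saleable
                 (3, 'Y', 'Y')])  # obsolete but in stock  -> saleable

# The "query" both decides (CASE) and acts (INSERT).
con.execute("""
    INSERT INTO SaleableItems
    SELECT CASE WHEN Obsolete = 'N' OR InStock = 'Y' THEN 1 ELSE 0 END,
           ItemNumber
    FROM Product
""")

print(con.execute(
    "SELECT Saleable, ItemNumber FROM SaleableItems ORDER BY ItemNumber"
).fetchall())
# [(1, 1), (0, 2), (1, 3)]
```

Whether you call that a "program" is exactly the semantic distinction in question, but it certainly invokes a set of programmed steps to produce a result.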

Secondly, as for the idea that we will someday no longer need programming languages and can simply state what we want and have the machine magically write, compile, and debug exactly what we want: not likely.

Not likely? There's a whole FAMILY of languages that do just what you describe. Ever coded in SQL? LISP? Scheme? It's called declarative programming, the gist being that you tell the computer what you want, not how to do it.
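The what-versus-how contrast can be shown in a toy example: the same sum computed imperatively (spelling out every step) and declaratively (stating the result wanted, via SQL in sqlite3). The table name and numbers here are invented:

```python
import sqlite3

prices = [(1, 20), (2, 5), (3, 42)]  # (id, price) -- arbitrary sample data

# Imperative: spell out HOW -- loop, compare, accumulate.
total = 0
for _, price in prices:
    if price > 10:
        total += price

# Declarative: state WHAT you want; the engine decides how to get it.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE prices (id INTEGER, price INTEGER)")
con.executemany("INSERT INTO prices VALUES (?, ?)", prices)
declared = con.execute("SELECT SUM(price) FROM prices WHERE price > 10").fetchone()[0]

print(total, declared)  # 62 62
```

The SQL line never mentions a loop or an accumulator; the query planner supplies the "how", which is the whole point of the declarative family.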

On the other hand, what's the difference between giving a human an order in English and giving a computer an order in a programming language, besides the fact that a computer can be trusted to obey as best it can?

This is kind of off topic, but this reminds me of an article I read (maybe in Time magazine) that was about how in the next 40 years or so we will have computers powerful enough to emulate a human brain. The point of the article was that once we reach that capability, humans will basically become immortal, because we would just copy our brains onto a computer and not have to worry about our fragile organic bodies failing on us.

It's very interesting to think about all the effects a breakthrough like that would have on humanity, but I also wonder if something like that is even possible. Just because we can emulate the human brain doesn't mean we can transfer information off of our current brains. Even if we can transfer the information, will our consciousness with a computer brain be the same as our consciousness with an organic brain, or will we experience the world completely differently than we do now? Once we have eternal life as computers, do we even bother reproducing anymore? If our only existence is as pieces of data in a computer, are we even human at that point? And is the real way humans wind up going extinct just the result of a power outage at the datacenter where we keep our brains?

Like I said, this was pretty off topic. But the title reminded me of that article I read. This [time.com] might be it, I'm not sure though.

I imagine mind uploading would have to be by destructive readout: destroy the brain in order to extract the information from it. Getting the kind of resolution required for scanning is going to take a nanotech revolution too - if you just sliced it up and used conventional microscopes, it would be time-prohibitive.

I think it's reasonable to assume that once we have the technology to mimic a human brain (the most complex part of the human anatomy?), we would probably be able to have completely artificial bodies (including reproductive capabilities).

This is certainly not a new idea. It is sometimes referred to as the "rapture of the nerds" version of a technological singularity [wikimedia.org]. Ray Kurzweil [wikimedia.org] is a big fan of the idea and one of the major proponents.

As to the actual feasibility, I ran across Whole Brain Emulation: A Roadmap [ox.ac.uk] a little while ago, which discusses the possibility given our current knowledge of how the brain works. It provides estimates of how long Moore's Law would have to continue, based on varyingly optimistic assumptions about how much work is required.

Computers are already considerably more powerful than human brains at certain tasks, but they work in a completely different way. The way they work hasn't changed since the first steam and valve driven computers were developed; they are just a lot smaller, can deal with a lot more data at one time, and do it a lot faster. They just blindly follow the instructions given to them by the programmer, and there is no way you could program one to invent some completely new thing that nobody has ever thought of before.

Why not? It's already possible to program computers to learn, within narrow limits, through processes like neural network training. There is no theoretical reason why a computer could not be programmed with all the cognitive ability of a human; it is merely an engineering task which has thus far proven insurmountable. Given enough research, more powerful hardware, and the attention of a few geniuses to make the vital breakthroughs, it should be achievable.
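To make "learn, within narrow limits" concrete, here is the classic toy case: a perceptron taught the AND function in plain Python. Nothing here is hard-coded to AND; the same loop learns any linearly separable function from examples. The learning rate and epoch count are arbitrary choices:

```python
# Training data for logical AND: ((input pair), target output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, start knowing nothing
b = 0.0          # bias
lr = 0.1         # learning rate (arbitrary)

def predict(x):
    # Fire (output 1) when the weighted sum crosses the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Perceptron learning rule: on each mistake, nudge weights toward the target.
for epoch in range(20):
    for x, target in data:
        err = target - predict(x)   # -1, 0, or +1
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

print([predict(x) for x, _ in data])  # [0, 0, 0, 1] -- it has learned AND
```

It's a trivial instance, but it is genuinely behavior acquired from data rather than spelled out by the programmer, which is the distinction the comment is making.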

Computers don't "learn". They collect data and are instructed in how to use that data in calculating the answer to future problems. The theoretical reason why they can't be programmed with the cognitive ability of a human is that computers use Boolean algebra and human brains don't. Humans have things like emotions, which can't be programmed using existing assembly language.

Until we understand how a brain actually works and how the brain of a genius works differently from a normal person's brain, we would be simulating a dead brain, or the brain of someone in a persistent vegetative state.

Again, I'm not saying that this is impossible, just that we are not any closer to doing it than we were in the 1940s.

This is kind of off topic, but this reminds me of an article I read (maybe in Time magazine) that was about how in the next 40 years or so we will have computers powerful enough to emulate a human brain. The point of the article was that once we reach that capability, humans will basically become immortal, because we would just copy our brains onto a computer and not have to worry about our fragile organic bodies failing on us.

You'll have to resolve the unresolvable "transporter problem" raised by Star Trek: if we create an atom-by-atom copy of your body and brain, and then destroy you, does your consciousness transfer to the copy? Or do you just end? Either way, the copy is going to insist that he is you and that there was no interruption in consciousness... but he would say that simply because of his memories.

Also, what kind of weird stuff would happen if we just started duplicating ourselves the same way you can duplicate an operating system installed on a computer? We could wind up with millions of copies of our brains all existing at the same time, having conversations with each other.

Have you considered another scenario? Just 2 weeks ago I posted this in an article about artificial brain cells:
Every day, replace some brain cells in a human with artificial ones. Take five or six years, and replace every cell he/she has. At what point does this become artificial intelligence? Would the consciousness of said person survive the transition? If you succeeded, would an exact copy of the result also be conscious? I don't think I'd volunteer, but I'm sure someone would.

With the end of the desktop, it makes sense that the end of "programmable computing" is at hand (followed surely by the year of Linux on the desktop). That said, imagine how amusing it would be if there were a union to protect programmers (hah, no more 100-hour weeks!). I can see them working to protect the jobs this inevitable innovation will extinguish. Whatever; on to the next thing, until every useful human task, including innovation itself, is taken over by the machines.

It's something you hear about from time to time: the end of programmers. It was a big topic in the mid-90s, for example, when languages like Visual Basic and AppleScript were supposed to bring programming to the masses. There's a story I think of whenever I hear that kind of talk, to remind myself my job is probably safe:

In the early 1970s, there was a conference on microprocessors, and one of the presenters got really superlative when he was talking about how big sales would be. One of the tech guys scoffed.

The tools become more and more powerful and do more and more of the "grunt work" of programming, but I've yet to see or hear of a tool that can automate the creativity needed for implementing business logic, appealing GUI design, useful/readable report layouts, etc.

As pleased as I am with my own tools, I still wouldn't be so foolish as to claim they could eliminate the need for programmers. The hordes of copy-paste-edit juniors, yeah, but those aren't programmers -- they're meat-based xerox machines.

The idea that desktop computing is dead and that we are in a post-PC world makes me giggle. Just where do people think the programs for these phones and tablets are going to be made? I would like to see someone try writing even a small program on an iPhone. Writing, compiling (which can take a long time even on a decent desktop, and would be unimaginable on a phone), and debugging on such a form factor would be ridiculous. And there are thousands of uses for a PC that would be horrible on a tablet: all office work, for starters.

BTW: if you have the means, try to go a week without using a desktop. I tried to do it and failed 35 hours into the experiment. You just can't be productive on tablets, even with the silly keyboards, no matter how many of them you have.

Some of us used to use Borland C at 640x480 resolution on a text-based EGA/VGA screen without a mouse. If an iPhone had a keyboard, then it might be possible -- maybe a clamshell-style iPhone with dual screens, like the Nintendo 3DS.

The human brain is remarkable, but it is also loaded with problems. We expect computers to be exact, and we raise hell [wikipedia.org] when they're off by even the slightest amount. Brains, on the other hand, make errors all the time. If we tried to use anything brain-like to do even a small fraction of what we use computers for today, they would be totally inadequate. A much better option, IMHO, would be to use programmable computers for the vast majority of tasks, and brain-like computers only for the things the programmable ones handle poorly.

First article not found. The second article says neuroscientists and computer scientists are approaching brain emulation from different angles for different reasons, and that (despite sour grapes from the former group) IBM's achievements in this area are "a milestone in computing" and "deserves its accolades in full". That sounds more like glowing praise than serious questioning.

From what I've learned, your self-awareness, long- and short-term memory, and a fear of death equate to sentience. If you wanted to copy someone, it would be easy, but from that point on you would just have everything in common until you started to make different choices.

If instead there were a way to slowly replace your brain with mechanical/electrical/nanotech components over a period of time, then you would be more or less the same person with the same consciousness.

I think by "consciousness" you are referring to one's functional memory. But "consciousness" is usually used to refer to one's awareness - i.e., one's soul.

In any case, I understand your point. Transferring one's memories would not necessarily transfer one's consciousness ("soul"). Instead, one would merely have a copy. Since we have essentially no understanding of what consciousness (the "soul") is, we cannot transfer it, or even know if it can be transferred.

IBM seems to think that if you only had a sufficient number of neuron-like (whatever that may be) connections, a brain (whatever that may be) will automagically appear.

There's no good reason to have blind faith in this notion, and it's no more likely to pan out than the 60-plus years of fabulously wild predictions about what computers will do in the next n years.

But it's not impossible, and three cheers for IBM for throwing wads of cash into the game. It'd be great if other big outfits chased dreams.