Posted by ScuttleMonkey on Monday January 11, 2010 @05:25PM
from the not-so-hard-hardware dept.

A new type of "wet computer" that mimics the actions of neurons in the brain is slated to be built thanks to a €1.8M EU emerging technologies program. The goal of the project is to explore new computing environments rather than to build a computer that surpasses current performance of conventional computers. "The group's approach hinges on two critical ideas. First, individual 'cells' are surrounded by a wall made up of so-called lipids that spontaneously encapsulate the liquid innards of the cell. Recent work has shown that when two such lipid layers encounter each other as the cells come into contact, a protein can form a passage between them, allowing chemical signaling molecules to pass. Second, the cells' interiors will play host to what is known as a Belousov-Zhabotinsky or B-Z chemical reaction. Simply put, reactions of this type can be initiated by changing the concentration of the element bromine by a certain threshold amount."
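
For the curious: the "threshold amount" behavior the summary describes is the hallmark of an excitable medium. Here's a rough sketch using the FitzHugh-Nagumo equations - a generic textbook model of excitable systems, *not* the actual B-Z chemistry, with standard textbook parameters rather than anything from the article - showing how a sub-threshold nudge dies out while a supra-threshold one triggers a full excursion:

```python
def fitzhugh_nagumo(v0, steps=4000, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Integrate the FHN equations with forward Euler from (v0, w_rest)."""
    v, w = v0, -0.625           # w_rest = (v_rest + a)/b, with v_rest ~ -1.2
    peak = v
    for _ in range(steps):
        dv = v - v**3 / 3 - w       # fast "activator" variable
        dw = eps * (v + a - b * w)  # slow "recovery" variable
        v += dt * dv
        w += dt * dw
        peak = max(peak, v)
    return peak

# A sub-threshold perturbation just relaxes back toward the rest state...
print(fitzhugh_nagumo(-1.0))   # peak stays well below 0
# ...while a supra-threshold perturbation fires a full spike.
print(fitzhugh_nagumo(-0.5))   # large excursion, peak well above 1.5
```

Same qualitative picture as the B-Z cell: nothing happens until the perturbation crosses a threshold, then you get an all-or-nothing response.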

Too soon man! These aren't neurons or real cells, and won't be susceptible to viruses (the biological type). Sounds like they're just micelles or maybe liposomes [wikipedia.org].

Now see, had you waited a few years until they realized that there's no point in reinventing the wheel - that they should just use actual neurons to make a wet computer rather than these things (which presumably don't replicate or repair themselves and probably won't be as efficient) - then that would be really funny. Except now it won't be. Ruined.

I'll have to change my opinion that we won't ever have true artificial intelligence. A chemical-based computer could possibly become intelligent. After all, thought itself is only an electrochemical process.

But of course! A computer is not defined as an array of silicon transistors, a computer is any device which processes information. That includes everything from mechanical calculators to the human brain.

Can you cite a reason why silicon-based systems shouldn't be as capable as carbon-based ones? Silicon-based systems have developed at a blistering pace compared to carbon ones. (Though I admit that they have the advantage of actually having intelligent designers...) I mean, life has a head start of a few billion years!

I can't speak for the GP, but I think we'll exploit properties we don't fully understand (say, by growing neurons on a grid that interfaces with them) much faster than we'll be able to translate those properties into other systems.

Can you cite a reason why silicon-based systems shouldn't be as capable as carbon-based ones?

Because there's no evidence thus far for consciousness and cognition in anything other than carbon-based wetware.

You can hypothesize that consciousness and cognition are just another kind of computation, and a lot of people do, which, near as I can tell, is how we get the idea that silicon-based systems will someday do cognition and maybe even self-awareness when we find the right algorithm.

Because there's no evidence thus far for consciousness and cognition in anything other than carbon-based wetware.

I'll go further and stipulate that such a thing doesn't exist. But that in no way answers my question.

But there are no such systems at the moment, and there's no particular evidence that the given hypothesis is correct. It may well be that self-aware intelligence is tied to the particular mix of phenomena that take place inside carbon brains.

Because there's no evidence thus far for consciousness and cognition in anything other than carbon-based wetware.

Given the extremely tiny portion of the universe we've actually explored, "no evidence" proves nothing.

It may well be that self-aware intelligence is tied to the particular mix of phenomena that take place inside carbon brains.

That idea is popular in some circles. I find the arguments in favor less than compelling. It seems to come down to the idea that intelligence is too complicated a phenomenon for any mechanical system. Considering the fact that we're only beginning to understand how complicated the universe is, and the sophisticated emergent behavior it can produce, that strikes me as a willfully ignorant attitude.

Given the extremely tiny portion of the universe we've actually explored, "no evidence" proves nothing.

It's true that absence of evidence is not evidence of absence, and I'm sure there are a lot of interesting things left to discover. But that's not a particularly strong argument that something specific is likely to exist, and it doesn't work for machine intelligences any better than it works for elves or pink unicorns.

It seems to come down to the idea that intelligence is too complicated a phenomenon for any mechanical system.

To spell it out: The action that gives rise to consciousness/intelligence might be simple, but also bound to specific substances and processes. Or it might be complex but bound to specific substances and processes. Or it might be complex but not bound in any way. Or perhaps simple but not bound in any way. In other words, it's a completely orthogonal concern... which is to say it "has nothing to do with that hypothesis at all."

Can you cite a reason why silicon-based systems shouldn't be as capable as carbon-based ones?

Capable at math, yes; more capable, even. But capable of thought and sentience? I know how computers work, and they don't think. You can program them to make people think they think; hell, I programmed a Timex Sinclair 1000 with 4K of memory and a tape drive to (usually) successfully pass a Turing test. I can assure you that it's pure anthropomorphism.

We don't even know what thought is. How can you construct a device you don't understand?

Again, billions of years of head start! If something like human intelligence arises from silicon in forty million years (the clock started, what, sixty years ago?) then it will have happened in only one percent of the time it took carbon.

I think that you grossly underestimate our understanding of the chemical building blocks of cognition. But, putting that aside, I think your argument recommends our efforts on the silicon front. Again, in only a few short decades we have gone from purely sequential (serial) designs to massively parallel ones.

Wet computer: Hooking the logic circuits of a Bambleweeny 57 Sub-Meson Brain to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea)...used to break the ice at parties by making all the molecules in the hostess's undergarments leap simultaneously one foot to the left, in accordance with the Theory of Indeterminacy.

I fail to see why you need chemical based computers in order to construct artificial intelligence.

One could build the system out of tinker toys and achieve the same results, at different speeds and different costs.

There is nothing that signifies intelligence which is provided by one construction method that is not present in another. Electrical, Optical, Mechanical, Chemical, Pneumatic... They are all just a means to an end.

Chemical reactions have a randomness to them that electricity through a wire can't duplicate. When the circuit isn't complete, electrons aren't moving. When two chemicals aren't reacting, their molecules still shift about in their gaseous or liquid form. They can be affected by anything that comes into contact with them; depending on the substance, it could even be magnetic, making its movement subject to all sorts of outside influences.

Think of the number of random events that can occur on the cellular level.

Computer software has gained the ability to learn, the ability to change itself, even the ability to reproduce itself.

What it hasn't gained is the capacity for abstract thought, which I attribute to the incredible number of random events that go on inside our brains.

Though I could just be blowing smoke, I'm no physicist or chemist or biologist.

But we don't actually have true randomness (though you could argue neither does nature; perhaps quantum mechanics is all a bunch of hoopla and the very outcome of the universe can be determined, but we'll save that for another day).

Point is, a pseudo-random number generator, when rebooted in the same scenario, will generate the same "random" numbers. I've tested this myself across many languages - I have yet to see a true random number generator.

Our algorithms are VERY deterministic. Let's say there is a purple car, but in my memory I remember it as blue... Why did this happen? When did this happen? Surely 2 moments after seeing the car I would remember it as blue, but later that week I recall it differently. Was it in the process of putting it into long-term memory, or did it degrade over time in long-term memory?
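
To make the parent's point concrete, here's a sketch in Python (any language with a seedable PRNG behaves the same way): reseed the generator with the same state - "reboot it into the same scenario" - and it replays exactly the same sequence.

```python
import random

# A pseudo-random generator is a deterministic algorithm: the "randomness"
# is entirely determined by the seed it starts from.
random.seed(42)
first_run = [random.randint(0, 99) for _ in range(5)]

# Re-seed with the same state and draw again...
random.seed(42)
second_run = [random.randint(0, 99) for _ in range(5)]

print(first_run == second_run)  # True: identical "random" numbers
```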

Sure we do, if you actually care and want it. All you need is a source of true entropy.

I have yet to see a true random number generator.

BAM! [hackaday.com] And as of clicking that link, you can no longer say that.

Note that the actual implementation doesn't use a lava lamp anymore. For a general discussion of sources of true randomness, see here [wikipedia.org]. The numbers generated from these devices are as random as the physical processes behind them.
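
As a sketch of how you'd tap such a source in practice: most operating systems expose an entropy pool fed by physical noise (interrupt timings, and on many CPUs a hardware RNG), which Python surfaces through the `secrets` module. This illustrates the general idea, not the lava-lamp rig from the link:

```python
import secrets

# secrets draws from the OS entropy pool rather than a seeded algorithm,
# so there is no seed you could replay to reproduce the output.
token_a = secrets.token_bytes(16)
token_b = secrets.token_bytes(16)

print(token_a != token_b)  # True: successive draws differ
```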

Actually, electronic systems can indeed have randomness - it's called noise. There's a lot of effort put into designing the logic gates in computers to ensure that they are not switched on or off by noise, and this gets harder and harder as the designs get smaller - because our current designs depend on, say, A & B always generating exactly the same result. This also means that the logic circuits have to use higher voltages and more power to get above the noise threshold.

Let's say there is a purple car, but in my memory I remember it as blue... Why did this happen? When did this happen? Surely 2 moments after seeing the car I would remember it as blue, but later that week I recall it differently. Was it in the process of putting it into long-term memory, or did it degrade over time in long-term memory?

But it would never be random. You are making a wrong assumption about how human memory works: that everything we see is transferred into long-term memory, although sometimes that information is degraded or retrieved wrongly. In fact, only a percentage of what happens to us is remembered. It is likely that you don't remember the colour of the car you saw at all (wrongly or rightly). All you remember is that you saw a car, maybe of a particular type. You then randomly retrieve an image of that type of car from a memory...

Yes - and to generate thousands of those each instant and compute them would be taxing on the system - and expensive. Whereas this project is attempting to build both the random generator AND the computational hardware required out of an existing model.

Because by "randomness in nature" I mean that everything is affected by its surroundings. I could have a different mood based on my temperature. Truly, you could program an algorithm for that, but what about barometric pressure? Okay, now ho

There's a lot more randomness to electricity through a wire than most people think, especially at the micrometer scales of modern processors. Electricity through a wire is in fact impacted by "all sorts of things" including magnetism and background radiation. Generally though, this is seen as a bad thing to be corrected for, because randomness doesn't make for reliable computation.

You would be mistaken to think that random background interference will necessarily lead to personality. Try putting a powerful

The thing is, although you CAN make a computer out of tinker toys, it isn't the most efficient method of accomplishing it. Given that a brain is far more complex than anything humankind has ever produced, it seems presumptuous to assume that our current methods of computation are ideal for the creation of AI. The intention here is to study the relative merits of mimicking neurons as a method of computation vs. our current transistor based designs.

I'm just speculating here, but I'd guess that it would be easier to create a massively parallel processing computer using this approach. Eventually at least. Right now the best we can do is simulate neurons on what are largely linear processing systems, which isn't very efficient.

Your comment doesn't make sense. Even neuroscientists (as opposed to computer scientists) will tell you that the type of process, electrochemical vs. any other, has no impact on the ultimate information-processing ability, which can be abstracted from the process, which is but a mere mode of implementation. A sufficiently complex biological or chemical computer will be Turing-complete, just like an electrical computer, and _can be no more_ powerful--just more or less efficient at various tasks in terms of time.

A chemical-based computer could possibly become intelligent. After all, thought itself is only an electrochemical process.

If you believe that thought/sentience/intelligence is only an electrochemical process, then why do you believe that the same effect can't be achieved by a purely electrical process?

Imagine a computer no different than those we have today, but arbitrarily more powerful. On this computer is running a perfect simulation of the human brain, down to modeling every quark. Every process, chemical, electrical, and otherwise that takes place in the brain takes place in the simulation. We can already do this for small numbers of molecules; the only thing missing is the processing power. Why exactly could this simulation not exhibit the same intelligence as you or I? What is it missing, and why can what is missing not be added to the model?

James Brownatron, Godfatherbot of Soul, disagrees! He stays on the scene like a sex machine with 99.99999999% uptime!

I doubt however that's what someone saying "thought is just an electrochemical process" was going for. I mean, I even believe in souls, but I think they're a cop-out for explaining how brains produce intelligence. The question is what does produce intelligence, and I have a hard time believing it's the constituent components, and not the emergent pattern. The issue with

Take it more to mean the functionality itself, as opposed to the molecules that provide that functionality. Basically, however neurons etc. work together to make a working brain, why is it so essential that it be physical potassium ions and physical neurotransmitters creating that functionality, rather than electronic equivalents?

Biological brain processes are so agonizingly slow, though. It's not just the electrochemical signal, it's the process of learning new things, which involves not only making an electric link, but actually growing new physical connections, extending dendrites and growing synapses. This takes time, energy, and nutrients for the body. Doing it in electricity is so much easier, although we pay for it in energy costs. Compared to a 150 watt computer power supply, the average human body burns around 2000 calories a day - which works out to roughly 100 watts.

Well, there is precedent. You see, when a mommy computer and a daddy computer love each other very much, they show this sometimes with a special dance. And if they are compatible, sometimes they will plug their interfaces together and engage in a high-bandwidth conversation. And much, much later, out of Mommy's USB port pops a very special process controller, just like you!

Only an electrochemical process? Maybe not... I can't find a link to the article, but I read an interesting piece a while ago on certain structures in nerve cells that trap electrons and seem to behave like quantum computers. So it may be the case that nature is already tapping into quantum computing for thought and consciousness. Imagine the brain as billions of networked quantum computers - it's no wonder AI hasn't caught up yet.

Many people bring forward this idea. I think it stems from the fact that "traditional" AI (which has only really been around for 60 years or so) has not yet yielded a "sentient" computer. People feel that this somehow means traditional AI, and even our whole computational model, can't yield sentience. They attribute intelligence to the fabric rather than the logic it implements. I think these people fail to realize that whatever computation biological brains implement, we could simulate it on traditional computers, *if we even knew what is being computed*. The problem is that, so far, beyond the first layers of our visual system, and some very simple systems, we don't know much about the way the brain is connected. However, from what's been discovered in neuroscience, it seems pretty clear that the early layers of the visual cortex perform simple convolutional operations that do not involve quantum physics, or fancy shmancy things we couldn't do *more efficiently* with silicon.
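
To spell out what a "simple convolutional operation" is, here's a minimal sketch (the toy image and the Sobel-style kernel are my own illustration, not from any neuroscience paper): slide a small kernel over an image, summing elementwise products. This is roughly the computation simple cells in early visual cortex are modeled as performing - this one responds only where there's a vertical edge.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution (valid mode): slide kernel, sum products."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Sobel-style kernel: responds to vertical edges (left-dark, right-bright).
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])

# Toy image: dark on the left half, bright on the right.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

response = convolve2d(image, sobel_x)
print(response)  # each row is [0, 4, 4, 0]: nonzero only near the edge
```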

The human brain is very complex, but given enough time, we very well might get to understand what makes us sentient and be able to replicate it in a computer. My personal opinion is that the brain is full of specialized hardware that has evolved over a very long time and helps us do specific tasks (e.g.: facial recognition, hand-eye coordination, obstacle avoidance, language decoding), with a very powerful abstraction logic built on top (the stuff that "makes us sentient"). This abstraction logic is possibly very complex, and perhaps too difficult for us to conceive of at this time. If we are to learn anything from the rest of the brain, most of this logic probably focuses on transforming perceptual information into a form that makes it easy to reason with. On top of this, we probably again have specialized mechanisms, to do things like deduce causal relationships and generate hypotheses or semi-random associations of concepts (creativity).

The reasons the "traditional AI" camp hasn't succeeded at making sentient machines are multiple, but I would sum them up as follows:
1) They have mostly given up. You probably can't get funding for claiming you'll come up with HAL9000; you'll sound like a wacko. Current AI research focuses on simple learning problems (e.g.: supervised learning, reinforcement learning).
2) The approaches tried in the past focused purely on formal logic, which, as we now know, works badly in open-ended environments. For it to work well, the properties of the environment have to be simple, restricted and well-defined.
3) Supervised learning, unsupervised learning, etc., will not yield sentience. These approaches, which may actually exist in the brain, are good at solving problems of limited scope only. Our brains are not big wads of neurons performing a single computation. They are much more intricate and integrate many specialized components.

The "right" approach to AI is probably an overall approach, integrating many existing techniques into one system. Perhaps an "engineering" approach to AI would work better. Focus on constructing it and then refining it, as opposed to developing an overall theory of how it will work first and trying to reduce it to its simplest component. We already have computer systems that do speech synthesis, speech recognition, facial recognition, depth perception, 3D model reconstruction, etc. We also have unsupervised learning, supervised learning, reinforcement learning, fuzzy logic, knowledge bases, automatic theorem provers, etc. It should be possible to build a non-completely stupid AI, if one combined all these techniques in the appropriate way. How to connect them, however, is probably where the true AI problem resides.
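
For a taste of one of those listed techniques at its absolute smallest, here's a tabular Q-learning sketch - reinforcement learning boiled down to a toy 5-state corridor where the agent starts at state 0 and is rewarded only for reaching state 4. (The environment and parameters are invented for illustration, nothing more.)

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # step left, step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.3  # learning rate, discount, exploration

random.seed(0)
for _ in range(1000):
    s = 0
    while s != GOAL:
        # Epsilon-greedy action selection: mostly greedy, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)      # walls at both ends
        r = 1.0 if s2 == GOAL else 0.0
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        # Standard Q-learning update toward the bootstrapped target.
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy marches straight toward the goal.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

This is exactly the kind of limited-scope learner point 3) is talking about: it solves its corridor perfectly and knows nothing else.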

Sentience needs a) will and b) reflection. We might achieve reflection in a computer, but can will be achieved? An animal wants to survive, and everything it does is about survival. Computers can't have will.

That's a joke of a budget. A project as ambitious as this cannot get much accomplished with a couple of million eurobucks. Ten times that amount would have been respectable. 1.8 million is less money than it takes to open a fast food franchise in some cities.

600k Euros / year isn't bad for an exploratory research program. The article suggests that they're funding multiple such efforts, searching for promising ideas that can be furthered by later funding -- which is often how these things are done.

Bits is not bits in the brain. We don't have a bunch of neurons representing ones and zeros.

A neuron is either spiking or not spiking, but everything is very temporal; with some stimulus (a particular input current) a neuron might spike at a rate of 100 Hz, with another input, at 20 Hz. How do we translate these spikes into information we can use? Well, we're trying to figure that out.
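
Here's a sketch of that rate-coding idea using a leaky integrate-and-fire neuron - a standard textbook abstraction, not real biophysics, with made-up parameters - where the "information" is the spike rate, which rises with input current rather than being a clean 1 or 0:

```python
def firing_rate(current, t_sim=1.0, dt=0.0001,
                tau=0.02, v_rest=0.0, v_thresh=1.0):
    """Simulate tau*dV/dt = -(V - v_rest) + current; count spikes/second."""
    v, spikes = v_rest, 0
    for _ in range(int(t_sim / dt)):
        v += dt / tau * (-(v - v_rest) + current)
        if v >= v_thresh:      # threshold crossed: emit a spike and reset
            spikes += 1
            v = v_rest
    return spikes / t_sim      # firing rate in Hz

print(firing_rate(0.5))   # sub-threshold drive: silent (0 Hz)
print(firing_rate(1.1))   # weak drive: roughly 20 Hz
print(firing_rate(3.0))   # strong drive: well over 100 Hz
```

Same stimulus-to-rate mapping the parent describes: one input current gives you ~20 Hz, a stronger one gives you ~100+ Hz, and decoding that back into usable information is the open question.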

In addition, in the brain, we do not have the cleanness of digital processing. There is lots of noise from a variety of sources...

From TFA: "Recent work has shown that when two such lipid layers encounter each other as the cells come into contact, a protein can form a passage between them, allowing chemical signaling molecules to pass. Second, the cells' interiors will play host to what is known as a Belousov-Zhabotinsky or B-Z chemical reaction. Simply put, reactions of this type can be initiated by changing the concentration of the element bromine by a certain threshold amount."

The power button can be difficult to find. You'll have to search around for a minute or so until the system shows some clear responses. Even then, you'll have to stimulate said power button for a few minutes before it finally boots up, allowing you to do whatever it is you want it to do. Be forewarned though, if you take too long, it will eventually lose power and turn off - however on the other hand if you are too quick it may not want to turn on at all next time. You should exercise extreme caution and timing.

"..it will open up application domains where current IT does not offer any solutions - controlling molecular robots, fine-grained control of chemical assembly, and intelligent drugs that process the chemical signals of the human body and act according to the local biochemical state of the cell."
Interesting possibilities abound when you have microscopic computers running around our bodies. Where will we buy the vaccines? Pfizer or Symantec?

I believe that some journalists, when writing for the proles, will use the phrase "so-called" as a hint to mean, "Look out! Here comes a word or phrase with which you may be unfamiliar!" Stupid usage, I grant you. I don't defend it, but I've seen it before.

I kinda dropped a sponge on it. I squeezed the goop back out, but I think maybe the sponge had some Windex and Everclear in it... yeah, it was a pretty good party. The LEDs are blinking, but they're, like, orange and purple. Is that normal?