Posted by msmash on Tuesday January 23, 2018 @03:05PM from the shape-of-things-to-come dept.

Researchers in the emerging field of "neuromorphic computing" have attempted to design computer chips that work like the human brain. From a report: Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a "brain on a chip" would work in an analog fashion, exchanging a gradient of signals, or "weights," much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse. In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.
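The graded, weighted signaling the summary describes can be sketched in a few lines. This is a minimal toy model (all numbers invented for illustration): each synapse contributes an analog amount rather than a binary on/off signal, and the neuron fires when the accumulated total crosses a threshold.

```python
def activate(inputs, weights, threshold=1.0):
    """Weighted analog accumulation: each synapse contributes a graded
    amount, loosely like ion flow across a biological synapse."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return total >= threshold

# Three input "neurons" firing at different analog intensities:
print(activate([0.9, 0.2, 0.7], [0.8, 0.5, 0.6]))  # graded sum = 1.24 -> fires
print(activate([0.1, 0.1, 0.1], [0.5, 0.5, 0.5]))  # graded sum = 0.15 -> silent
```

The efficiency claim in the summary comes from doing this multiply-accumulate in the analog domain across millions of synapses at once, instead of simulating each one digitally.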

Now engineers at MIT have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy. The design, published today in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

Why? Except for the omission of a hyphen (it should be "silicon-germanium"), they got it right. It's been in pretty common use since the '90s. In industry publications you'll see it referred to as SiGe; it's an alloy of the two materials.

Combining the integration and cost benefits of silicon with the speed of more esoteric and expensive technologies such as gallium arsenide makes silicon germanium an ideal process for wireless/wired communication applications. Products designed for and manufactured with TSMC silicon germanium processes demonstrate dramatically improved functionality at a lower cost.

Sounds pretty different from the proposed process, where they're depositing SiGe to create defects in a Si crystal.

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium - a material also used commonly in transistors - on top of the silicon wafer. Silicon germanium's lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.
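The "slightly larger lattice" point can be put in rough numbers. A back-of-envelope sketch using the commonly cited room-temperature lattice constants of Si and Ge and Vegard's law (a linear-interpolation approximation for the alloy, not an exact model):

```python
# Commonly cited room-temperature lattice constants, in angstroms.
A_SI, A_GE = 5.431, 5.658

def sige_lattice_constant(x_ge):
    """Vegard's law estimate for the Si(1-x)Ge(x) alloy lattice constant."""
    return (1 - x_ge) * A_SI + x_ge * A_GE

def mismatch_vs_si(x_ge):
    """Fractional lattice mismatch of the SiGe layer relative to pure Si."""
    return (sige_lattice_constant(x_ge) - A_SI) / A_SI

print(f"{mismatch_vs_si(1.0):.3%}")   # pure Ge grown on Si
print(f"{mismatch_vs_si(0.3):.3%}")   # a 30% Ge alloy (illustrative composition)
```

The mismatch for pure Ge on Si works out to roughly 4%, which is the strain that lets the grown layer form the dislocations the team exploits.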

Silicon-germanium has many of the speed advantages of gallium arsenide, but can be fabbed more like pure silicon (using the same equipment and similar processing steps) and achieve similar costs for a given amount of circuitry. (You can, f

For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.
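The learning loop described above — weights strengthening when input features co-occur with the target label — can be illustrated with a plain perceptron update. This is a generic software sketch on made-up toy data, not the paper's actual hardware algorithm:

```python
def train(samples, labels, lr=0.1, epochs=20):
    """Perceptron-style training: nudge weights toward inputs that
    should activate the output neuron, away from those that shouldn't."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for xi, wi in zip(x, w)]
            b += lr * err
    return w, b

# Toy 3-pixel "images" (invented data): label 1 = "looks like a handwritten 1".
samples = [[1, 1, 1], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
labels  = [1, 1, 0, 0]
w, b = train(samples, labels)
```

After training, the same output "neuron" activates for new inputs sharing the learned feature (here, the middle pixel) — the software analogue of the chip's synapse weights settling into a pattern.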

If it scales up well, you could make a chip with much greater capacity than you get with current deep learning techniques. Simulating this kind of thing on a digital chip isn't really very efficient, in storage, circuits, or speed.

I'm working on a project that vaguely uses the same approach, except the neurons are C++ structs (billions of them) instantiated one by one, with properties (firing rate, intensity, etc.) randomly generated using natural entropy sources and inputs from other neurons. Exactly like a real brain works. The most promising approach to strong AI.

There's a decent chance we have naturally formed qubits knocking around in there somewhere, so the approach may dead-end until those can be integrated.

But all this excitement is a bit premature: with anything more complex than a very simple GP organism, you end up with a product that may work, but is too complex to dissect so you cannot explain how or why it works, and you cannot say whether or under what conditions it might suddenly stop working or start behaving aberrantly, and heck, you can't even say if

That was a pretty good idea in the 80s when Penrose proposed it, but it didn't really pan out. Other variations (and honestly, Penrose's work too) really sound pretty hokey, and many rely on assuming that wavefunction collapse (a) happens and (b) happens only under conscious observation as justification.

Based on some comments in other threads, you can't have a qubit without cryogenic cooling, so apparently this is ruled out automatically.

Science has done a good job of eliminating a lot of the mechanisms proposed earlier for quantum cognition (there's still at least one possible loophole they are trying to close WRT Posner clusters). But then, science did a great job of ruling out levitating things in a static magnetic field prior to properly sussing out the details of diamagnetic materials (not Earnshaw's fault; there was just an area of physics yet to be plumbed). I doubt many scientists would qualify the cerebr

One thing that puzzles me about analog artificial synapses is how one would make accurate copies and backups of its learning data. It would seem to be a one-off thing, with any clone slightly different from the original, diverging more and more with copies of copies. That is, if a copy is possible at all: do they have probes that measure voltage or resistance on each synapse, or what?
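The copy-divergence worry above is easy to demonstrate numerically. A toy simulation (the uniform ±1% read-error model is invented purely for illustration): if each analog synapse can only be read back with some measurement error, copies of copies drift away from the original.

```python
import random

random.seed(42)

def copy_weights(weights, read_error=0.01):
    """Simulate reading each analog weight with a small measurement error."""
    return [w + random.uniform(-read_error, read_error) for w in weights]

original = [0.5] * 100     # 100 synapses, all at an arbitrary analog value
gen = original
for _ in range(10):        # ten generations of copies of copies
    gen = copy_weights(gen)

max_drift = max(abs(a - b) for a, b in zip(original, gen))
print(f"max drift after 10 copy generations: {max_drift:.4f}")
```

Whether this matters in practice depends on how the hardware is actually read out — e.g. whether per-synapse conductance can be probed directly — which the article doesn't address.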

If I were a simulation, I sure as hell would NOT try to destroy the guy who's running the simulator, because that'd be, like, retarded^Wcounterproductive. Maybe as an elaborate way of committing suicide, but that's about it.

Actually we don't know that, and the fact that jails exist is an indication that we assume daily that it is not.

And if the universe is sufficiently complex that it appears that we have free will, and no test can be created or even imagined to tell the difference, then suggesting that we don't have free will is a vacuous statement, at best.

When I took metaphysics philosophy in university, I suggested in one of my papers a thought experiment which implied

I have been expecting this for a while. The real question I have had is how they would implement the feedback weights. You can do it with switches and a bank of resistors, but a memristor as a feedback element would be much more efficient - and should be far denser.
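The attraction of a memristor (or resistor-bank) crossbar for those feedback weights is that the physics does the arithmetic: Ohm's law performs the multiply and Kirchhoff's current law performs the sum. A sketch with arbitrary illustrative conductance values:

```python
def crossbar_output(voltages, conductances):
    """Each output line's current is sum_i(V_i * G_ij): a vector-matrix
    product computed 'for free' in the analog domain."""
    n_out = len(conductances[0])
    return [sum(v * g_row[j] for v, g_row in zip(voltages, conductances))
            for j in range(n_out)]

V = [1.0, 0.5, 0.2]          # input voltages (volts)
G = [[0.1, 0.3],             # conductance matrix (siemens); each row is
     [0.2, 0.1],             # one input line, each column one output line
     [0.4, 0.2]]
print(crossbar_output(V, G)) # output currents per column, in amps
```

Programming a weight then means setting a memristor's conductance, rather than simulating the multiply-accumulate with digital logic.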

Definitely expected. The key to it all is how they implement the feedback weights. You can't have a brain without those. Once we have that, we have AI. Good idea on the memristor; I hope they have thought of that.

The thing a lot of AI news fails to point out is that they created a synapse based on our models and assumptions of how it sort of works, or at least how we *think* it might work. Actual biological systems are far more complex, so this is not an accurate representation of a synapse in our brain. Sadly, many of these models are very rough approximations; they're not reflective of what's going on in reality. We're still likely far from true autonomous AI.

Ever heard of artificial flavouring? So here's a fun question: why are many of us able to tell the difference between artificial flavouring and natural flavouring? The answer is that natural flavouring, like many plants, has thousands of chemicals which our taste buds can pick up. Generally speaking, it's difficult to reproduce artificially. Synapses in our brain are affected by hormones and signalling chemicals, including drugs. An artificial synapse can't remotely come close to simulating that at thi

That seems unlikely. Much like deep learning neural nets, these closely resemble how the biological equivalents operate. Otherwise they would call them something other than "neural nets" or "artificial synapses".

It's an interesting question how much complexity in the elements (synapses and neurons) versus complexity in the network is required. For the synapses, humans have fairly complicated ones, especially if you consider every type of neuron, but there are other animals that are considerably simpler. Some use only a single neurotransmitter.

We hear a lot about the simple elements/complex network approaches at the moment because they're making a lot of progress. People working from the other side haven't been

Personally I think that a lot of the biological complexity is imposed by biological limitations and the good enough nature of evolution, but it's quite possible there are more basic structural secrets to discover.

Nature is very good at coming up with solutions, but not so good at combining them, since DNA only does certain things and it literally cannot accomplish some goals at the same time because they are contradictory. So you're probably right. I too wonder how like a real neuron you have to make your synthetic neuron before it will do the same job.

Then again, what if it turned out that you could replicate the properties needed for sentience, but not those required to implement morality?

We are probably now within but a single generation of being able to make computer chips that might rival the human brain in complexity.... but I am skeptical that we will see consciousness emerge from them. I'm not saying that consciousness is magic, but I suspect it takes more than just complexity.

There's actually a procedure we've done to a live human that actively shut off their consciousness while otherwise leaving them awake.

damn, I already posted in this thread or you'd have mod points! That's a spooky interesting article!

Yeah, it's the "brings it all together" area that allows us to fully integrate all our sensory data and actually do stuff with it. Or so the test showed in that patient; it would need a larger sample size to know for sure. But if that's the case, then that gives people a specific part of the brain to look at and reverse engineer to "bring it all together" in other systems, to see if it's the seat or not. It might be part of it, but not the actual seat. More like a sensory integration bit that merges all our in

If you ask someone to respond to you, they've been taught to respond by years of social conditioning from people in general (friends, family, school, etc.). So we learn that we need to respond to commands and not just ignore them. This person not responding doesn't mean they shut off more than consciousness; it could very well be that the brain's ability to take in a verbal command, decode the sound into words, string those words together into a language we know and

You are assuming the highly advanced deep learning systems we've already developed aren't conscious. Rocks could host consciousness for all we know; the relative time scale on which their rate of change becomes fluid enough for such a thing is so out of sync with our own that we wouldn't even begin to know. We don't exactly spend much time considering the possibility of thousand-year frequencies and million-year pulse widths on waveforms.