I was wondering why no one has been able to develop a program capable of some kind of consciousness, or intelligent-like artificial intelligence. And it hit me: brains don't work the same way digital computers do. A digital computer fetches one instruction at a time, then executes it by feeding it into an ALU (the thing that does binary math). Brains have cells which pass lots of information to each other, but instead of doing it one by one they do quite a lot of it at once. Also, a digital computer does binary math, which is relatively slow compared to analogue math.
Then I tried to imagine a computer that could execute many, many instructions at once. This already exists in multi-threading, but not at anywhere near the level of a brain. I guess one way to build a computer like this would be to model it on the brain itself: make a silicon version of a brain cell, which would be like a mini processor, then wire millions of these together. But even if you managed to make something with the power of a brain, how are you meant to program it? You would need to invent a new programming language, similar to digital programming languages but with the capability to do loads of things at the same time.
I guess a drawback would be that, if you made it so the computer had lots of little processors, it would be very hard to program. The output of the computer (if it were running an AI) could also be unreadable and only make sense to the computer itself.

The problems I see with it are that:
- The brain should have some kind of structure, like real animal brains do, to work properly. A 10 GB neuron soup is probably too messy to develop proper structures (you need links between far-away parts of the brain, etc.).

- You need to update pretty much every cell every tick, check a ton of nearby cells to see if they fired (so it can learn), and also handle any signals coming into the cell (and if I remember right, at least some visual-area cells do some fancy, laggy pattern recognition on the signals), so your cache is going to catch fire.

- You also need to simulate an environment for the brain so it can develop any structures.

- As you said, current processors are not really good for this; something more parallel would be better (but I think you would need shared memory accessed by all cores at the same time?).

I think it's important to have the brain properly structured. Evolution has been modifying our brains for a very long time, and there are probably thousands of different types of communication/cells/whatever, plus some larger-scale hard-coded structures too. So if you want a proper brain, you either need to copy one from a real animal or run evolution on it (which would be way too slow).
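To give a feel for the "cache is going to catch fire" point, here is a rough back-of-envelope estimate of the memory traffic of updating every cell each tick. All the numbers are illustrative assumptions, not measurements:

```python
# Back-of-envelope cost of updating every cell every tick.
# All figures below are assumed for illustration, not measured.

neurons = 1_000_000          # a small brain-scale model
synapses_per_neuron = 1_000  # incoming connections checked per tick
bytes_per_synapse = 8        # say, a weight plus a source index

# Memory touched on every single tick:
bytes_per_tick = neurons * synapses_per_neuron * bytes_per_synapse
print(f"{bytes_per_tick / 1e9:.0f} GB touched per tick")

# At a brain-like 100 ticks per second, that is hundreds of GB/s of
# bandwidth -- far beyond a typical CPU cache or memory bus.
ticks_per_second = 100
print(f"{bytes_per_tick * ticks_per_second / 1e9:.0f} GB/s required")
```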

1) We don't know what consciousness is -- i.e. what exactly is required for it to emerge.

2) The human brain has 100 billion neurons with 100 trillion synapses, plus the rest of the neurochemical soup that makes a brain a brain, which is probably at least a petabyte of data, probably orders of magnitude more than that -- simulating an ANN that large is not yet feasible. Maybe Google could do it if they turned off all their other services...
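The petabyte figure checks out as arithmetic, if you grant an assumed per-synapse cost (the 10 bytes below is my own illustrative guess, not a neuroscience figure):

```python
# Sanity check of the storage estimate: 100 trillion synapses at
# an assumed ~10 bytes each (weight + connectivity) is a petabyte.
synapses = 100e12        # 100 trillion
bytes_per_synapse = 10   # assumed, not a measured figure
total_bytes = synapses * bytes_per_synapse
print(f"{total_bytes / 1e15:.0f} PB")  # 1 PB
```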

The ability of computers to perform one instruction at a time isn't a hindrance -- we do plenty of simulations where everything is updated in discrete frames, and an object's next state depends only on the values of the previous frame (which makes the whole thing deterministic regardless of the order in which you perform your intra-frame operations). The problem is that to simulate reality you have to run your simulation with a delta time of the Planck time, which is pretty hard to do in real-time.
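The discrete-frame idea can be sketched in a few lines: with double buffering, each cell reads only the previous frame, so the update order within a frame cannot change the result. Everything here (the firing rule, the sizes) is a toy assumption:

```python
# Double-buffered discrete-frame update: next state depends only on
# the previous frame, so intra-frame update order is irrelevant.
import random

random.seed(0)
N = 8
prev = [random.random() for _ in range(N)]  # the previous frame
weights = [[random.random() for _ in range(N)] for _ in range(N)]

def step(prev_frame, order):
    # Read only from prev_frame, write only to nxt (double buffering).
    nxt = [0.0] * N
    for i in order:
        total = sum(weights[i][j] * prev_frame[j] for j in range(N))
        nxt[i] = 1.0 if total > N / 4 else 0.0  # toy firing rule
    return nxt

# Forward and reversed update order give the identical frame, because
# nothing reads a value written during the current frame.
assert step(prev, list(range(N))) == step(prev, list(reversed(range(N))))
```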

I don't know what consciousness is, but I imagine it's far simpler than people expect. I know it's still incredibly complex, but if you imagine it as lots of little simple systems, it's easier to believe. It would require tremendous amounts of storage space, but I wasn't thinking on the scale of a human brain -- more like a small mammal's brain (a cat's, for example).

That's why I thought redesigning the modern computer would be a good idea. The way current computers work means it wouldn't run at anything close to real-time. I think an analog computer could pull it off, depending on how it worked; analog computers are supposed to be able to solve differential equations at far greater speeds than a digital computer.

Yeah, I think you would need some kind of observer to structure the model; otherwise you would need some kind of genetic algorithm to do it for you. I think you're assuming I'm talking about running a simulation on a digital computer. Actually, the computer itself could be the simulation. Doing this would mean the computer would have to be extremely different from modern-day processors. Instead of RAM and an ALU it would just be lots and lots of cell-like systems communicating with each other. Provided it had a steady voltage, you wouldn't need to tick each cell, because each would be doing its job in real-time. I guess it would be a lot like a neural network or a cellular automaton.
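The "lots of cell-like systems communicating" idea is easy to see in miniature with a one-dimensional cellular automaton: each cell updates purely from its neighbours' previous states, with no central ALU fetching instructions. This sketch uses Rule 110 as an arbitrary example rule:

```python
# A 1-D cellular automaton (Rule 110): every cell computes its next
# state from its left neighbour, itself, and its right neighbour.
RULE = 110
cells = [0] * 31
cells[15] = 1  # start with a single live cell in the middle

def tick(c):
    n = len(c)
    # Each cell reads only the *previous* state of its neighbourhood;
    # the 3-bit pattern indexes into the rule number's bits.
    return [(RULE >> (c[(i - 1) % n] * 4 + c[i] * 2 + c[(i + 1) % n])) & 1
            for i in range(n)]

for _ in range(10):
    print("".join(".#"[x] for x in cells))
    cells = tick(cells)
```

Conceptually, every cell in a tick could be computed by its own tiny circuit at the same instant, which is the hardware version being described above.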

It won't really work if you wire it manually, because:
- It would take way too much space. How big would a single neuron be on the die, or whatever you build it on? And what about the huge infrastructure needed to create thousands of connections for each cell, and to decide which cells to connect?

- You can't really test it. You would want to simulate it on a computer to make sure it works, but that defeats the point. Because of this, it's easier to just run it on a supercomputer.

- A room-sized giant metal cube that requires a nuclear plant to power it, and then turns all that power into heat, is scary.

Let's just grow an artificial brain out of stem cells and add a USB 3.0 port.

I'd have to agree with Hodgman: we don't know what constructs/makes up consciousness, and we don't have an inkling of how it works. When we stare at it on paper, it looks like just a bunch of nodes (neurons) connected via a ton of lines (synapses), capable of sending electrical signals amongst each other... and this somehow works.

Also, remember that evolution has had billions of years to get to us, while we've been seriously attempting this for maybe a quarter of a century. The fact that we're already beginning to figure out how to emulate a brain with man-made structures is no doubt a significant feat.

I don't think talking about consciousness is relevant. Anything that looks like it has consciousness is enough. I can only be sure about my own consciousness. I think the duck test applies well here.

Anyhoo, I think a more decentralized computer model is required for making anything really intelligent. I mean millions of small CPUs with millions of small memory units, all in an impossibly complex structure.

I don't think we can ever manufacture a finished AI and just turn it on, because of that "infinite" complexity. No matter how well we understand the elements of the system, fully understanding and controlling a system that complex is pretty much impossible. It's hard to fully understand even ordinary software, and that's light-years away from the size and data volume of a mind.

So if we create anything that resembles a truly intelligent and independent mind, it will be like a newborn's mind, and we will have to teach her like we teach children.

The brain operates at only ~10 Hz, yet in terms of NN complexity it delivers as many FLOPS as our most powerful supercomputers -- though they require several megawatts and fill large rooms, whereas the brain only needs ~20 watts and fits in your head. So not only is it ridiculously more powerful than a regular computer, it's also ridiculously small and energy efficient. Yes, this is due to its specialized design -- to compete with it, you would have to build your fake brain using similar construction processes (if you didn't want your brain to fill a room and require more rooms of power equipment). If you search for "neural network hardware", you'll find this isn't a new topic. People have built hardware ANNs before that were as complex as a cat's brain, though with not very impressive results.
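The FLOPS comparison follows from the synapse count quoted earlier in the thread, if you treat each synapse as roughly one multiply-accumulate per firing cycle (a crude assumption, but it shows the order of magnitude):

```python
# Rough arithmetic behind the "as many FLOPS as a supercomputer"
# claim: one assumed multiply-accumulate per synapse per cycle.
synapses = 100e12   # 100 trillion, as quoted above
rate_hz = 10        # the ~10 Hz operating rate quoted above
ops_per_second = synapses * rate_hz
print(f"{ops_per_second:.0e} ops/s")  # 1e+15, i.e. petascale
# ...delivered on ~20 W, versus megawatts for a petascale machine.
```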

Keep in mind that it's easy to get caught up with the state of technology and think that we're an advanced race, but biological construction techniques really do put our technology to shame. We're still playing with sticks and fire compared to what biology is doing. It's going to be a long time until we can build hardware that is similar to a brain. For example:
* Plants are mostly made out of air, are self-constructing, can be endlessly recycled, and contain photosynthesis units that are absolutely amazing compared to our best solar cells. That's why we use plants to make biofuel, or dig up million-year-old plants to burn as oil.
* Kevlar is produced by stretching carbon nanotubes through a pressurized vat of extremely hot, concentrated sulfuric acid (which is dangerous and hazardous to the environment). Abalone shells are tougher than Kevlar, and are made out of seawater and sunlight in a pollution-free process, on demand.

Yeah, the power consumption would be incredible. What about a molecular computer, or a quantum computer? I still think the modern computer model, as useful as it is for doing math, is pretty outdated for the things we want it to do. Fetching one instruction at a time with millions of logic gates, which need constant voltage, takes a lot of power and time. I'm not sure if anyone has made a different/more efficient kind of computer. I would imagine an analog computer would be faster, but because the voltage determines the number instead of a binary digit, there could be more power consumption, unless you use decimal voltage.

Binary works even if there are fluctuations in the currents/voltages, so we can make the processor smaller. I'd imagine you'd need all kinds of fancy components and regulation systems to keep the currents (or whichever it is) stable.

I don't think you would, if you built it properly. The main problem would be the heat. Also, if any of the wires touched, it would just melt the entire computer (but I think that happens with binary processors too).

I don't think talking about consciousness is relevant. Anything that looks like it has consciousness is enough. I can only be sure about my own consciousness. I think the duck test applies well here.

Alan Turing was many things (genius, spy, homosexual), but I think nobody would call him a duck.

Ahh, teh illiterateness of me. Anyhoo, I didn't call him a duck, I called the computer a duck. Even Wikipedia has an article about this whole consciousness thing -- can we forget the question of "can machines think" already?

The hardware requirements aside (a field I know little about), I think people expect an AI to passively sit there being fed data, and then kapow, it's alive. Our intelligence (and that of animals) is very embodied. I think our body is what allows us to get the continual feedback that high-quality learning requires. Every time we move a muscle it affects our whole body via physics, which affects all our senses, so there's a very tight, high-bandwidth learning loop. I'm not saying there aren't other good paradigms, but we know this one works.

Conversely, our passive neural nets make screwball mistakes, like learning from satellite photos that a tank is present if it's a sunny day, because all its tank training data was taken on sunny days. I'm not surprised. It's getting one bit of feedback (present, not present). It knows nothing about the properties of tanks, how you interact with tanks, or why they matter. Light vs. dark was the easiest neural net that matched the training data.
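The sunny-day tank failure is easy to reproduce in a toy setting: train a classifier on a single "brightness" feature where lighting happens to correlate perfectly with the label, and it latches onto the lighting. The data and threshold below are fabricated for illustration:

```python
# Toy version of the sunny-day tank problem: brightness correlates
# perfectly with the label in training, so a brightness threshold
# "solves" the training set without knowing anything about tanks.
train = [
    # (brightness, has_tank) -- every tank photo happens to be sunny
    (0.9, 1), (0.8, 1), (0.85, 1),
    (0.2, 0), (0.3, 0), (0.25, 0),
]

threshold = 0.5  # the "learned" rule: bright image => tank

def predict(brightness):
    return 1 if brightness > threshold else 0

# Perfect on the training set...
assert all(predict(b) == label for b, label in train)

# ...but a tank photographed on a cloudy day is missed entirely.
print(predict(0.3))  # 0: "no tank", even when one is there
```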

What makes you think that doing things in parallel is the key to consciousness?

There's also the point that, as far as running an algorithm is concerned, there is no difference whether things are run in parallel or not - a single core machine can always do the same thing as one doing it in parallel, it's just a question of performance. If consciousness is something that can be developed on a computer (as we know it today, i.e., simply a matter of running the right software), then it can be run on any computer, although it might be very slow.

As far as algorithms are concerned, what you suggest is nothing new - e.g., neural networks are used in AI, and they model a process that occurs in parallel. But it doesn't matter whether you run it on hardware with multiple cores or only one.
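The equivalence claimed above is easy to demonstrate: each output of a neural-network layer depends only on the inputs, not on the other outputs, so computing the neurons one at a time or all at once gives identical results. The tiny layer below is an illustrative assumption, and the thread pool stands in for "parallel hardware":

```python
# A "parallel" layer computed serially and in parallel: same answer,
# only the performance differs.
from concurrent.futures import ThreadPoolExecutor

inputs = [0.5, -1.0, 2.0]
weights = [[0.1, 0.2, 0.3],   # neuron 0
           [0.4, 0.5, 0.6]]   # neuron 1

def neuron(w):
    # Weighted sum of the shared inputs; independent of other neurons.
    return sum(wi * xi for wi, xi in zip(w, inputs))

serial = [neuron(w) for w in weights]            # one at a time
with ThreadPoolExecutor() as pool:
    parallel = list(pool.map(neuron, weights))   # all at once

assert serial == parallel  # identical: just a performance question
print(serial)
```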

You also seem to be unaware of multiple cores, SMP, and so on. Yes, writing things to run in parallel is a difficult area, but this is not specific to AI, and it's something that is already done today on most computers.

In summary: (a) running things in parallel isn't inherently necessary, it's just a question of whether you'd get better performance, (b) the industry already realised years ago that running things in parallel is a way to get better performance, and this transition became standard even on bog standard PCs in the last decade - even phones have multiple cores these days. (Of course, it's true that we're still a long way from solving the problem of scaling most software to say thousands or millions of cores, but again, it's not like no one's thought of this.)

It's less about parallel processing or multiple cores; it's more about nodes. The reason you would want to make the nodes hardware instead of digital software is that updating all the nodes would take time, but if they were hardware they would all be updating themselves constantly: taking in inputs, making decisions, and spitting out outputs. Almost as if you had silicon neurons and the wires were silicon synapses. Another thing that would be impressive: the nodes could replicate themselves (somehow). But I honestly think that consciousness isn't as complicated as people suggest. Sure, the whole thing in itself is complicated, but the little systems that make it work aren't at all. I think of it as more of an illusion than anything else.

It's less about parallel processing or multiple cores; it's more about nodes. The reason you would want to make the nodes hardware instead of digital software is that updating all the nodes would take time

Yes, this is what I'm saying -- it may be slower, but that doesn't change whether you can do it. No doubt any computer capable of human-level consciousness would have to be massively parallel, but that's separate from whatever the key to consciousness might be. If you can run it on a massively parallel computer, then you could do it on a non-parallel one too, albeit more slowly.

Also, I don't see how multiple cores differ from "nodes" -- multiple cores are doing parallel processing in hardware, for real, not in "software".

But I honestly think that consciousness isn't as complicated as people suggest. Sure, the whole thing in itself is complicated, but the little systems that make it work aren't at all. I think of it as more of an illusion than anything else.

Which people do you mean? It's a not-uncommon view that consciousness arises out of the complexity of a large number of smaller, simpler parts.