Thank you, but do you even know what 'atom' means? An indivisible entity, that's it. And that's a model. By the way, I can't figure out the relationship between the (physics) atom and intelligence.

Um... I'm not sure who you're addressing here.

Anyhoo, I was thinking about this a lot lately. More precisely, I was reading an "article" about alien lifeforms - lifeforms that are not based on electromagnetism (chemistry) as ours is, but on gravity, or the strong or weak force. Could a whole galaxy become "intelligent"? What does that mean anyway? The best definition I can come up with is the ability to manipulate the environment in "unusual" ways (and what the fuck does "unusual" mean?). At least we could detect that. But how can a galaxy manipulate the environment in an unusual way while still obeying the laws of physics?

Sometimes I feel that being "intelligent" and this whole "soul" stuff is about defying these laws in a way. Free will seems to defy the laws of physics. But that may simply be an illusion, because according to the principle I mentioned earlier, the brain cannot model itself accurately. Random rambling is over.

Sorry to insist but it is, you have to realize that. Even if you don't care about what happens inside an atom, each atom interacts with every single other atom.

The answers that you seem to like are very concrete examples telling you, precisely, that simulating very tiny systems, even with simplified rules, is already very costly. So yes, you can simulate 2 atoms easily on your PC, but extending that to anything useful is impossible. The right direction to go is to simplify the rules (for example: don't simulate at the atom level).

Um, maybe I'm just being wordy here, but in reality you make rules more complex and more specific, you handle corner cases, etc. Simplification here means accepting simpler and less accurate results.

The idea in the OP is simple, and it works (as it works in the universe), but simulating it on a computer the same way it works in real life is next to meaningless. Even Deep Thought had to design the whole Earth and run it for millions of years just to come up with the ultimate question.

First of all, atoms are not the most basic level on which a simulation would have to run. You should look into particle physics and quantum mechanics if you are interested in more about this. There are actually many layers below the atom itself, and we don't even have good theories about what is truly "basic" in the sense of indivisible concepts.

Secondly, we don't know nearly enough about mechanics on even an atomic level to fully simulate things like you describe, even in principle. It isn't just a matter of computer power - the problem is that we don't know the equations that govern all that stuff to begin with. Anything we can do is going to be an approximation at best, and the larger scales we try to simulate, the lower the chances that it will even vaguely resemble the "real" physical world.

Finally, even if we did know the precise details of how everything works, we could never simulate it faster than real time. The argument here is simple: suppose we find a way to simulate 1 atom (or whatever) using exactly 1 other atom. This is probably impossible; it would likely take many more than 1 atom to simulate just 1 target atom. But in a theoretically ideal world, say we simulate atoms 1:1. To make the simulation run faster, we have to make the atom do things faster - but if we can make the atom "faster", we have to be able to simulate that as well; and if we can simulate the fast atom, and want to make the simulation faster than real life, we have to speed up our simulator atom, which means.... infinite regress. Therefore, the idealistic, perfect situation (where we can simulate atoms 1:1) prevents us from simulating anything faster than real time, and by extension, any less ideal system (say, using 10 atoms to simulate 1) will be even worse.
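To make the regress above a bit more concrete, here's a toy bookkeeping sketch (my own illustration, not physics): even in the ideal 1:1 case, each attempt to go faster than real time forces you to simulate the simulator itself, adding another layer forever; and with any overhead at all, the atom count explodes.

```python
# Toy illustration of the regress: assume a perfect 1:1 mapping, one
# simulator atom per simulated atom. To run faster than real time, the
# simulator's own atoms must be "sped up", which means simulating them
# too - adding another layer, with no end. Count atoms committed per layer.

def atoms_after(layers, atoms_per_atom=1):
    """Total atoms committed after `layers` rounds of 'simulate the simulator'."""
    total, level = 0, 1
    for _ in range(layers):
        level *= atoms_per_atom   # atoms needed to simulate the layer above
        total += level
    return total

print([atoms_after(n) for n in range(1, 6)])       # ideal 1:1 case: grows forever
print([atoms_after(n, 10) for n in range(1, 6)])   # 10 atoms per atom: explodes
```

The layers never terminate, so even the idealized case demands unbounded resources; any realistic overhead factor only makes it worse, geometrically.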

Games run at realtime because they're only simulating things that are necessary for the desired effect. What is the desired effect of AI? Decision making, pattern analysis, etc. Atomic/quantum interactions involve a bunch of stuff that you WON'T CARE ABOUT.

Don't bother simulating what you don't care about. You don't really want to be stuck with an AI in a virtual coma because your atomic simulation became a tiny bit unstable and caused virtual brain damage.

You might enjoy Star Maker by Olaf Stapledon. It speculates on the various forms of life that could evolve in the universe and how their societies might work - including planets where a symbiotic pair of species is dominant or a swarm-based intelligence rules, as well as life composed of galactic bodies. These galactic-scale intelligences are almost morally offended at smaller lifeforms trying so persistently to divert objects from their natural course of motion by applying brute force... The book also imagines life in universes with different physical laws.

On the topic of books, the OP might be interested in The Cyberiad by Stanislaw Lem - it includes a couple of interesting stories about simulating reality, in one case to replicate the Earth in order to fast-forward to the present day and make advances based on the accumulated "knowledge" contained in the simulation.

As far as I know, neural nets aren't used in game AI: the learning process can be very, very long, and I think it can't be integrated into games for that reason.

They have been used in games before: learning was done during the testing phase, and the results were just used in the released game. Though I believe current developers have stepped away from the idea, as adjusting to the player has become more important to them, and other techniques lend themselves much better to that.

Yeah, I recall a kind of AI bot for Counter-Strike that had to learn the levels (each of them, one by one). One learning run took about 20 minutes. Maybe it was similar.

I believe that may have been John Laird (et al) and his S.O.A.R. technology. Mixed reviews from what little I have heard.

A trained neural net is a weighted equation. The two are identical. The difference is in how the weighted equation is developed.

I've been trying to get that into people's heads for years but the schools keep cranking out people who think just saying "neural network" is sexy. "Weighted sums" doesn't impress people enough.

For a single-layer net such as Adaline or the perceptron, with the activation function being the identity, we can say that the output is just a 'weighted sum' of the inputs. What about multi-layer networks? What about models that don't use the McCulloch & Pitts neuron, like spiking neural networks (the Lapicque model, for instance)?

I think that 'weighted equation' is more accurate, or at least a better shortcut.
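To make the "weighted sums" point concrete, here's a minimal sketch (all names and numbers are made up for illustration): a single neuron really is just a weighted sum of its inputs pushed through an activation function, and a multi-layer net is those sums composed.

```python
# A "neuron" is a weighted sum plus bias, passed through an activation.
def neuron(inputs, weights, bias, activation=lambda s: s):
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(s)

# Identity activation: the output IS the weighted sum (Adaline-style).
print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))   # 0.5*1 - 0.25*2 + 0.1 = 0.1

# A two-layer net is just weighted sums of weighted sums (ReLU hidden layer).
hidden = [neuron([1.0, 2.0], w, 0.0, activation=lambda s: max(0.0, s))
          for w in ([0.5, -0.25], [-0.3, 0.8])]
print(neuron(hidden, [1.0, 1.0], 0.0))
```

This doesn't cover spiking models like Lapicque's, which have internal state over time, so "weighted sum" is a fair description only of the McCulloch & Pitts family.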

I refer to them as "functions with parameters", or "functions with too many parameters for their own good".

An ANN consisting of a single neuron with linear activation function is multiple linear regression, and it is well understood since the times of Gauss. An ANN consisting of a single neuron with sigmoid activation function is logistic regression, and it's also well understood. A multi-layer perceptron has so many parameters that training becomes really hard (e.g., it's hard to avoid getting stuck in local minima), and trying to understand what each parameter does becomes hopeless. That's what I mean by "too many parameters for their own good".
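The single-neuron-equals-logistic-regression claim can be checked directly; here's a small sketch (example inputs and weights are arbitrary) showing the two formulas are the same computation:

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

# A single neuron with sigmoid activation...
def sigmoid_neuron(x, w, b):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# ...is exactly the logistic-regression model p = 1 / (1 + e^-(w.x + b)),
# just written in statistics vocabulary.
def logistic_regression(x, w, b):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

x, w, b = [0.5, -1.2], [2.0, 0.7], 0.3
print(sigmoid_neuron(x, w, b), logistic_regression(x, w, b))
```

The only real difference between the two fields is how the weights are fitted: maximum-likelihood methods on one side, backpropagation-style gradient descent on the other.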

Ah ok I understand what you meant, thanks

EDIT: "avoid getting stuck in local minima" -> that's why a momentum term is often added to the delta rule (but it doesn't eliminate the risk, and it's done by adding yet more parameters...)

"understand what each parameter does becomes hopeless" -> we generally don't care what the weight values are (and, as you say, it's hopeless anyway in a 'big' network).
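A quick sketch of the momentum idea mentioned above, on a toy 1-D quadratic loss (the learning rate, momentum coefficient, and loss function are all made up for illustration):

```python
# Gradient descent on f(w) = (w - 3)^2, with and without momentum.
# The momentum coefficient (0.5 here) is exactly the "extra parameter"
# being added to the update rule.

def grad(w):
    """Derivative of (w - 3)^2."""
    return 2.0 * (w - 3.0)

def descend(lr=0.1, momentum=0.0, steps=50):
    w, velocity = 0.0, 0.0
    for _ in range(steps):
        # velocity accumulates past steps; momentum=0.0 is plain descent
        velocity = momentum * velocity - lr * grad(w)
        w += velocity
    return w

print(descend(momentum=0.0))   # converges toward the minimum at w = 3
print(descend(momentum=0.5))   # momentum version also converges toward 3
```

On a convex bowl like this both converge; the point of momentum is that the accumulated velocity can carry the weights through shallow local minima and flat regions that plain descent would stall in - at the cost of one more hyperparameter to tune.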

This "simulate everything at atomic level" madness that started with the unlimited detail thing must stop NOW!

Adding to what ApochPiQ said:

For those saying we have enough processing power: look at Folding@home. It's attempting to do what you're describing (modeling at the atomic level - although not for AI purposes, but to understand protein folding), and it takes thousands of computers running for hours just to simulate ONE NANOSECOND. And last time I checked, intelligence develops over time: it takes around two decades for a normal human being in the real world to become intelligent enough.
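A back-of-envelope version of that argument (the cost figure below is an illustrative assumption, not Folding@home's actual throughput):

```python
# Suppose a large distributed effort needs ~1 day of wall-clock compute
# to simulate 1 nanosecond of atomic-level dynamics (illustrative number).
# How long would it take to simulate the ~20 years a human needs to
# "become intelligent"?

SECONDS_PER_YEAR = 365 * 24 * 3600
sim_target_s = 20 * SECONDS_PER_YEAR        # ~20 years of simulated time
wall_per_sim_s = 86_400 / 1e-9              # wall seconds per simulated second

wall_clock_years = sim_target_s * wall_per_sim_s / SECONDS_PER_YEAR
print(f"{wall_clock_years:.3g} years of wall-clock time")
```

Even if the assumed cost is off by many orders of magnitude, the conclusion survives: at atomic resolution, simulating decades of a brain's development is astronomically out of reach.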

For those saying we only need to compute the atoms that are relevant: NO! We don't know what's relevant and what isn't; everything is connected to everything. One small, tiny, minuscule thing leads to another and to another, which in turn leads to a million chain reactions, ultimately causing a huge difference. It's called the butterfly effect.

I can think of two problems. One, even if we had sufficient processing power, the simulation would have to split time up into frames. That adds a certain amount of error, which would make the rules of that world fundamentally different from those of our world. Two, it presumes that we know more about the universe than we really do. We don't have perfect knowledge of physics yet, so we'd make assumptions, and the rules of the simulated world would drift even more.
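The first problem - discretized time injecting error - is easy to demonstrate on the simplest physical system I can think of (this is my own toy example, not from the thread): a frictionless spring integrated with explicit Euler steps gains energy at every frame, so the simulated "laws" drift from the real ones.

```python
# Explicit Euler integration of a unit harmonic oscillator (unit mass,
# unit spring constant). The true total energy is exactly 0.5 forever;
# any finite timestep dt makes the simulated energy grow by (1 + dt^2)
# per step, so the discretized world obeys slightly different physics.

def euler_energy(dt, steps):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x
    return 0.5 * (x * x + v * v)   # exact answer would be 0.5

for dt in (0.1, 0.01, 0.001):
    steps = int(10 / dt)           # simulate 10 time units in each case
    print(dt, euler_energy(dt, steps))
```

Smaller frames shrink the drift but never eliminate it, and better integrators only trade one kind of discretization error for another - the simulated world is always a slightly different universe.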

And a third one that just occurred to me: say you have preposterous processing power and perfect knowledge of the laws of physics. So you build your world (by the way, are you simulating just the world, or the sun as well? That's a whole other bunch of atoms to think about, and it's not like the sun doesn't have some small impact on our world - so four problems, really), and you get your intelligence up and running. And now THEY want to make a computer to simulate the universe. So your computer has to simulate a world with a computer trying to simulate the world. And of course they are successful, so the intelligence that emerges in the simulated computer tries to simulate another world.

So now your computer is simulating a world that contains a computer that is simulating the simulated world, which will contain a computer that can simulate the simulated simulated world. And so on the recursion goes. And you want this to run in real time (in fact, millions of times faster than real time). You're essentially asking the computer to simulate the processing power of infinite computers just like itself. This is a logical impossibility.
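A quick cost sketch of that recursion (toy bookkeeping, my own illustration): even under the absurdly generous assumption that simulating a computer costs no more than running it directly, the total demanded work grows without bound with nesting depth.

```python
# Assume each nested simulated computer costs at least `overhead` times
# the level above it to simulate (overhead >= 1 by any reasonable account).
# Sum the work demanded across d nesting levels.

def total_cost(base, overhead, depth):
    cost, level = 0.0, base
    for _ in range(depth):
        cost += level
        level *= overhead   # each deeper level is at least as expensive
    return cost

print(total_cost(1.0, 1.0, 10))   # even with zero overhead, grows with depth
print(total_cost(1.0, 2.0, 10))   # with 2x overhead per level, it explodes
```

Since the recursion has no bottom, the required work has no finite bound - which is the "infinite computers" impossibility in arithmetic form.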