I started reading a book about M-theory ("Our Superstring Universe") a bit before Christmas (and that's about it, really; I'll have to continue it someday), but I only got a few pages in before I was consumed by tedious everyday things; then my library loan ran out (with a fine) and I had to bring the book back, having barely started it.
Nevertheless, the very first page surprised me:
it said that before the Big Bang occurred, our whole universe was extremely condensed, with all the dimensions (I believe it was 7... or 10, can't remember) tightly packed up, but it had a dimension! A measurable one!
I was surprised because I grew up being told it was a singularity...

Anyone as kind as to shed some light on my ignorance?

Magicman

I have always interpreted singularity in that sense to simply mean one thing. It had dimension, although it was very small. Also, you should keep in mind that anything you read in those books is theory at best. Some of it is likely mere speculation.

Indi

In very brief:

The singularities arise when you assume general relativity is correct. The early hypotheses surrounding the big bang all have something in common in their design: they start with general relativity (GR) - because it "makes sense" - then try to tack on quantum chromodynamics (QCD). More recent hypotheses have tried to go the other way - they start with QCD and then tack on general relativity. The various string theories - including M-theory - work that way.

Modern physics suggests the latter path may be the way to go. The problem is that the Hawking model - the one that assumes the universe was a singularity at the big bang, the one you are probably referring to - uses GR all the way back to the big bang... but there is a point in that model where the properties of the universe are such that GR doesn't apply! So the Hawking model shoots itself in the foot. It can't be correct, because by its own predictions it breaks down. ^_^;

M-theory works in an entirely different way, assuming that GR is simply not correct - or rather that it only looks correct at large scales, but is fundamentally wrong. It deals with Planck-scale branes and wrapped dimensions in Calabi-Yau spaces, etc. etc. Grotesque math, really.

So basically, you were told it was a singularity based on the Hawking model, which assumes GR is fundamentally correct. But the Hawking model is almost certainly wrong. M-theory assumes QCD is fundamentally correct instead, so it arrives at an entirely different model. But since the Hawking model is so easy to explain and understand (it doesn't involve quantum wackiness!), most people "think" in terms of the Hawking model most of the time. Even I am guilty of this. It beats trying to explain M-theory every time the question comes up, after all. ^_^;

The-Nisk

Indi wrote:

The singularities arise when you assume general relativity is correct. [...] So basically, you were told it was a singularity based on the Hawking model, which assumes GR is fundamentally correct. But the Hawking model is almost certainly wrong.

Ah, thank you.
Now that you mention it, I recall reading that "general relativity breaks down at a singularity" in what I've read of A Brief History Of Time (don't quote me on it). I'll have to look up what all the rage with "quantum" is; as it stands I'm shamefully ignorant of it, it would seem.

ocalhoun

The-Nisk wrote:

Ah, thank you.
Now that you mention it, I recall reading that "general relativity breaks down at a singularity" in what I've read of A Brief History Of Time (don't quote me on it). I'll have to look up what all the rage with "quantum" is; as it stands I'm shamefully ignorant of it, it would seem.

^Yes indeed, you should do some learning about quantum physics... Not only will it help you understand some other things better, it is a very interesting and counter-intuitive topic in its own right.
(As a side note, it'll help you understand that there really is such a thing as a 'maybe', that not everything is a 'yes' or a 'no'... You seemed to have a problem with that on another thread.)

Arnie

You noticed?

Although, when people start mixing quantum physics with philosophy/theology and the like, it often leads to bad things.

Indi

The-Nisk wrote:

I'll have to look up what's all the rage with "quantum" is, as it stands I'm shamefuly ignorant on it it would seem.

Well, since you asked. ^_^;

In the mid-1800s, physicists started making real leaps and bounds studying energy. They were getting to the point where they thought they pretty much had physics "solved". But then they got totally stuck. According to theory, if you take a perfect blackbody and heat it up, it should emit light at every frequency - or in plain English, if you take a hunk of steel and heat it up to 2000 degrees, it should glow at all colours equally. The result is that the high frequencies should be emitting at infinite intensity. Of course, that's not what happens in practice. In practice, the hunk of steel will glow a certain colour (red-hot, white-hot... various colours at various temperatures), and the high frequencies are certainly not infinitely intense. Picture intensity plotted against frequency: according to the classical model, the intensity just shoots off into infinity. In reality (the "quantum" curve), it peaks, and then drops.
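To make the difference concrete, here is a tiny numerical sketch comparing the classical (Rayleigh-Jeans) prediction with Planck's law for a body at 2000 K. The constants are rounded and the frequency grid is arbitrary; this is purely illustrative.

```python
import math

h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)
k = 1.381e-23   # Boltzmann constant (J/K)

def rayleigh_jeans(f, T):
    """Classical prediction: grows without bound as frequency increases."""
    return 2 * f**2 * k * T / c**2

def planck(f, T):
    """Planck's law: peaks, then falls off at high frequencies."""
    return (2 * h * f**3 / c**2) / math.expm1(h * f / (k * T))

T = 2000.0  # a hunk of steel at roughly 2000 K
freqs = [n * 1e13 for n in range(1, 101)]  # 10 THz up to 1000 THz
rj = [rayleigh_jeans(f, T) for f in freqs]
pl = [planck(f, T) for f in freqs]

print(rj[-1] > rj[0])   # True: the classical curve just keeps climbing
peak = max(range(len(pl)), key=lambda i: pl[i])
print(0 < peak < len(pl) - 1)  # True: the quantum curve peaks, then drops
```

The classical values diverge toward the ultraviolet end, while the Planck values rise, peak, and fall, exactly the "catastrophe" described above.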

Now this was a major problem for physics - so major they labelled it the "ultraviolet catastrophe". But at the time they still believed they had physics mostly solved, and this was just a detail that wouldn't change the fundamentals.

In 1900, Max Planck said, "screw this" (possibly not his actual words), and instead of trying to explain why the curve was shaped the way it was, he tried to explain what was happening. Classical theory assumed that every frequency had an equal chance of being emitted. Planck said, "obviously not, so what is the probability distribution if it is not all equal probabilities?" He tried to match various distributions... and found a match. He found that if he used a formula that assumed energy was only emitted in whole-number multiples of a basic unit - E = nhf (where h is a constant, f is frequency and n is 1, 2, 3, 4...) - it worked. Now Planck never tried to explain why that was so - he just admitted he had no clue - but it worked.

Five years later was 1905, Einstein's annus mirabilis, in which he published 4 papers that changed the course of physics. 2 were the foundation for relativity, 1 was on Brownian motion (which proved that atoms were real and not hypothetical) and the last was on the photoelectric effect. While studying the relationship between light frequency and voltage in photoelectric materials, Einstein found that they only accepted and emitted light in specific amounts. This led Einstein to hypothesize that light came in discrete packets: photons. Going a step further, he realized that this solved Planck's problem, too. That led to the first great breakthrough in (what would become) quantum mechanics: energy is quantized.

(Just to clear up the mystery of the word "quantized", if you're not familiar with it: when a value is quantized, that means it can only take up specific values, not just any value. For example, a crowd of people is quantized, because you can have a crowd of 100 or a crowd of 101 people... never a crowd of 100.326 people. Energy is quantized because you can emit 1 photon, or 2 photons... never 1.4 photons.)
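To see what quantization means in code, here is a two-line check (Planck's constant rounded; the frequency is arbitrarily chosen as green light):

```python
import math

h = 6.626e-34   # Planck's constant (J*s)
f = 5.0e14      # frequency of green light (Hz)

# Allowed emitted energies come only in whole-photon lumps of h*f:
allowed = [n * h * f for n in (1, 2, 3, 4)]

# "1.4 photons" of energy is not on the list:
target = 1.4 * h * f
print(any(math.isclose(target, e) for e in allowed))  # False
```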

To give you an idea of how controversial this was: when Einstein was awarded his Nobel prize, the citation credited his discovery of the law of the photoelectric effect, but pointedly avoided endorsing the photon hypothesis itself. ^_^;

The next piece of the puzzle came from Niels Bohr. He created a model of the atom that was a kind of intermediary step between classical and quantum. You probably learned about the Rutherford model of the atom in high school physics, where negatively charged electrons orbit around a positively charged nucleus rather like planetary motion. The problem is... this model doesn't work. Whenever an electric charge is in motion, it radiates electromagnetic radiation (according to Maxwell's laws). But if the electron is constantly radiating EM radiation - which is energy - while orbiting the nucleus, the orbit would eventually decay. All atoms would collapse. Clearly... this is a problem.

Bohr created a model where electrons could only orbit in specific orbits - in other words, the orbit is quantized. To jump from one orbit to another took a fixed, specific amount of energy - Bohr refused to admit this implied photons (he didn't believe in them)... but it sure sounds like it, doesn't it?

The final piece of the puzzle was supplied by a man named Louis de Broglie. Since it was widely held at the time that light was a wave (from Maxwell's equations, for example), a photon must be both a particle and a wave. That duality was the reason Einstein's photons were so highly controversial. de Broglie was the guy who - depending on your view - solved the problem, or made it worse. de Broglie argued that not only photons, but all matter, was really wave-like in nature. He even drafted the equation to give the wavelength for any amount of matter.
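de Broglie's relation is simply λ = h/(mv). A quick sketch shows why wave behaviour matters for electrons but is invisible for everyday objects (rounded constants, illustrative speeds):

```python
h = 6.626e-34  # Planck's constant (J*s)

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength of any moving lump of matter: lambda = h / (m * v)."""
    return h / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.109e-31, 2.2e6)  # electron at atomic speeds
baseball = de_broglie_wavelength(0.145, 40.0)       # a thrown baseball

print(electron)  # ~3.3e-10 m: atomic scale, so wave behaviour dominates
print(baseball)  # ~1e-34 m: absurdly small, so it looks like a particle
```

The electron's wavelength is comparable to the size of an atom; the baseball's is some twenty-five orders of magnitude smaller than the baseball, which is why we never notice matter waves in everyday life.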

Now if you're not seeing how all of this fits together, here's the key!

All matter is not tiny spheres; it is actually wave packets - short bursts of wave, large in the middle and dying away at both ends.
You can see how that can kinda seem like a particle when viewed from the side - it is sort of contained. Because they're not tiny spheres, particles don't actually bounce off of each other - the waves interfere.

The reason why electrons can only travel in certain orbits is that those orbits are the only ones where the orbital length is a multiple of the wavelength. In other words, if you take the orbital path and unfold it into a straight line, it will fit one complete electron wave (or two, or three, etc.). This is the only way the wave pattern will stay stable, because if the path were a different length, the wave would interfere with itself on a second pass around.
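For the curious, that "whole number of wavelengths" condition can be checked numerically. The sketch below combines the classical circular-orbit force balance with the de Broglie wavelength (rounded constants; this is the Bohr picture, not full quantum mechanics):

```python
import math

h = 6.626e-34    # Planck's constant (J*s)
m_e = 9.109e-31  # electron mass (kg)
q = 1.602e-19    # elementary charge (C)
k_e = 8.988e9    # Coulomb constant (N*m^2/C^2)

# For a circular orbit, the Coulomb force supplies the centripetal force,
# so the speed at radius r is v = sqrt(k_e * q^2 / (m_e * r)).
# The orbit is stable only when the circumference holds a whole number
# of de Broglie wavelengths: 2*pi*r = n * (h / (m_e * v)).
fits = {}
for n in (1, 2, 3):
    r = n**2 * h**2 / (4 * math.pi**2 * m_e * k_e * q**2)  # Bohr radii
    v = math.sqrt(k_e * q**2 / (m_e * r))
    fits[n] = 2 * math.pi * r / (h / (m_e * v))  # wavelengths around the orbit

print(fits)  # each orbit holds a whole number of wavelengths: ~1.0, ~2.0, ~3.0
```

The n = 1 radius works out to about 5.3e-11 m, the familiar Bohr radius of hydrogen.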

All matter interactions become wave functions - not mechanical functions like balls bouncing off of each other. And because they are wave functions, you get neat effects like tunnelling (the wave passes through another wave intact), entanglement (the waves become codependent), and so on. And because matter is waves, it does not exist at a single point as it would if it were a tiny ball - it can exist over an area of space - a cloud (it can appear to be in multiple places at once).
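As a rough illustration of tunnelling, the standard wide-barrier approximation T ≈ e^(-2κa), with κ = sqrt(2m(V₀−E))/ħ, gives a feel for the numbers (rounded constants; the textbook rectangular-barrier case):

```python
import math

hbar = 1.0546e-34  # reduced Planck constant (J*s)
m = 9.109e-31      # electron mass (kg)
eV = 1.602e-19     # one electron-volt in joules

def tunnel_probability(E, V0, width):
    """Approximate transmission through a rectangular barrier (E < V0)."""
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar
    return math.exp(-2 * kappa * width)

# An electron with 1 eV of energy hitting a 2 eV barrier 0.5 nm wide:
p = tunnel_probability(1 * eV, 2 * eV, 0.5e-9)
print(p)  # ~0.006: a few in a thousand get through - classically impossible
```

Classically the electron simply cannot cross a barrier taller than its energy; the wave picture gives it a small but real chance.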

Quantum physics can be difficult to grasp because we don't think in waves - we can easily picture tiny balls bouncing around, but wave forms interacting... it gets tricky. To make things worse, the wave nature of matter leads to wackiness like electrons being in multiple places at once, or electrons travelling from point A to point C without ever crossing point B between them, and so on. These things are easy to see from the math... but the math is gross. Unless you want to solve the Schrödinger equation (and trust me, you don't ^_^), you just have to trust that things that make no sense to you are really happening at the atomic scale.

ocalhoun wrote:

(As a side note, It'll help you understand that there really is such a thing as a 'maybe', that not everything is a 'yes' or a 'no'... You seemed to have a problem with that on another thread.)

That is not a particularly rational interpretation of QM. QM works in probabilities, but that doesn't imply uncertainty in the layperson sense. Consider a coin flip, which can be heads or tails, and call heads "yes" and tails "no". Before the coin is flipped you don't know whether you're getting "yes" or "no", so if someone asks if you are going to get "yes", you can say "maybe". But that doesn't mean that there is no "yes" or "no", it just means you don't know whether it will be "yes" or "no" yet. Once the coin is flipped, it will be either "yes" or "no"... not "maybe".

Similarly in QM, there is only uncertainty until interaction. Once there is interaction, all you have is "yes" or "no".

So, really, there is no "maybe" in QM. There is only "we don't know yet", but eventually it will boil down to either "yes" or "no".
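The coin-flip analogy above, as a toy sketch (with `random.choice` standing in for the flip; purely illustrative):

```python
import random

random.seed(0)

outcome = None          # before the flip: no answer yet, only probabilities
print(outcome is None)  # True - this is the honest "we don't know yet"

outcome = random.choice(["yes", "no"])  # the flip: a definite result appears
print(outcome in ("yes", "no"))  # True - afterwards there is only yes or no
```

Before the choice is made you cannot predict the result, but the result itself is never "maybe".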

deanhills

Indi wrote:

So, really, there is no "maybe" in QM. There is only "we don't know yet", but eventually it will boil down to either "yes" or "no".

Thanks for the posting Indi. I learned a lot. Think the only part that confuses me is the last reference to "maybe". If you have a theory and you are not sure whether it is "yes" or "no", wouldn't it then be a maybe "yes" or maybe "no", until you have the one or the other?

Arnie

Solving the time-independent Schrödinger equation isn't that bad and can actually be quite insightful. (And no, you cannot divide both sides by psi because H is a Hamiltonian operator, not a constant.)

Analytically you should use simple models or atoms with Born-Oppenheimer, but numerically there are a lot of possibilities. This is the basis of quantum chemical calculations. Of course things change when it comes to the time-dependent version or the relativistic Dirac equation...
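For instance, the time-independent equation for the simplest textbook case - an electron in a 1 nm infinite square well - can be solved with a few lines of shooting-method code. This is an illustrative sketch (simple fixed-step integration, rounded constants, an arbitrary energy scan grid), not how real quantum-chemical codes work:

```python
import math

hbar = 1.0546e-34  # reduced Planck constant (J*s)
m = 9.109e-31      # electron mass (kg)
L = 1e-9           # 1 nm box
eV = 1.602e-19     # one electron-volt in joules

def psi_at_L(E, steps=2000):
    """Integrate psi'' = -(2mE/hbar^2) psi across the box (shooting method)."""
    k2 = 2 * m * E / hbar**2
    dx = L / steps
    psi, dpsi = 0.0, 1.0   # boundary condition psi(0) = 0, arbitrary slope
    for _ in range(steps):
        psi += dpsi * dx
        dpsi += -k2 * psi * dx
    return psi

# Scan energies; the allowed levels are where psi(L) crosses zero,
# i.e. where the other boundary condition psi(L) = 0 is satisfied.
energies = []
E = 0.01 * eV
prev = psi_at_L(E)
while len(energies) < 3:
    E += 0.01 * eV
    cur = psi_at_L(E)
    if prev * cur < 0:          # sign change: an eigenvalue lies here
        energies.append(E / eV)
    prev = cur

print(energies)  # roughly 0.38, 1.5, 3.4 eV - scaling like 1 : 4 : 9
```

The energies come out quantized, and in the ratio 1 : 4 : 9, matching the analytic result E_n ∝ n² for the infinite well.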

Indi

deanhills wrote:

Indi wrote:

So, really, there is no "maybe" in QM. There is only "we don't know yet", but eventually it will boil down to either "yes" or "no".

Thanks for the posting Indi. I learned a lot. Think the only part that confuses me is the last reference to "maybe". If you have a theory and you are not sure whether it is "yes" or "no", wouldn't it then be a maybe "yes" or maybe "no", until you have the one or the other?

No, it is a yes or no period. You just don't know which yet.

There is a vast difference between there being a concrete answer and not knowing it yet, and there being no concrete answer at all. ocalhoun said that QM implies that not everything is "yes or no". That is false. Everything is yes or no in QM. The only thing novel about QM is that unlike all previous scientific theories that said it is possible to know whether it is yes or no in advance, QM says that is not possible. You have to wait for the wavefunction to collapse before you have your yes or no... but you do have a yes or no. You can't predict which of yes or no it will be... but it will be yes or no. There is no maybe.

This is a problem of equivocation. "Maybe" can mean "will the answer be yes: maybe", or it can mean "will the answer be yes or no... or will it be maybe (neither yes or no)". You are using it in the first sense, ocalhoun was trying to use it in the second. He was trying to claim that QM denies absolutes, which is absolutely false. If anything, of the two fundamental scientific theories in modern physics, general relativity denies absolutes, while QM assumes them.

The-Nisk

Indi wrote:

Well, since you asked. ^_^; [...] Unless you want to solve the Schrödinger equation (and trust me, you don't ^_^), you just have to trust that things that make no sense to you are really happening at the atomic scale.

Indi, thank you so much for taking some of your time to do this. If it weren't for this, I doubt I would have understood this concept as well as I did, since most books tend to involve the grotesque maths you mentioned within the very early chapters, and I doubt my maths skills are sufficient to understand all those equations.

Nevertheless, I have to say that the first paragraph somewhat threw me off when you said that "if you take a perfect blackbody and heat it up, it should emit any frequency". At first I was unfamiliar with the term blackbody, but that wasn't much of a problem; the "any frequency" bit was, since from high-school chemistry I learned that electrons only occupy certain spaces (orbitals) and can emit fixed frequencies (by falling from higher energy levels to lower ones), but the course failed to explain why this is. So again, thank you, Indi, for explaining what the crappy school education system failed to.

I knew that everything travelled as a wave... but the fact (it is a fact, right?) that this was meant literally and everything is a wave - well, it's shocking and amusing, but at the same time it sort of fills gaps that I knew, and at the same time didn't know, were there (things now add up).

ocalhoun wrote:

^Yes indeed, you should do some learning about quantum physics... Not only will it help you understand some other things better, it is a very interesting and counter intuitive topic in its own right.
(As a side note, It'll help you understand that there really is such a thing as a 'maybe', that not everything is a 'yes' or a 'no'... You seemed to have a problem with that on another thread.)

I simply apply the knowledge I have accumulated to date to make sense of the universe I happen to inhabit (and everything, really). I do computer science in university; the thread you mentioned was sparked by what I have learned about logic and intelligence, etc. If I attain new information, I will of course change "my point of view" (in the thread you mention, I simply expressed an idea, I never claimed authorship of it - if you are confused as to what I mean, look up Death Of The Author by Roland Barthes, which touches on the basics of what I mean). Hence "my point of view" is forever dynamic: where I stand today is not where I'll stand tomorrow.

That said, I want to learn more about quantum physics. However, to hopefully make things easier for anyone who tries to explain things to me (and for me to understand what they say), I will have to state what I already know (to some extent), at the risk of creating the impression that I am boasting about it. In middle school (it's not the actual education system in Ireland, but it's the closest equivalent) I did general science and technology (which I was very bad at); in high school I did Physics and Chemistry (I'm only going to mention subjects I think relevant to this discussion). I have a sufficient understanding of chemistry (so you could use some examples, I guess) and a moderate knowledge of physics - I know the general principles of mechanics and nuclear physics and so on (since I think school education offered what may be referred to as an overview).

However, I have had (and still have) some problems with the whole 'waves' concept; that is, I don't fully understand it. This could be because, basically, I have to imagine what I'm trying to learn in my head - give it a dimension, a picture (a pattern I could remember, if you are to believe what intelligence is defined by). The Bohr model, for example: in its basics I was quite able to simulate it in my mind, hence I had little problem with it. In the case of waves, I have some difficulty imagining them in perspective, hence I have difficulties with most things associated with them. Nevertheless, I have hardcoded some knowledge of them, especially regarding light. What I'm trying to say is: I know about "waves" but I don't fully understand "waves". Oh, and in college I do Computer Science, etc. I hope me writing this serves a purpose other than me sounding stupid in my own mind.

Now then, here are some thoughts I had when I read what Indi posted:
1) From what you said about all matter actually being wave packets, does that mean gravity can be explained in terms of matter (being the distortion) distorting space-time (that line/wave thingy the matter/distortion is made up of)? Although, wait - this is quantum, not relativity or Hawking's theory. Does that mean space-time is an invalid concept? Is it strings, the fabric of the universe - what would be the accepted term/concept?
2) You mentioned "travelling from point A to point C without ever crossing point B between them", and I made a connection (I'm sure it's a false one, but my mind made it, so I thought it worth a mention) to wormholes, but in a different way, somehow. Does that bear any truth in concept/principle, or is it perhaps a good example (for the weak minded)?
3) In your reply to ocalhoun you mentioned a concept which I understood as: 'all is probabilities until the event takes place, then it becomes a certainty'. So in a vague way quantum mechanics (I'm using this term quite blindly) describes, to some extent, evolution - where, as I understand it, every possibility is played out and "natural selection" picks the 'winning numbers'. Is there any truth in that (while possibly running the risk of being vague)?
4) But more importantly (well...), as a follow-up to question 3: that yes-or-no period (before the event) you mentioned, Indi - is that what gives rise to the MWI (Many-Worlds Interpretation)? Or is there more to it?

Again thank you for decreasing my ignorance.

As a curious (a.k.a. irrelevant to this thread) side note:
When I read that matter is a wave, it made me think that we are a rather curious entanglement of wavelengths (or how would you describe it?), which in turn made me think of holograms (but then again, I don't know much about those), which then made me think of an article I read in a science magazine (could be New Scientist or American Scientist, I dunno) which claimed we could all be a projection (a hologram) from the 'shell' (for lack of a better word) or outside (or side) of the universe. This in turn made me think of simulated reality (which "incidentally" we happen to have covered in the last few CS lectures), then the Matrix, etc. So basically I felt like a work of sci-fi, LOL. Ah, the power of imagination.

deanhills

Well, all I can say is that you got my imagination going and I enjoyed your posting very much! Must be something in the genes of the Irish. I'm busy with three books by Richard Dawkins, but am definitely interested in Death Of The Author by Roland Barthes. I have always stood by this: today's truth is tomorrow's falsehood. Though I think the way you put it is more accurate and scientific:

The-Nisk wrote:

hence "my point of view" is forever dynamic, where I stand today is not where I'll stand tomorrow.

The next thing that caught my imagination was your last paragraph, about a hologram. I'm totally fascinated by this. I also think there is a good chance of one day projecting yourself, i.e. your girlfriend in Sydney may get a mini hologram of you on her table, after you have transported yourself to her in a special sci-fi way. Probably scary from a privacy point of view when you start to chat away out of the blue, but it would enhance our standard of living in a great way. But yes, imagination is limitless. I hope people will expand in that direction so that one day we will not need TVs or LCD screens anymore, nor all those cables.

The-Nisk wrote:

When I read that matter is a wave, it made me think that we are a rather curious entanglement of wavelengths (or how would you describe it?), which in turn made me think of holograms (but then again I don't know much about those), which then made me relate it to an article I read in a science magazine (could be New Scientist or American Scientist, I dunno) which claimed we could all be a projection (a hologram) from the 'shell' (for lack of a better word) or outside (or side) of the universe. This in turn made me think of simulated reality (which "incidentally" we happen to cover in the last few CS lectures), then the Matrix, etc. So basically I felt like a work of sci-fi LOL. Ah, the power of imagination.

deanhills

Indi wrote:

deanhills wrote:

Indi wrote:

So, really, there is no "maybe" in QM. There is only "we don't know yet", but eventually it will boil down to either "yes" or "no".

Thanks for the posting Indi. I learned a lot. I think the only part that confuses me is the last reference to "maybe". If you have a theory and you are not sure whether it is "yes" or "no", wouldn't it then be a maybe "yes" or maybe "no", until you have the one or the other?

No, it is a yes or no period. You just don't know which yet.

There is a vast difference between there being a concrete answer and not knowing it yet, and there being no concrete answer at all. ocalhoun said that QM implies that not everything is "yes or no". That is false. Everything is yes or no in QM. The only thing novel about QM is that unlike all previous scientific theories that said it is possible to know whether it is yes or no in advance, QM says that is not possible. You have to wait for the wavefunction to collapse before you have your yes or no... but you do have a yes or no. You can't predict which of yes or no it will be... but it will be yes or no. There is no maybe.

This is a problem of equivocation. "Maybe" can mean "will the answer be yes: maybe", or it can mean "will the answer be yes or no... or will it be maybe (neither yes or no)". You are using it in the first sense, ocalhoun was trying to use it in the second. He was trying to claim that QM denies absolutes, which is absolutely false. If anything, of the two fundamental scientific theories in modern physics, general relativity denies absolutes, while QM assumes them.

Thanks Indi. Enjoyed the posting very much. QM must have limitless possibilities. Almost like creation.

Indi

The-Nisk wrote:

Indi, thank you so much for taking some of your time to do this. If it wasn't for this, I doubt I would've understood this concept as much as I did, since most books have a tendency to involve the grotesque maths you mentioned within the very early chapters, and I doubt my maths skills are sufficient to understand all those equations.

i've had a bit of a notion in the back of my mind to write a math-free introduction to modern physics for a long time. Eventually some math would have to be introduced, sure, but it may be possible to put it off for a while. It's something i've been meaning to talk to Bikerman's people about.

The-Nisk wrote:

Nevertheless, I have to say that the first paragraph somewhat threw me off since you said that "if you take a perfect black-body and heat it up, it should emit any frequency". First off I was unfamiliar with the term black-body, but that wasn't much of a problem; the "any frequency" bit was, since from high-school chemistry I learned that electrons only occupy certain spaces (orbitals) and can emit fixed frequencies (by falling from higher energy levels to lower ones), but the course we covered failed to explain why this is, so again, thank you Indi for explaining what the crappy school education system failed to.

Just to make sure that was clear: that "if you take a perfect black-body and heat it up, it should emit any frequency" was according to classical theory - before quantum mechanics. According to classical theory, if you apply energy to an atom, the electrons should all be excited to various levels - any levels - and then fall back down to produce all different emitted frequencies. There would be no reason why they should only occupy specific orbitals, so they could be anywhere around the nucleus.

What you learned is correct, modern theory. In modern theory - from QM - electrons can only occupy certain orbitals, and thus can only fall through certain fixed energy levels. And thus, only emit fixed frequencies, as you learned.

And honestly, if you actually picked that up in high school chemistry, that really ain't bad at all. ^_^;

The-Nisk wrote:

I knew that everything traveled as a wave... but the fact (it is a fact, right?) that this was meant literally and everything is a wave - well, it's shocking and amusing, but at the same time it sort of fills gaps that I knew and at the same time didn't know were there (things now add up).

Well, this is actually a deep philosophical question, which may surprise you. But physicists - especially theoretical physicists - are quite cautious about differentiating between theoretical constructs and "real things".

For example, atoms were theoretical constructs in a real, modern physics sense since around 1800. By the mid-1800s, using atoms as a theoretical model was well established, and they were doing things like using vibrating atoms to explain heat transfer. But even then, they were not sure whether atoms really existed, or whether they were just a convenient model. It wasn't until 1877 that someone first showed that atoms might be real things, and one of Einstein's annus mirabilis papers in 1905 was what sealed the deal and ended the question completely. In other words, it took about 100 years from when atoms were first introduced as real science, until science accepted as fact that they were real things... not just a handy model.

So it goes with matter waves. de Broglie's initial theory was not actually that matter is waves, but that all matter has an associated wave. Now, is matter really waves? And if so... waves of what (and what do they travel through, if anything)? These questions go outside the realm of the Standard Model (the current state of the art of physics, which covers quantum chromodynamics and the electroweak theory - though notably not gravity). You're starting to get into the realm of next-generation physics - things like M-theory.

So... the truth is right now, modern physics doesn't really make the assumption that matter is actually waves. But there is strong circumstantial evidence that it is true, such as Bertram Brockhouse's Nobel-winning experiments right here with neutron diffraction. Brockhouse essentially fired neutrons (because they are electrically neutral, unlike protons or electrons, and have much shorter wavelengths than the wavelengths of photons at the same energy) through an atomic crystal lattice, and showed that they diffracted according to the spaces between the lattice atoms. In simple terms, imagine a shallow pond of water, with a wall across the middle of it that has regularly spaced holes, and drop a rock in the water on one side. As everyone knows, ripples will spread out in a circle from where the rock was dropped... until those ripples hit the wall. Then some will be reflected, and some will pass through the holes to produce a diffraction pattern, like this:
If neutrons were not waves, they would either hit the wall and bounce back, or pass through one of the holes, possibly being slightly deflected - you would not see a diffraction pattern. But Brockhouse saw a pattern, so the neutrons acted like waves. (Or, possibly, were waves.)

Take a look at this - this is the diffraction pattern you get when you fire electrons through a crystal:
If you can imagine it, the crystal is just a flat plate held in front of a screen, and the electrons are fired at the screen through the plate like a laser. This is the way the electrons scatter. The regular pattern shows that it's diffraction (if electrons were not waves, you would get one bright dot with a haze around it).

Does that mean that electrons are actually waves, or just that they act like waves? Unfortunately, that's beyond current science... but i'd bet they are actually waves.

The-Nisk wrote:

... what I'm trying to say is: I know about "waves" but I don't fully understand "waves".

If you ever get to do QM in university or college, they will start by telling you more about waves than you ever wanted to know. ^_^; In fact, you will usually have an entire course on the mathematical properties of waves as a prerequisite.

But to be truthful... they're actually very simple beasts. There are some specific things you have to know about them to understand them... but not really all that many, and they don't require much math. It is my opinion that if you can get your hands on a program to draw waves given the basic properties (wavelength/frequency, amplitude/height and phase/offset), and do things like add waves together and show you what happens in real time - you could wrap your head around waves well enough to handle QM.
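In fact, the core of such a program takes only a few lines. Here's a minimal Python sketch (my own toy illustration, not any standard physics library) that builds waves from those three properties and superposes them:

```python
import math

def wave(amplitude, frequency, phase):
    """Return a function giving this wave's displacement at time t."""
    return lambda t: amplitude * math.sin(2 * math.pi * frequency * t + phase)

def superpose(*waves):
    """Add waves pointwise - the superposition principle."""
    return lambda t: sum(w(t) for w in waves)

# Two identical waves in phase reinforce each other (constructive interference)...
a = wave(1.0, 1.0, 0.0)
b = wave(1.0, 1.0, 0.0)
both = superpose(a, b)
print(both(0.25))  # peak of the sine at t = 0.25, doubled: 2.0

# ...while a half-cycle phase shift cancels them (destructive interference).
c = wave(1.0, 1.0, math.pi)
cancelled = superpose(a, c)
print(cancelled(0.25))  # ~0.0 (up to floating-point noise)
```

Feed those values into any plotting program and you can literally watch interference happen - which is most of the intuition you need for QM's wave behaviour.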

The-Nisk wrote:

1) From what you said about all matter actually being wave packets, does that mean gravity can be explained in terms of matter (being the distortion) distorting space-time (that line/wave thingy the matter/distortion is made up of - although wait, this is quantum, not relativity or Hawking's theory; does that mean space-time is an invalid concept? Is it strings, the fabric of the universe - what would be the accepted term/concept?)?

QM and QCD (quantum mechanics and quantum chromodynamics) do not have much to say about gravity at all. In fact... they say nothing at all. ^_^; General relativity describes gravity as a distortion of space-time caused by mass... which is then in turn affected by the distortion... but GR doesn't consider the make-up of matter - whether it's waves or whatever - at all.

The Standard Model (TSM, which is like a super-souped up extension of QCD), does have quite a bit to say, but there are still a lot of problems. In particular, TSM and GR are contradictory. String theorists claim they've got the problem licked. But of course, that's next-generation physics. So the currently accepted concept is unknown. ^_^; Depending on what you're doing, you may prefer to think of space-time as a geometry warped by the presence of mass (GR), or as a field of (normally) evenly-spaced Higgs bosons (TSM), or as the "distance" between vibrational modes in a Calabi-Yau manifold (M-theory).

You're right about one thing, actually - in QM, the notion of space-time is totally invalid... which is probably the nastiest problem in all of physics. GR says that space-time is a thing - a smooth geometry that bends and warps in the presence of mass. QM says no, space-time is not a thing, it is simply the 4-D "distance" between two wave interactions, and it is not smooth at all, but extremely chaotic. GR says the laws of the universe are the same no matter what your reference is, so no one is "right" (everything is relative). QM says no, there is an absolute frame of reference. Getting these two theories to agree has been what has occupied physics for the last 100 years. Some of the next-generation theories show incredible promise... but we're not there yet.

The-Nisk wrote:

2) You mentioned "travelling from point A to point C without ever crossing point B between them", and I made a connection (I'm sure it's a false one, but my mind made it so I thought it's worth a mention) to wormholes, but in a different way, somehow. Does that bear any truth in concept/principles, or perhaps it is a good example (for the weak minded)?

No, and this is why people often say QM is "weird".

Take a piece of paper with a grid on it, like squared paper. Now, put the point of your pencil in one square, and without lifting the pencil from the paper, try to move to another grid square without crossing a grid line.

Impossible to do, of course. Your natural response is to "cheat" - to lift your pencil point off of the paper, and put it in another grid square somewhere else. That's effectively what a wormhole does in this case (although, more accurately, a wormhole would probably be better modelled by bending the paper, and then pushing your pencil point right through it to go from one grid square to another - but even this is travelling through the third dimension... it's just travelling a shorter distance because space is warped).

But this isn't what actually happens. That would require the existence of a higher dimension that the pencil point can move through (in the case of the two dimensional paper, the third dimension you get by lifting the pencil point up, or pushing it through).

It turns out the electron literally does blip out at point A, and blip back in instantaneously at point C... without ever passing through B. It doesn't move anywhere. It just is at A... and then it is at C - travelling without moving, instantaneously.

And - it gets even weirder - at the same time, the electron is also at point D, far away from either A, B or C!

It is in multiple places at once, popping about instantaneously... this is the wackiness of QM. ^_^; But it's actually real.
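To get a feel for the numbers behind this "blipping through" a barrier, the standard textbook estimate is the WKB transmission probability, T ≈ e^(-2κL). Here's a rough Python sketch of it - the barrier heights and widths are made-up illustrative values, and this is an order-of-magnitude toy, not a serious calculation:

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
M_E = 9.1093837e-31     # electron mass, kg
EV = 1.602176634e-19    # one electronvolt, J

def tunnel_probability(barrier_height_ev, energy_ev, width_m):
    """WKB estimate T ~ exp(-2*kappa*L) for a rectangular barrier."""
    deficit = (barrier_height_ev - energy_ev) * EV  # V - E, in joules
    kappa = math.sqrt(2 * M_E * deficit) / HBAR     # decay constant, 1/m
    return math.exp(-2 * kappa * width_m)

# An electron 1 eV short of the barrier top:
p_thin = tunnel_probability(2.0, 1.0, 0.1e-9)   # 0.1 nm barrier
p_thick = tunnel_probability(2.0, 1.0, 1.0e-9)  # 1 nm barrier
print(p_thin)   # roughly 1 in 3 - atomic-scale gaps are very leaky
print(p_thick)  # tiny - the odds die off exponentially with width
```

That exponential is why you never see tennis balls blip through walls: scale the mass and the width up to everyday sizes and the probability becomes absurdly, unobservably small.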

The-Nisk wrote:

3) In your reply to ocalhoun you mentioned a concept which I understood as: 'all is probabilities until the event takes place, then it becomes a certainty'. So in a vague way quantum mechanics (I'm using this term quite blindly) describes, to some extent, evolution - where, as I understand it, every (is it?) possibility is played out and "natural selection" picks the 'winning numbers'. Is there any truth in that (while possibly running the risk of being vague)?

That's a really interesting question that hints at an understanding of evolution that most people don't have. To consider it will also require a look at one of the most infamous experiments in modern physics: the quantum double slit experiment.

You probably know of the double slit experiment, where a wave - usually light - is projected through two slits onto a screen, producing an interference pattern. As i mentioned above, you get the same effect with electrons and matter in general.

But here's where it gets weird.

If you put a detector at one slit - so you know which slit each electron goes through - the interference pattern disappears, and electrons no longer behave like waves; they behave like particles going through two slits, producing two blobs (or a single merged blob) on the screen.

What is happening is the electrons do not travel like particles, or even waves, at all. Instead, they travel through every possible path at the same time, until they entangle with something else. If it's the screen, a single electron travels through both slits before the electron's wave entangles with the screen. Thus, you get interference. But if it's the detector at the slit, the electron's wave entangles with the detector, and then goes on from there to entangle again with the screen. Because it has been entangled at one slit, it won't pass through both slits. Thus, no interference.
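You can actually see the difference between "amplitudes add" (no detector) and "probabilities add" (detector at a slit) numerically. Here's a small Python sketch - idealized point slits in arbitrary units, my own toy model rather than any real apparatus:

```python
import cmath, math

WAVELENGTH = 1.0     # arbitrary units
SLIT_SEP = 5.0       # distance between the two slits
SCREEN_DIST = 100.0  # distance from slits to screen

def amplitude(x, slit_y):
    """Complex amplitude at screen position x from one idealized point slit."""
    path = math.hypot(x - slit_y, SCREEN_DIST)
    return cmath.exp(2j * math.pi * path / WAVELENGTH) / path

def intensity(x, which_path_known):
    a1 = amplitude(x, +SLIT_SEP / 2)
    a2 = amplitude(x, -SLIT_SEP / 2)
    if which_path_known:
        # Detector at a slit: probabilities add - no interference.
        return abs(a1) ** 2 + abs(a2) ** 2
    # No detector: amplitudes add first - interference fringes appear.
    return abs(a1 + a2) ** 2

xs = [i * 0.5 for i in range(-40, 41)]
fringes = [intensity(x, False) for x in xs]  # bright/dark bands
blob = [intensity(x, True) for x in xs]      # one smooth lump
print(max(fringes) > max(blob))  # bright fringes overshoot the blob: True
print(min(fringes) < min(blob))  # dark fringes drop almost to zero: True
```

The bright fringes are up to twice as intense as anything the "detected" version produces, and the dark fringes are nearly black - the signature of amplitudes cancelling rather than probabilities piling up.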

So, what happens in QM is a particle travels through all possible paths, simultaneously, to where it entangles with something.

In evolution, an evolutionary advancement (note: when i say advancement, i mean advancement of time, not the species - evolution is change, not necessarily advancement) takes all paths (theoretically - in practice, limited by the size of the species), but not all paths are fruitful. More than one may be, though.

So, if you were going to draw the path of the electron in QM, you would mark two points - the source and the detector, both points where entanglement happens - then draw a bunch of random lines connecting those two points. Those are all the paths that the electron follows.

If you were to draw the path of an evolutionary advancement, you would mark a start point, then draw a bunch of lines spreading out from that start point. Most would dead end, but at the end of some of those lines you would mark an end point.

It would be an interesting comparison, i think.

The-Nisk wrote:

4) But more importantly (well...), as a follow-up to and exploration of question 3 above: that yes-or-no period (before the event) you mentioned, Indi - is that what gives rise to MWI (the Many-Worlds Interpretation)? Or is there more?

Well, in the context above, if you put a detector at one slit, the electron will be entangled there. But if the electron travels every path between the source and the slits, how do you know whether a single electron will be entangled at that slit or not (in which case it could have gone through the other slit, or neither)? After you detect it, you know what the electron did - either it went through the slit with the detector or it didn't. But before you detect it, it is in a state of superposition, where it could go either way and you can't say which. It has the potential to do both things.

What does this state of superposition mean? This is the subject of much speculation. Most physicists will just shrug and say "who cares? Shut up and calculate." But some have tried to explain it.

MWI is one such explanation (actually, several such explanations ^_^; MWI is a messy topic - the creator quit physics right after he made it, got rich doing other stuff, then went a little loopy). In MWI, the electron actually takes both options, each in separate universes.

Of course, this creates a little problem with the quantum slit experiment, which shows the electron going through two different slits at the same time in one universe (this one). MWI proponents argue this is because "universes interfere" in some unspecified way.

If you want to choose an interpretation of superposition, it's pretty much just a matter of choice. You can choose MWI, or whatever interpretation grabs your fancy. In real physics, though, no one really cares. They just take superposition as what it is - the electron actually does both things (or in the case of Schrödinger's cat, the cat is both dead and alive) - and leave it at that.

For now anyway. Next-generation physics will probably either answer the question, or make it irrelevant.

The-Nisk

It's been a while but I've returned to this wonderful thread, hopefully armed with greater understanding & new skills - I decided to take up Calculus at uni to aid my previously limping maths side.

Indi, you mentioned the tendency to differentiate between the real and the theoretical, but say we don't care about being too serious and 'imagine' the whole universe is a computer simulation/projection - then we sort of get rid of the 'real' and we can calculate (and also get excommunicated from the church for blasphemy)? Since it would be a mathematical system which could use the NFA to govern things like electron movement and so on? It would also explain why the universe is so damn computable.

My apologies for throwing ideas/concepts at you, Indi, and expecting you to make sense of them for me! =]

Indi

Oh, it's no problem. This is how science forums should work!

Your question is a doozy, though - in the sense that i understand what you're getting at, but there is no easy answer.

Let's say you're right, and the universe is either a giant computer running a program, or the universe itself is a program running on some giant-er computer: the thing is, there are fundamental limits in computation that make that a difficult stand to take. For example, the halting problem: if the universe is being computed, then certain conditions would probably induce singularities that set off "infinite loops"... but which ones? There's no way to know. There is also the problem of infinities, in that several things in our universe are infinite... which would require an infinitely large computational system to do the number crunching.

i suspect - and this is getting beyond the boundaries of science here - that trying to model the universe en masse as a thing that is contained/understood by anything creates philosophical problems that can't be solved. i think you should think of it the other way: by taking a cue from Darwin.

You see, the real reason Darwin is considered one of the most important men in modern thinking is not because of his theory of evolution per se (which had actually been superseded even before the 1920s), it is because of the conceptual change he brought to thinking in general: that simple rules with just a little bit of random input can lead to infinite complexity.

Instead of trying to think of the entire universe as one massive computation or simulation that "something" has to compute or simulate, think of the universe as a tiny collection of "rules", fed by quantum randomness. All of the complexity and order of the universe arises from that simple grounding - with infinite extent and infinite variation - and cannot be predicted (because of the random input) or understood (because of the infinities) or computed (because of both).

In other words, don't think of the universe as a massive computer program being run in some cosmic RAM somewhere; think of it as a cosmic-scaled game of Conway's Life: just a handful of simple rules with a teeny bit of randomness (very rarely, pixels light up or die on their own), which explodes into structure and diversity on an infinite scale that can neither be predicted nor even understood by any finite mind.
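If you've never played with Conway's Life, it's worth ten minutes. The entire "physics" of that toy universe fits in a few lines of Python - here's a minimal sketch of the standard rules (without the rare random flips i mentioned):

```python
from collections import Counter

def step(live):
    """One generation of Conway's Life; `live` is a set of (x, y) cells."""
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next turn if it has exactly 3 live neighbours,
    # or if it has 2 and is already alive. That's the whole rulebook.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row flip between horizontal and vertical forever.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))           # [(1, 0), (1, 1), (1, 2)] - now vertical
print(step(step(blinker)) == blinker)  # period 2: back where it started - True
```

Two rules, and yet from the right starting patterns you get gliders, oscillators, even patterns that compute - which is exactly the "simple rules, endless complexity" point.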

The-Nisk

Indi, if you're saying that if we take the universe to be simulated - only simulated and not computed, or in other words some basic laws exist and aside from that the contents are left to fend for themselves and interact whichever way they are allowed to according to these laws, so that the 'system' doesn't know nor care what is happening and is only concerned with keeping up the basic laws - in that case we are of a similar mindset. If not, please clarify.

I believe Stephen Hawking was a big believer in the idea that if we have a big enough computer we could predict what will happen next. I found that idea a bit too... simple, I guess is the word, to have much weight behind it. I just think the universe, which does seem to function as a system, computer-simulated or not, is rather complex - or rather, has many, many simple rules that combined give rise to complexity.

But this leads me onto the topic of 'simulation', forgetting the universe for a second and changing magnification - in programming/software engineering people always talk of virtual machines and simulation, but I really think they have a bad idea of what simulating something means. It's hard to explain. Now, I'm no expert in this field, but from what I know the 'simulating' they actually do is more a 'cheat' that produces similar results, and not actual simulation. Of course I have some consideration for counter-arguments to that, such as the argument that if we were simulating the system completely, quicker than in real time, it would be impossible/inefficient/infeasible (if we take whatever is simulating the universe to be the most powerful computer that can exist - although that's not necessary, just an idea). It's a very messy idea/concept.

And since I mentioned arrogance where coding is concerned I might as well mention this - sloppy coding should be punished! I mean it's okay when you're learning to have extra variables flooding your scripts and could-be-better functions/methods, but once you get to grips with whatever language you're using it should be punished! There should be a global points system for all programmers, and people like that should have their points reduced!

Could I hear more about Darwin and his ideas and why it is obsolete? I admit to never actually reading any physical material about it - all I know about it and evolution was constructed out of some very vague word-of-mouth & my own ideas/understanding of how it must all work. Also, if there are more quantum-mechanics primers - I welcome thee! (I think I just caught myself trying to steer the conversation away from biology - my weakest field; to counter that - anyone wish to explain some biology/genetics?)

Note: the 4th paragraph should be considered semi-seriously or from both, the serious and the joker perspective =]

metalfreek

Even though I am a student of physics, I have yet to study string theory. All the information shared here is really informative - keep it coming. I am learning a lot.

Indi

The-Nisk wrote:

Indi, if you're saying that if we take the universe to be simulated - only simulated and not computed, or in other words some basic laws exist and aside from that the contents are left to fend for themselves and interact whichever way they are allowed to according to these laws, so that the 'system' doesn't know nor care what is happening and is only concerned with keeping up the basic laws - in that case we are of a similar mindset. If not, please clarify.

That's pretty much it, only i wouldn't use the word "simulated", because "simulated" implies:

A "simulator";

A "real" object, which the simulation is trying to be a model of;

Someone who is observing the simulation to learn something.

i don't know what word i would use though: picking a word for something can be a tricky game.

Fun fact (since Darwin came up): Darwin didn't want to use the word "evolution" to describe his theory of descent with modification. i'd have to look this up to confirm it, but i seem to recall reading that he never uses "evolution" once in Origin of Species. Why? Because "evolution" comes from the Latin word "evolvere", which means to unfold, or unroll (for example, a scroll). "Evolution" implies there is a predetermined pattern that is being discovered bit-by-bit, when Darwin wanted to make clear that there is no "plan" in evolution. Even today, you can still find people who don't understand that; so Darwin was right to be wary.

The-Nisk wrote:

I believe Stephen Hawking was a big believer in the idea that if we have a big enough computer we could predict what will happen next. I found that idea a bit too... simple, I guess is the word, to have much weight behind it. I just think the universe, which does seem to function as a system, computer-simulated or not, is rather complex - or rather, has many, many simple rules that combined give rise to complexity.

Maybe not Hawking, but definitely Einstein was a die-hard computationalist (hence: "God does not play dice..."). Hawking can be a little hard to pin down, because as vague as Einstein was, Hawking is positively opaque. It's easy to never be wrong if you never make a clear statement. (But even then, Hawking hasn't done so hot. ^_^; Every scientific bet he's ever made that has been settled so far, he's lost.)

The-Nisk wrote:

But this leads me onto the topic of 'simulation', forgetting the universe for a second and changing magnification - in programming/software engineering people always talk of virtual machines and simulation, but I really think they have a bad idea of what simulating something means. It's hard to explain. Now, I'm no expert in this field, but from what I know the 'simulating' they actually do is more a 'cheat' that produces similar results, and not actual simulation. Of course I have some consideration for counter-arguments to that, such as the argument that if we were simulating the system completely, quicker than in real time, it would be impossible/inefficient/infeasible (if we take whatever is simulating the universe to be the most powerful computer that can exist - although that's not necessary, just an idea). It's a very messy idea/concept.

Yes, in the ideal sense, simulating something means figuring out the rules that describe the operation of a system, and building a model that works on the same rules. Good in theory, impossible in practice, for a couple of reasons. First, we often don't know the rules, so we have to guess. Second, real systems can often be so complex - even if they have simple rules - that they're just impossible to model competently. (For a realistic example, consider three-body motion: it's just three point masses orbiting each other under simple Newton/Kepler laws... but it has no general analytic solution. Very, very simple laws can quite quickly lead to very, very complicated systems.)

To get around that, the easiest way is to fudge it: use simpler rules, worry about "good enough" rather than accurate, and only simulate those parts of the system necessary to get the results you care about. For example, when simulating a roll of a die, you don't simulate the inertia of the die's mass, the friction of its surfaces as it tumbles or the hardness coefficients of the die and the rolling surface - you just pick a (pseudo-)random number between 1 and 6 inclusive (assuming a standard die), and that's all you need, most of the time. That's what you call a "cheat", which it is - but if you don't care about the physics of a die roll (you're only interested in the result of the roll), then it's "good enough".
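The die-roll "cheat" really is just one line, and you can check that it's statistically "good enough". A quick Python sketch:

```python
import random
from collections import Counter

def roll_die():
    """The 'cheat' simulation: skip all the physics, keep the statistics."""
    return random.randint(1, 6)

random.seed(42)  # fixed seed so the run is repeatable
counts = Counter(roll_die() for _ in range(60_000))
# Each face should come up roughly 10,000 times. No inertia, no friction,
# no bouncing - but if all you care about is the outcome, you can't tell.
print(all(9_000 < counts[face] < 11_000 for face in range(1, 7)))  # True
```

From the result alone, this one-liner is indistinguishable from a full rigid-body physics simulation of a tumbling cube - which is precisely why the cheat is acceptable.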

Sometimes you really do "true" simulation (whenever you can get away with it, for sure), but you cheat when you have to.

Now, for the case of the universe, you've got a bit of a pickle. If you are able to simulate the operation of the universe in real time... then you're not really simulating the universe. ^_^; You're either cheating, or simply wrong. Why? Because if you really were running a simulation of the universe on a computer within the universe, then the real universe must be running itself... plus the simulation you're running, because it's within the universe. Your simulation, therefore, must be doing the same thing: it's simulating the universe, which has within it a simulated computer running a simulation of the simulated universe. And that computer with the simulated simulated universe has within it a simulated simulated computer, running a simulation of the simulated universe simulation... etc. etc..

What that means is that if the universe is computable, then it must be impossible to accurately simulate it on any computer within the universe. (Infinities muck this logic up somewhat, of course, as infinities are wont to do.)

The-Nisk wrote:

Could I hear more about Darwin and his ideas and why it is obsolete? I admit to never actually reading any physical material about it - all I know about it and evolution was constructed out of some very vague word-of-mouth & my own ideas/understanding of how it must all work. Also, if there are more quantum-mechanics primers - I welcome thee! (I think I just caught myself trying to steer the conversation away from biology - my weakest field; to counter that - anyone wish to explain some biology/genetics?)

Darwin was actually a lousy biologist. ^_^; In fact, if i recall, he dropped out of medical school to study religion. What he was really gifted at was the classification of species, not biology per se.

The reason Darwin is obsolete is because his work is so archaic. ^_^; He published in the mid 1800s... that's a century-and-a-half ago now, and in scientific time, eons. Darwin never knew about DNA (although his work predicted something like it), and Hubble hadn't even been born - so the idea that there may have been a "beginning" to life on Earth was just a religious philosophical idea that carried over, and the dating was very, very recent (not quite the 6,000 years of Young Earth Creationists - i think the estimates were in the millions of years range - but still quite low).

Darwin didn't believe that natural selection was fast enough to have produced the diversity of life on Earth that was plainly visible, so he included "Lamarckian" evolution in his theory. Lamarckian evolution is the idea that if you (for example) exercise your arms enough, your children will be born with strong arms (or, for a more natural example, the giraffe has a long neck because previous giraffes stretched so hard to get at high leaves). In other words, changes that occur to you during your life - not changes to your genetic structure (which Darwin didn't know about)! - can be passed on.

Darwin also had no concept of mutation; another reason why evolution by natural selection alone seemed so slow to him. All changes in Darwinian theory are based on normal statistical variation in the species.
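The difference mutation makes can be seen in a toy numerical sketch (an invented illustration, not real population genetics - every number and name in it is made up): selection acting only on a population's standing variation can never push a trait beyond its initial spread, while selection plus mutation keeps generating new variants to select from.

```python
import random

def evolve(pop, target, mutation_rate, generations, rng):
    """Toy selection: keep the half of the population closest to `target`,
    refill by copying survivors, sometimes with a small random mutation."""
    for _ in range(generations):
        pop.sort(key=lambda x: abs(x - target))
        survivors = pop[: len(pop) // 2]
        children = []
        for parent in survivors:
            child = parent
            if rng.random() < mutation_rate:
                child += rng.gauss(0, 0.1)  # mutation: brand-new variation
            children.append(child)
        pop = survivors + children
    return pop

rng = random.Random(42)
start = [rng.gauss(0, 0.1) for _ in range(100)]  # standing variation only
no_mut = evolve(start[:], target=5.0, mutation_rate=0.0, generations=200, rng=rng)
with_mut = evolve(start[:], target=5.0, mutation_rate=0.5, generations=200, rng=rng)
# without mutation the population can only shuffle copies of what it started
# with; with mutation it keeps producing new variants and creeps toward 5.0
```

Without mutation, the best value in the final population is exactly the best value in the starting population - selection alone cannot invent anything new.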

There are other differences, too. For example, Darwin believed that when two animals mated, their traits would "blend" together. So if you had a really tall mother and a really short father, you would be of medium height. If your father had blue eyes and your mother had green eyes, you would have blue-green eyes. If your father had dark brown hair and your mother had blond hair, you would have light brown hair. Nowadays we know that's not how it works: some genes are recessive, some are dominant, and you don't get a "blend" of your parents' genes - you get one copy of each gene from your father and one from your mother, and dominance decides which trait actually shows.
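The non-blending picture is easy to sketch in code (a hypothetical toy model - the alleles, the dominance rule, and the names are all invented for illustration): each parent contributes exactly one allele, and dominance, not averaging, decides the visible trait.

```python
import random

def child_genotype(mother, father, rng=random):
    """Each parent passes on exactly one of their two alleles - no blending."""
    return (rng.choice(mother), rng.choice(father))

def eye_colour(genotype):
    """Toy dominance rule: allele 'B' (brown) dominates 'b' (blue)."""
    return "brown" if "B" in genotype else "blue"

# hypothetical parents: heterozygous brown-eyed mother, blue-eyed father
mother, father = ("B", "b"), ("b", "b")
kids = [eye_colour(child_genotype(mother, father)) for _ in range(1000)]
# every child comes out plain brown or plain blue - roughly half and half
# here - never the "blue-brown blend" the blending model would predict
```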

Long after Darwin died, genetics became a field, DNA was discovered, the mechanism of mutations hypothesized, Lamarckism discredited, and the age of the Earth found to be billions of years... and all of it put together in what is called the modern synthesis. That is what modern evolutionary theory looks like. Darwin is obsolete, although many of his basic ideas are still relevant (in fact, some of his basic ideas are taken more seriously now than he took them - like the relevance of natural selection to the process).

The-Nisk wrote:

And since I mentioned arrogance where coding is concerned, I might as well mention this - sloppy coding should be punished! I mean, it's okay when you're learning to have extra variables flooding your scripts and could-be-better functions/methods, but once you get to grips with whatever language you're using, it should be punished! There should be a global points system for all programmers, and people like that should have their points reduced!

If there were a demerits system for bad coding, i'd probably have been put up against a wall and shot by now. ^_^; One of the notes on my whiteboard is from Anna: "calc_theta1_1(), calc_theta1_2(), calc_theta1_3(), jesus christ indi, is it that hard to come up with function names?"

The-Nisk

Indi wrote:

The-Nisk wrote:

Indi, if you're saying that we take the universe to be simulated - only simulated, not computed - in other words, some basic laws exist and beyond that the contents are left to fend for themselves and interact however those laws allow, so that the 'system' neither knows nor cares what is happening and is only concerned with upholding the basic laws - in that case we are of a similar mindset. If not, please clarify.

That's pretty much it, only i wouldn't use the word "simulated", because "simulated" implies:

A "simulator";

A "real" object, which the simulation is trying to be a model of;

Someone who is observing the simulation to learn something.

i don't know what word i would use though: picking a word for something can be a tricky game.

Fun fact (since Darwin came up): Darwin didn't want to use the word "evolution" to describe his theory of descent with modification. i'd have to look this up to confirm it, but i seem to recall reading that he never uses "evolution" once in Origin of Species. Why? Because "evolution" comes from the Latin word "evolvere", which means to unfold, or unroll (for example, a scroll). "Evolution" implies there is a predetermined pattern that is being discovered, bit by bit, when Darwin wanted to make clear that there is no "plan" in evolution. Even today, you can still find people who don't understand that; so Darwin was right to worry.

The-Nisk wrote:

I believe Stephen Hawking was a big believer in the idea that if we had a big enough computer we could predict what will happen next. I found that idea a bit too... simple, I guess is the word, to have much weight behind it. I just think the universe, which does seem to function as a system, computer-simulated or not, is rather complex - or rather, has many, many simple rules that combined give rise to complexity.

Maybe not Hawking, but definitely Einstein was a die-hard computationalist (hence: "God does not play dice..."). Hawking can be a little hard to pin down, because as vague as Einstein was, Hawking is positively obtuse. It's easy to never be wrong if you never make a clear statement. (But even then, Hawking hasn't done so hot. ^_^; Every scientific bet he's ever made that has been settled so far, he's lost.)

This sparked some thoughts in my head that are rather hard to formulate into something coherent, or planned at the very least, so I'm just going to brainstorm it out as I write along. I have no first-hand knowledge of Einstein's work, apart from the basic E=mc^2 computations, but I think he just meant that there's no such thing as random when he uttered that famous quote so many religious activists have tried in vain to use. If I had to put forward my ignorant view of the universe - I would say it is a self-contained simulation, at least in principle, even if it's not simulating something. Think holograms. It's a messy topic... there's a quote:

Quote:

The world is an illusion

I have no problem accepting that idea - letting your imagination run off on you like that is not necessarily a bad thing - but some people would feel offended to have their existence reduced to a hologram, like in a sci-fi novel somewhere. Some would call it blasphemy, either out of love for their imaginary friend or out of their own wounded, self-proclaimed self-importance. I really dislike that kind of arrogance... but back to the topic, or slightly so. We agreed that the universe is a complexity that arose out of the application of some basic laws to many, many 'things', which at times makes it seem entirely random - which it's not. So we have complexity arising out of simplicity. Life is a great example of that... now here's the rhetorical question: is it then accidental or intentional? I'd say neither; it simply 'can' happen, but doesn't necessarily have to, nor necessarily 'will' it.

Indi wrote:

The-Nisk wrote:

But this leads me onto the topic of 'simulation' - forgetting the universe for a second and changing magnification - in programming/software engineering people always talk of virtual machines and simulation, but I really think they have a bad idea of what simulating something means. It's hard to explain. Now, I'm no expert in this field, but from what I know, the 'simulating' they actually do is more a 'cheat' that produces similar results, not actual simulation. Of course I have some consideration for counter-arguments to that, such as the argument that simulating the system completely, quicker than real-time, would be impossible/inefficient/non-feasible (if we take whatever is simulating the universe to be the most powerful computer that can exist - although that's not necessary, just an idea). It's a very messy idea/concept.

Yes, in the ideal sense, simulating something means figuring out the rules that describe the operation of a system, and building a model that works on the same rules. Good in theory, impossible in practice, for a couple of reasons. First, we often don't know the rules, so we have to guess. Second, real systems can often be so complex - even if they have simple rules - that they're just impossible to model competently. (For a realistic example, consider three-body motion: it's just three point masses orbiting each other using simple Newton/Kepler laws... but it's beyond analytic computation. Very, very simple laws can quite quickly lead to very, very complicated systems.)
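To get a feel for "simple laws, intractable system", here is a minimal three-body integrator sketch in Python (a toy: the masses, starting positions, and step size are all invented, and units are arbitrary with G = 1). Since the three-body problem has no general closed-form solution, stepping the equations numerically is the only practical way to "run" it:

```python
# Toy three-body simulation: three point masses under Newtonian gravity.
G = 1.0  # gravitational constant in convenient made-up units

def accelerations(pos, masses):
    """a_i = sum over j != i of G * m_j * (r_j - r_i) / |r_j - r_i|**3"""
    acc = []
    for i, (xi, yi) in enumerate(pos):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(pos):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += G * masses[j] * dx / r3
            ay += G * masses[j] * dy / r3
        acc.append((ax, ay))
    return acc

def step(pos, vel, masses, dt):
    """One velocity-Verlet step - numerical integration, not an exact solution."""
    acc = accelerations(pos, masses)
    pos = [(x + vx * dt + 0.5 * ax * dt * dt,
            y + vy * dt + 0.5 * ay * dt * dt)
           for (x, y), (vx, vy), (ax, ay) in zip(pos, vel, acc)]
    acc2 = accelerations(pos, masses)
    vel = [(vx + 0.5 * (ax + bx) * dt, vy + 0.5 * (ay + by) * dt)
           for (vx, vy), (ax, ay), (bx, by) in zip(vel, acc, acc2)]
    return pos, vel

masses = [1.0, 1.0, 1.0]
pos = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8)]  # a rough triangle
vel = [(0.0, 0.0)] * 3                      # starting from rest
for _ in range(100):
    pos, vel = step(pos, vel, masses, dt=0.001)
```

The point is that this is all there is to the laws - a dozen lines of arithmetic - yet the long-term trajectories are chaotic, so numerical stepping (with its accumulating error) is the best anyone can do.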

To get around that, the easiest way is to fudge it: use simpler rules, worry about "good enough" rather than accurate, and only simulate those parts of the system necessary to get the results you care about. For example, when simulating a roll of a die, you don't simulate the inertia of the die's mass, the friction of its surfaces as it tumbles or the hardness coefficients of the die and the rolling surface - you just pick a (pseudo-)random number between 1 and 6 inclusive (assuming a standard die), and that's all you need, most of the time. That's what you call a "cheat", which it is - but if you don't care about the physics of a die roll (you're only interested in the result of the roll), then it's "good enough".
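The die-roll "cheat" described above is essentially a one-liner (sketch only; the function name is made up):

```python
import random

def roll_die(rng=random):
    """A 'cheat' die roll: skip the physics (mass, friction, bounce) and
    just draw a uniform integer - good enough if only the result matters."""
    return rng.randint(1, 6)  # inclusive on both ends

rolls = [roll_die() for _ in range(10_000)]
# all the inertia, friction, and bounce physics collapsed into one call
```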

Sometimes you really do "true" simulation (whenever you can get away with it, for sure), but you cheat when you have to.

Now, for the case of the universe, you've got a bit of a pickle. If you are able to simulate the operation of the universe in real time... then you're not really simulating the universe. ^_^; You're either cheating, or simply wrong. Why? Because if you really were running a simulation of the universe on a computer within the universe, then the real universe must be running itself... plus the simulation you're running, because it's within the universe. Your simulation, therefore, must be doing the same thing: it's simulating the universe, which has within it a simulated computer running a simulation of the simulated universe. And that computer with the simulated simulated universe has within it a simulated simulated computer, running a simulation of the simulated universe simulation... etc. etc..
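The regress can be caricatured as a recursive function (purely illustrative - the names and the depth cap are invented): an exact simulation of a universe that contains its own simulator must also simulate that simulator, so any run that actually terminates has necessarily cheated by cutting the recursion off somewhere.

```python
def simulate_universe(depth, max_depth):
    """An exact simulation of a universe that contains its own simulator
    must simulate that simulator too - so it recurses without end.
    `max_depth` is the 'cheat' that cuts the regress off."""
    if depth >= max_depth:
        return depth           # the cheat: stop modelling the nested copy
    # ... (simulate this level's physics here) ...
    # the simulated universe contains a computer running this simulation:
    return simulate_universe(depth + 1, max_depth)
```

With a cap, `simulate_universe(0, 10)` returns after 10 levels; without one, it would recurse until the stack blew - the "can't be simulated exactly from inside" argument in miniature.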

What that means is that if the universe is computable, then it must be impossible to accurately simulate it on any computer within the universe. (Infinities muck this logic up somewhat, of course, as infinities are wont to do.)

I'd say it is impossible to simulate the current universe. But if we, say, want to create artificial life, or artificial intelligence - which should be treated as meaning the same thing - I would argue we would have to do a non-cheat simulation of a universe on a smaller scale... I'm not sure how that would work with the universe's expansion... maybe we could cheat there a bit. All we really have to do is simulate an environment, then hope that life would occur - maybe create an encouraging environment.
If we are to believe the simplicity-to-complexity idea, it should be possible. However, I doubt computing as it is today will ever achieve such feats. Bio-computing, now that might be up to the task, but standard silicon-based chips or even quantum computing ( *using term blindly, notice* ) - not a chance. I mean, we are quite an intelligent species, but until we calm our egos, we won't achieve such scientific feats. I'm saying such things because I have a vivid memory of debating with my Computer Science lecturer whether intelligence is a pattern - he claimed it was a pattern, I said it was more an ability to spot patterns, an evolving living thing... but of course I was a 1st-year junior, so I knew I wasn't going to win any such debates. I'm trying to say that we sometimes have a rather questionable confidence in our knowledge/power/ability. But of course, what is a living thing but a mechanism that arose out of interactions? And if interactions are based on simple rules, then life is a what? I prefer not to answer that question, because the answer just won't be satisfactory even if I could force one out. As I said - it's a messy topic!

Indi wrote:

The-Nisk wrote:

Could I hear more about Darwin and his ideas, and why they are obsolete? I admit to never actually reading any physical material about it - all I know about him and evolution was constructed out of some very vague word-of-mouth & my own ideas/understanding of how it must all work. Also, if there are more quantum-mechanics primers - I welcome thee! (I think I just caught myself trying to steer the conversation away from biology - my weakest field; to counter that - anyone wish to explain some biology/genetics?)

Darwin was actually a lousy biologist. ^_^; In fact, if i recall, he dropped out of medical school to study religion. What he was really gifted at was classification of species, not biology per se.

The reason Darwin is obsolete is that his work is so archaic. ^_^; He published in the mid 1800s... that's a century-and-a-half ago now, and in scientific time, eons. Darwin never knew about DNA (although his work predicted something like it), and Hubble hadn't even been born - so the idea that there may have been a "beginning" to life on Earth was just a religious/philosophical idea that carried over, and estimates of the Earth's age were very, very low (not quite the 6,000 years of Young Earth Creationists - i think the estimates were in the millions of years range - but still far too low).

Darwin didn't believe that natural selection was fast enough to have produced the diversity of life on Earth that was plainly visible, so he included "Lamarckian" evolution in his theory. Lamarckian evolution is the idea that if you (for example) exercise your arms enough, your children will be born with strong arms (or, for a more natural example, the giraffe has a long neck because previous giraffes stretched so hard to get at high leaves). In other words, changes that occur to you during your life - not changes to your genetic structure! (which Darwin didn't know about) - can be passed on.

Darwin also had no concept of mutation; another reason why evolution by natural selection alone seemed so slow to him. All changes in Darwinian theory are based on normal statistical variation in the species.

There are other differences, too. For example, Darwin believed that when two animals mated, their traits would "blend" together. So if you had a really tall mother and a really short father, you would be of medium height. If your father had blue eyes and your mother had green eyes, you would have blue-green eyes. If your father had dark brown hair and your mother had blond hair, you would have light brown hair. Nowadays we know that's not how it works: some genes are recessive, some are dominant, and you don't get a "blend" of your parents' genes - you get one copy of each gene from your father and one from your mother, and dominance decides which trait actually shows.

Long after Darwin died, genetics became a field, DNA was discovered, the mechanism of mutations hypothesized, Lamarckism discredited, and the age of the Earth found to be billions of years... and all of it put together in what is called the modern synthesis. That is what modern evolutionary theory looks like. Darwin is obsolete, although many of his basic ideas are still relevant (in fact, some of his basic ideas are taken more seriously now than he took them - like the relevance of natural selection to the process).

I see, I had no knowledge of what the word "evolution" meant - only a notion of what it implied - so thanks for that piece of info. I'm surprised his quitting medical school didn't reflect on the "scientific community's" opinion of his theory of evolution ( I tend to use quotation marks around questionable areas =] ).

Indi wrote:

The-Nisk wrote:

And since I mentioned arrogance where coding is concerned, I might as well mention this - sloppy coding should be punished! I mean, it's okay when you're learning to have extra variables flooding your scripts and could-be-better functions/methods, but once you get to grips with whatever language you're using, it should be punished! There should be a global points system for all programmers, and people like that should have their points reduced!

If there were a demerits system for bad coding, i'd probably have been put up against a wall and shot by now. ^_^; One of the notes on my whiteboard is from Anna: "calc_theta1_1(), calc_theta1_2(), calc_theta1_3(), jesus christ indi, is it that hard to come up with function names?"

Haha! As long as your functions have a real task they're good at performing, it's okay. Encapsulation might be questionable, but that depends on who you code for - just yourself, or in conjunction with others? In the case of the former, if you remember what each function does, you'd be all right, I'd say. In the case of the latter - spam comments that'd allow a 5-year-old to use the code =]