
Who said that all progress comes from the crazy ones (or something vaguely like that)? So maybe they're right (and I'm hoping for it). But (unlike him, lacking a legacy) I wouldn't bet my retirement fund on it.

A nut? That's hilarious; Wolfram is probably the smartest man alive. Like Da Vinci and others, his legacy won't be truly appreciated for centuries.
Read NKS, and if that doesn't give you some concept of his abilities, well, I don't think anything will convince you.

The monstrous, raging egomania is absolutely no exaggeration. That aside, I think "NKS" is not insane, merely an insane amount of effort to demonstrate (note, not *prove*) something that is ultimately trivial/tangential. It most certainly is not a "New Kind of Science"; it's not science by any conventional definition.

Mathematica [wikipedia.org], used by grad students everywhere. The impact of this software is huge. Grad students everywhere rely on it to visualize equations they otherwise wouldn't understand. It has been a tremendous boon to computer scientists, astronomers, chemists, physicists, etc.

He also wrote some papers on particle physics. And then there's Wolfram-Alpha, which I use at least once a week.

Mathematica is not an example of his scientific abilities, though. Integration, differentiation, graphing and the other such features of computer algebra systems were done years earlier by both Macsyma and Maple. Mathematica is just an example of someone who knows how to make and market software. That's what Wolfram's good at: promotion. Mostly self-promotion, but also promotion of his software.

Remember that he wrote A New Kind of Science at night, while he continued to run a successful multi-million-dollar software enterprise during the day. The peer-review jury is still partly out on ANKOS, but his highly original ideas continue to thrive and spur further research. His deep insight that true chaos devolves from ordered deterministic processes (e.g. cellular automata) across all of nature is nothing short of astounding. Focused he is. Obsessive and a bit eccentric, certainly. But a nut? Not by

His deep insight that true chaos devolves from ordered deterministic processes (e.g. cellular automata) across all of nature is nothing short of astounding

This is pretty much what everybody already knew since the 80's, and the investigation of chaos theory and iterative algorithms. It's important to know, but by now I'd look askance at any scientist who didn't accept this decades ago.

His deep insight that true chaos devolves from ordered deterministic processes (e.g. cellular automata) across all of nature is nothing short of astounding.

I am not sure why you say that; it's hardly a new idea, and he offers no particularly new insights. He (or rather his hordes of uncredited minions) has brute-force raised the bar on the degree of proof. And then he mentioned himself in the same breath as Einstein and Newton.

NKS does not make a good case for Wolfram's genius, but rather for his arrogance and ignorance. It's good work from the point of view of being correct and being about a significant subject. It's not so good from the point of view of originality. Nor is it a superior treatment of a known subject. It's not even a novel approach.

Wolfram really went gaga over cellular automata, and they've been well known ever since Conway's Game of Life popularized them in 1970, and studied well before that. He talks as if the subject had languished and his research singlehandedly revived interest in it. Perhaps so, among physicists. He also excuses his failure to understand its significance as the consequence of it being presented as just a game. Obviously, he didn't talk with any computer scientists before writing that book. He merely rediscovered what computer scientists have known for decades. Worse, he's not even the first physicist to have rediscovered computer science! That man and his arrogant physicist buddies need to get out of their bubble more often.

I've seen this kind of thing before, where the people at the top of a particular discipline start acting as if all other science is secondary, only an aspect of their chosen discipline. I saw that attitude towards Computer Science in professors and students of Electrical Engineering. They didn't get that algorithms were more than simple, trivial little lists of instructions for hooking up logic gates. Mathematicians also have this tendency to view CS as just a branch of math, and algorithms as something that can be expressed as "just" a series of formulas. It's like the view that a person is only a bag of water with a few other chemicals mixed in, or the "Big Iron" implication that a computer is only a lump of metals. Both go over the top in overlooking the organization.

Wolfram's work illustrates that Computer Science should be a discipline of its own, on the same level as Math. The concept of the computer algorithm ranks with the mathematical formula in importance. You can't do any serious physics without advanced math. These days, you also need advanced computer science to do physics. His much vaunted NKS is in fact Computer Science.

It took genius to invent the wheel. In that sense, Wolfram is a genius. What does it take to avoid reinventing the wheel? Wisdom.

Wolfram's argument for exploring the space of discrete computations as a source of models richer and cheaper than continuum math needs wider endorsement. Much of the criticism is the inverse of a long recognised problem: shooting the message when you really want to shoot the messenger (and that only because you know the reputation rather than the person).

And your critique of totalising narratives has long been well understood in the postmodernist framework, but pomo too has been so badly misrepresented as t

Who said that all progress comes from the crazy ones (or something vaguely like that)? So maybe they're right (and I'm hoping for it). But (unlike him, lacking a legacy) I wouldn't bet my retirement fund on it.

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man." - George Bernard Shaw [wikiquote.org]

Why is it nutty? Risk=Damage*Likelihood. An existential risk has, for us at least, infinite damage; therefore even if the likelihood is very small the risk is still infinite.
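The parent's Risk = Damage * Likelihood arithmetic is easy to sketch (illustrative numbers only; `expected_risk` is just a name for this comment's formula, not anything standard):

```python
def expected_risk(damage, likelihood):
    """Expected loss: damage weighted by its probability."""
    return damage * likelihood

# An ordinary risk: large but finite damage, small probability.
print(expected_risk(damage=1e9, likelihood=1e-4))  # 100000.0

# An existential risk: treat the damage as unbounded. Any nonzero
# likelihood then yields infinite expected risk.
print(expected_risk(damage=float("inf"), likelihood=1e-9))  # inf
```

The catch, of course, is whether "infinite damage" is a legitimate input to an expected-value calculation at all.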

"The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man." - George Bernard Shaw

Another good one from Shaw: (George Bernard Shaw telegrammed Winston Churchill just p

Is he saying that the universe can be likened to a computer program or that a computer program can be written which can simulate the universe? Or is he exploring metaphysics and stating that the universe *is* a computer program?

Wolfram pushes his principle of computational equivalence which says that anything you can find in one discrete system you can find in any other (which can be shown to emulate a universal Turing machine). His preference for 1D and Conway's, my and others' preference for 2D cellular automata for exploring some of that space is much more a statement about human visual perception. He actually suggests that a simple graph (formal math term for network of nodes and links) is a more likely candidate, but they are
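For anyone who hasn't played with the 1D automata Wolfram favours, here's a minimal sketch (Rule 30 chosen purely as a well-known example; the rule-number encoding is Wolfram's standard scheme):

```python
def step(cells, rule=30):
    """One synchronous update of a 1D binary CA under a Wolfram rule number."""
    n = len(cells)
    out = []
    for i in range(n):
        # Read the (left, self, right) neighbourhood, wrapping at the edges,
        # and use it as a 3-bit index into the rule number's bits.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# Start from a single live cell and watch the famous chaotic triangle grow.
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

A few lines of code, and the output is already hard to predict without running it, which is exactly the point being argued about.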

Is he saying that the universe can be likened to a computer program or that a computer program can be written which can simulate the universe? Or is he exploring metaphysics and stating that the universe *is* a computer program?

I am not a physicist, but I would probably try to explain it this way: information isn't free. We know that. It "costs" something. We can call its most basic unit a "bit", but I'm not aware of any really solid equivalences between bits and energy. But if you knew this relationship, you could rewrite a lot of physics with the "bit" as one of the fundamental units and get rid of -- say -- energy. You would represent energy as some complicated set of inequalities or equivalences written only in terms of bits.

Now let's jump WAY ahead. To the really far out there part. If (and I believe that's a BIG if) you can then express these as Turing machines and you have a complete set of rules to compute with, you're getting closer to building a very accurate (if not perfect) simulator. Gravity, relativity, everything gets bundled up into one neat little Turing Machine that quite simply predicts the future. Perhaps you could simulate atomic movement in vacuums at a fraction of the cost of our current simulator -- and superior (the hope is perfect) accuracy! The final dream, of course, is to simulate the universe perfectly from the Big Bang onward and merely predict the future. It's not hard to see the problems with all of this, however. A simple exercise is to imagine I built this machine yesterday and as the machine begins to compute yesterday and today's events, it's computing itself computing itself computing itself computing itself... now you can parade in the sci-fi authors. Oh, and Raymond Kurzweil.

Thinking about a universe simulator that predicts the future is fun, but it should be impossible by the laws of information entropy. The absolute smallest space you could use to record the position, rotation and composition of an atom would be at least the size of an atom. Even if your machine runs at the quark scale, you need to record information for every quark in the universe. Your machine could never achieve a greater bit "resolution" than the universe it occupies, so you could only ever simulate a portion of the known universe. To simulate the entire universe, you would need a computer at least the size of the universe (if not much larger), i.e. the universe itself. You cannot fit a perfect copy of the universe inside the universe. So short of somehow creating additional dimensions, you're SOL. That is, if the universe is indeed digital (as particles would suggest). If instead everything is continuous with infinite resolution... there's a whole lot of questions to be answered.

Ignore the compression argument. If you can simulate the universe in a machine smaller than itself, the machine simulates itself, so it will have inside it a simulation of the simulation, which contains a simulation of the simulation, etc., all in the same state. So something smaller than a particle would be able to contain the state of the entire universe. Now there's a claim...

You could do that, although you still have to work with the entire uncompressed data set to calculate the next state change, so even if you somehow worked out how to do it piece by piece (questions about locality, etc.) if you're doing that, compressing it and then working with pieces of it at a time, then you're going to have a slower performance time. That means your simulation will be perfect, but slower than the real world, so ultimately useless! xD I can simulate what will happen tomorrow... in a month

Now let's jump WAY ahead. To the really far out there part. If (and I believe that's a BIG if) you can then express these as Turing machines and you have a complete set of rules to compute with, you're getting closer to building a very accurate (if not perfect) simulator. Gravity, relativity, everything gets bundled up into one neat little Turing Machine that quite simply predicts the future. Perhaps you could simulate atomic movement in vacuums at a fraction of the cost of our current simulator -- and superior (the hope is perfect) accuracy! The final dream, of course, is to simulate the universe perfectly from the Big Bang onward and merely predict the future. It's not hard to see the problems with all of this, however. A simple exercise is to imagine I built this machine yesterday and as the machine begins to compute yesterday and today's events, it's computing itself computing itself computing itself computing itself... now you can parade in the sci-fi authors. Oh, and Raymond Kurzweil.

There are an enormous number of problems with trying to simulate the entire universe. It invariably results in an infinite recursive loop. Basically, in order to simulate the universe you would have to do so from outside the universe, and it would require an entire universe to do so - in order to get a perfectly accurate simulation, there isn't any information you can discard - every subatomic particle and force directly or indirectly affects every other. It is a pipe dream. No matter how fast and complex o

I am not a physicist, but I would probably try to explain it this way: information isn't free. We know that. It "costs" something. We can call its most basic unit a "bit", but I'm not aware of any really solid equivalences between bits and energy. But if you knew this relationship, you could rewrite a lot of physics with the "bit" as one of the fundamental units and get rid of -- say -- energy.

Not exactly what you were looking for, but a "bit" is equal to some number [blogspot.com] of Joules per Kelvin (the SI units for entropy). Both of them are measures of the degrees of freedom of a system. (Specifically, its logarithm.)

If your hard drive has a capacity of n bits, then it has n "binary degrees of freedom" (permitting it to be in one of 2^n possible distinct states).

Relatedly, temperature is a measure of energy per effective degree of freedom. So, crank out the units in the measure "Joules per Kelvin": it'
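One concrete bit-to-energy link along these lines is Landauer's principle: erasing one bit dissipates at least kT ln 2 of energy. A quick back-of-the-envelope check (the function name is just for illustration):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact in the 2019 SI)

def landauer_limit(temp_kelvin):
    """Minimum energy in joules needed to erase one bit at a given temperature."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (~300 K), erasing one bit costs at least ~2.87e-21 J.
print(landauer_limit(300.0))
```

The ln 2 is the same logarithm the grandparent mentions: one bit is log 2 of the two states it distinguishes.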

He says that computer programs provide a *type* of understanding that was not previously possible because it would take an entire man's life to do the calculations.

A key idea in A New Kind of Science is (paraphrased) "computational complexity": for special initial settings in the environment, if you keep iterating results *on top of each other* you get patterns of complexity far beyond the initial starting conditions. For the most obvious example, a fairly small genome produces billions of unique people because

His deep insight is simply that true chaos devolves from ordered deterministic processes (e.g. cellular automata) across all of nature. He demonstrates this systematically in his book, A New Kind of Science. The book elucidates the results of hundreds of computer experiments that use cellular automata to echo various aspects of the natural world, from physics to biology, often in clearly visible ways with wonderful fractal graphics. IMHO he shows incontrovertibly that natural chaos is sometimes the output

If the Universe IS doing calculations, then it is as accurate as possible. You can't possibly get closer to calculating what the laws of physics say should happen than by the calculation actually being what actually does happen. But that means the universe is either infinite, to hold infinitely long registers, or the real laws of physics don't include any infinite precision expressions. A finite universe can't, for a simple example, be multiplying some

Pi and such aren't needed for the calculations of the universe, nor do you need infinite registers. Pi and other numbers that describe ratios and the like are products of the universe's calculations. We use those numbers because they are in fact useful to us and our understanding but they are merely part of the outcome and don't actually need to be in the method/function that's running. As for infinite registers, the universe itself is the computer: both

Look what happens when you pile too much mass someplace! Eventually the entire region of space just segfaults! And time doesn't even flow at the same rate for even short distances! Looking at the universe, I'd guess it was some rushed freshman-year science project.

Wolfram brings computational science to the table and has posited that the earth and universe can be understood as a computer program that can be significantly altered as we continue to advance in technology.

Wolfram is a genius, I'm just not clear what "advancements" he's brought to computational science or bit-string physics. I mean, that "universe as a computer" stuff is all still theory right now, right?

Call me cynical but I fear that this will result in more Futurism with people crossing into other fields of expertise, reading papers and then holding them up as the holy grail in undoing aging and death. Sure, it's amusing but I think at best this is going to be a lot of smart people pounding square pegs into round holes all day long. At worst it's just going to sidetrack people from doing work and daydreaming about interdisciplinary possibilities (like some of the Macy Conferences did for Cybernetics).

Welp, better settle in and prepare for the crazy Kurzweil stories to fire back up!

This happens all the time: smart people looking at areas outside their expertise and being wrong, but people holding up their nonsense because the person was "smart". And by smart they mean "it's what I want to be true, therefore it's smart."

Joking aside: this is pure bollocks. Classical physics has unsolvable problems (three bodies is a no-no), and quantum mechanics cannot be simulated at a low level. So how is computation going to help us understand the universe? Run a few simulations with a huge pile of assumptions? I put my money on Bruce Willis and his team.

I suppose that depends on the definition of "solve". Computationally, it's a numerical approximation. In most cases it can be approximated to any required accuracy for any practical purpose. But that's not what solved means in the context it was used.

It's really sad that, if nothing is done about it, the unsustainable economic system that we have right now will lead to a collapse of our technological society long before any asteroid might hit us. The minds behind this project might better be used to solve that conundrum instead...

The way I see it, the problems that stem from our system of economics are incidental; the real problem is democracy*. People will always vote for the guy who says something along the lines of "Free stuff for all!" rather than the one that says "Sorry countrymen, but we can't afford it and this is why... actually, while I'm here, the state is already spending more than it earns and we need to cut a few things."

*Caveat: I'm pro-democracy and I'll remain so until a truly benevolent and intelligent dictator com

What? No, no I don't think the majority of the voting masses would choose some random shmuck that calls himself intelligent and benevolent. I think they're going to vote for the guy who appears to be the most intelligent and benevolent (and follows their ideological stances, and is vouched for by their party). And that's largely a matter of marketing. Which boils down to how much campaign funds you throw at it. Which is largely determined by the rich and/or influential supporters that run the non-government

I'm not really impressed by either. Wolfram made some very good software but then wrote that wretched book, which was primarily a mix of wrong ideas and unoriginal ones. There was a striking failure to credit the work others had done with cellular automata. I couldn't tell if that was due to his ignorance or his general self-promotional tendencies.

As to the Lifeboat Foundation, I lost what minimal trust I had in them after they got in bed with Pam Geller http://lifeboat.com/ex/boards [lifeboat.com] (yes, that's Pamela "Obama is a Muslim with a Fake Birth Certificate" Geller http://en.wikipedia.org/wiki/Pamela_Geller#Birther_views [wikipedia.org]). If that weren't enough, they've been involved in fear-mongering about the LHC http://lifeboat.com/ex/particle.accelerator.shield [lifeboat.com]. There are, however, other groups dealing with existential risks in a serious and useful fashion. The Future of Humanity Institute http://www.fhi.ox.ac.uk/ [ox.ac.uk], which is affiliated with the University of Oxford and headed by the very bright Nick Bostrom, thinks about existential risk issues in general. Meanwhile, there are organizations focusing on specific concerns. For example, the B612 Foundation http://www.b612foundation.org/b612/ [b612foundation.org] is focused on detecting and dealing with large asteroids. They have the advantage of also having a very clever name. Internet cookie to anyone who can figure out why they are called that without searching.

A good point of caution, but it doesn't prove anything in and of itself. When you discover the atom, everything looks like it's made of atoms.

Oh wait, most things actually are! Sometimes that happens.
Cellular automata would indeed be able to model *everything* and give us new insights into *everything*, if the universe is indeed digital (as opposed to continuous, i.e. analog).

"things" certainly are. There may be a lot of dark energy out there, but when I am referring to "what things are made of" you can't say dark energy. Dark Matter I'll buy, as a "thing" that isn't made of atoms, but there's very, very little dark matter in our world, and I wouldn't compare matter vs dark matter as "virtually nothing", although yes there is more dark matter.

Also it has nothing to do whatsoever with the point I was making, so awesome, thanks for contributing.

Yes, except we KNOW that the human brain works nothing like a binary computer. We know that almost nothing in nature that we have even begun to understand works like a binary computer. So why would anyone in their right mind assume that the UNIVERSE does? The binary computer is just a practical tool we invented in the 20th century, taking advantage of the on/off switching tech available at the time. It's not a model for the freaking universe.

Yes, except we KNOW that the human brain works nothing like a binary computer.

[citation required] ?

If there is a smallest fundamental particle in the universe, it is binary. That's why we would think it. Things at our scale do not seem to work "like a binary computer" because the scale is so vastly large that you get unpredictable behavior. We're some 18 orders of magnitude above the binary level. It's HUGE.

If you have a monitor with only a few discrete pixels, and each of those pixels can only have one color, then that is a limited resolution. Any image would appear pixelated and blocky an

If there is a smallest fundamental particle in the universe, it is binary.

I agree with the rest of what you said, but this is not really true. As far as we know, the "logic" of the universe is not classical: there are some fundamental properties of particles that can't be reduced to a single bit. See, for instance, the spin of an electron.

Despite that, as far as we know it's still possible, in principle, to simulate anything in the universe (including quantum mechanics, which includes electron spin) in a classical computer (to any precision you'd like). So your main point still stands.

The binary computer model is in theory perfectly capable of simulating a human brain. The main problems we have are that: 1) we are not completely sure how the brain is wired together, so we don't know what to simulate in the first place, and 2) our machines are mostly sequential, while the brain is highly parallel, so what the brain does in one step, a sequential computer can only do in a number of steps proportional to the network's size. This is obviously impractical, but it is no fault of the model.
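Point 2 can be illustrated with a toy network: the units all conceptually fire "at once", but a sequential machine has to visit every connection, so one parallel step costs O(n^2) serial work here (the threshold model is a deliberately crude stand-in for real neurons):

```python
def parallel_step(state, weights, threshold=0.0):
    """One synchronous update of a toy binary threshold network.

    Conceptually every unit fires at once, but this sequential loop must
    visit all n*n connections, so one 'parallel step' costs O(n^2) work.
    """
    n = len(state)
    new_state = []
    for i in range(n):
        total = sum(weights[i][j] * state[j] for j in range(n))
        new_state.append(1 if total > threshold else 0)
    return new_state

# Two mutually excitatory units keep each other firing.
print(parallel_step([1, 1], [[0, 1], [1, 0]]))  # [1, 1]
```

Scale n up to the brain's tens of billions of units and the per-step cost, not the model itself, is what hurts.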

and there is no indication that anything at all in the universe is not computable.

So, you're saying, that given enough input data, a Turing machine or equivalent computer can predict, with perfect accuracy, specific instances of radioactive decay, since specific instances of radioactive decay are "in the universe", and everything in the universe is, per your description, computable?

Considering that the many-worlds interpretation of quantum mechanics is equivalent to the Copenhagen interpretation and certainly consistent with what we observe, the input data in question would basically be the complex amplitude of every single possible universe. This would allow for the deterministic computation of the amplitudes of every single possible universe at the next time step. So yes, you would determine, with perfect accuracy, that at each time step the probability of universes where the decay

Well, the bunker seems useful. But why didn't they come up with something like a new energy source with a pleasantly high energy return on investment? That is probably too hard. I wonder what they will power their lifeboat with, though; probably oil, gas or nukes for the next 100,000 years.

Look, I know it's a bit far out, but haven't we pretty much concluded that even if the Big Rip [wikipedia.org] doesn't happen and protons don't decay [wikipedia.org], entropy will eventually cause the heat death [wikipedia.org] of the universe? I mean, I realize that it's around 10^14 years out and won't really be a concern if we can't escape the earth in the next 1.4 billion years [wikipedia.org] or so. Don't get me wrong, I think humanity is perfectly capable of saving itself from asteroid bombardments and the death of stars. But my (admittedly limited) understanding of what's going to happen to the universe keeps me from really getting excited about projects like this.

On the other hand, the goal here is to make sure we live long enough to face these problems. And that's pretty important.

Something like living in a virtual reality hosted on a reversible computer [wikipedia.org] might allow us to live for significantly longer than the Big Rip would suggest, if not outright forever. Might be somewhat of a pipe dream, but it's fun to think about.

I mean really, we have a human race which is eating itself out of house and home, destroying the environment and every other species and the entire biosphere at a rate never before encountered in the history of life on Earth, AND rapidly acquiring ever greater capabilities to destroy itself on a daily basis while retaining the basic ethical outlook of fire-wielding cavemen. Meanwhile these people are wasting their time wool-gathering about infinitely more remote possibilities like asteroid impacts and total

Doc, don't you realize there are 50,000 nuclear weapons on hair-trigger alert pointed at us every day, and that a reasonable systems analysis of US and Russian nuclear 'defense' systems indicates there's roughly a 50/50 chance we will set them off within the next 30 years? Seriously?

People have been saying this for better than 60 years now.

And while it should be pointed out that through much of those 60 years we did have nuclear weapons on "hair-trigger alert", we don't anymore.

I suggest you read the system reliability analysis. Your assumptions are quaint but wrong. Yes, you can argue that our avoidance of a terrible accident for the last 60 years or so is some kind of proof, but your result is like 0.5 sigma.

Beyond all of that there are so MANY issues. Here we are, the great man-ape twisting ALL the dials on EVERY natural geochemical cycle, carbon, nitrogen, phosphorus, mercury, etc. I could name 20 other critical issues without even needing to go look them up. Many will be t

- get their power from solar cells and geothermal
- have automated greenhouses (scaled up Aerogardens like the Aero Grow folks make) which provide much of their food needs (anyone run the numbers on how much seaweed one could grow in a tank the size of a typical house window?)
- make tchochkes (and small useful objects as well) using a makerbot or reprap or diylilcnc
- capture rainwater and filter / purify it, use grey

The idea that the universe can be understood as a computer program is essentially unfalsifiable. Given that at any moment the set of all observations we have at our disposal is finite, it is trivial to build a Turing machine that produces that exact set, regardless of the actual underlying mechanics. Even if, say, the universe contained some magic oracle that solved the halting problem for Turing Machines, we could never actually verify that it does. It could just be some machine that runs the input TM for

Simply look at the fact that quantum mechanics cannot be simulated accurately and efficiently on a classical computer. Yet QM itself is falsifiable and has proven correct so far, outside of some boundary conditions. So the whole world cannot be simulated on a classical computer, no matter how big.

Another example: take a random vector V of 300 values and consider the subset-sum problem: does there exist a partition of V into two subsets A and B such that sum

Another example: take a random vector V of 300 values and consider the subset-sum problem: does there exist a partition of V into two subsets A and B such that sum(A) = sum(B)? This is a known NP-hard problem. Solving this problem only once for a given vector V would require far more energy than exists in the entire visible universe, for any physical computer. Do the math yourself as an exercise...

I don't see the point of this example. Where in the universe do you see hard instances of the subset-sum problem (or any other NP-hard problem) of that size being solved?

You are confused. "Computable" doesn't mean what you think it means. "Computable" does not mean "efficient", nor does it mean "tractable". "Computable" means "there exists a Turing machine that solves the problem in a finite time for any finite input". P is computable and tractable for small enough exponents and hidden constants. NP is computable and thought to be intractable. EXPSPACE, which is probably the worst complexity class the Turing machine simulating the universe would fall into, is computable and
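To make the subset-sum discussion above concrete, here's a naive brute-force partition check. It is exact but enumerates up to 2^n subsets, which is fine at n = 6 and hopeless at n = 300 (a sketch, not a serious solver; dynamic programming handles small-value instances far better):

```python
from itertools import combinations

def has_equal_partition(values):
    """Can 'values' be split into two subsets with equal sums? Brute force."""
    total = sum(values)
    if total % 2:           # an odd total can never split evenly
        return False
    target = total // 2
    n = len(values)
    # Enumerate every subset: up to 2^n candidates in the worst case.
    for r in range(n + 1):
        for combo in combinations(range(n), r):
            if sum(values[i] for i in combo) == target:
                return True
    return False

print(has_equal_partition([3, 1, 1, 2, 2, 1]))  # True: e.g. {3, 2} vs {1, 1, 2, 1}
# At n = 300 this would mean ~2^300 (about 10^90) subsets -- far more
# elementary operations than any physical computer could ever perform.
```

Note that the problem is still computable in the Turing-machine sense; it is only the brute-force tractability that dies at that size.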

The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it’s easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:

Everyone always takes the standpoint that humanity must go on forever. Even if it lasts another 50K years, a human from 50K years in the future would be so unrecognizable to us that it may as well be an alien species, much as we would be to our ancestors 50K years ago. Our dying out or being replaced by the machines or other species we create is a much more likely outcome. Which isn't to say that we shouldn't try. May as well at least die trying, right?

They're going to save humanity. Why? If there's no one else out there, then we're going to go on, living our grumpy little lives. If there's someone else out there advanced enough to talk to, then they'll discover it too.

Sometimes I think we should take all of our great art, pack it up into a ruddy great rocket, and nuke ourselves back to the stone age and try again.

The purpose of the group is to think through scientific solutions to existential problems that might be used to save humanity from such risks as asteroids hitting the earth or some other diabolical disaster.

...or perhaps Global Warming?

And the fact that I wonder whether or not this will be modded as flame bait or troll should be disturbing to all of us.

Global Warming (whether caused by human activity or natural cycles or whatever) is by no means an existential threat to humanity. If worst-case scenarios come true, it will have a massive socio-political impact as large, attractive coastal areas may be threatened and fertile vs. infertile land (deserts etc.) will move around, but that's rather an inconvenience compared to a large meteor impact or some of the other scenarios mentioned in the article.
That's not to say that nobody should be concerned about global warming, but it's not what the Lifeboat Foundation is about.

The problem is how long they can stay in the bunker and how long global warming will last.

You may want to ponder what energy sources they will use to power their bunker, and for how long that is possible. Also notice that mechanical systems decay through friction and other problems, so you need certain resources to maintain the bunker; recycling isn't perfect either, so you may need more energy/resources than you think.

Global Warming (whether caused by human activity or natural cycles or whatever) is by no means an existential threat to humanity.

Depends on how you define humanity. If you mean Homo sapiens continuing to exist in numbers of a few tens of millions or more then, no, global warming won't wipe us out the way a massive asteroid or gamma-ray burst would.

If, on the other hand, you take the Jim Morrison quote "I want to get my kicks in before the whole shithouse goes up in flames," to talk about the end of humanity as the end of being able to live in a shelter without worry for your safety, the ability to easily secure food for the winter... global warming could do that a whole lot easier than the Vietnam war ever could.

Nobody should be concerned about global warming as long as the current data remains manipulated, fabricated and motivated by political agenda.

What a truly idiotic statement. How about "Nobody should be concerned about the financial health of the US / Europe / China / India as long as the current data remains manipulated, fabricated and motivated by political agenda"?

Humans are always manipulating, fabricating and politicizing things. It does make it harder to sort things out, but if it is important that you ignore basic human action and behavior, you might well consider a monastery.

If Goedel were still around, I'm sure he would like to say to Wolfram what he was too polite to say directly to Wittgenstein: that while the formalism project can be a handy tool in isolated circumstances, it must ultimately fail to account for the world we find ourselves in, because there are truths formalism cannot reach before they emerge unexpectedly from expanding chaos. He might even add that you could see all that in cellular automata if you looked with better tools in more likely places. So any li