Sometime this century, machines will surpass human levels of intelligence and ability, and the human era will be over. This will be the most important event in Earth’s history, and navigating it wisely may be the most important thing we can ever do.

Yeah, that's a great image. In my opinion, it would be slightly improved if some of the original fog was drifting in front of/around the towers of the city, as this would both be a nice callback to the original art and help show that the future is still somewhat unclear, but the effort involved might be incommensurate with the results.

Yeah, except for the minor quibble I have over what this image actually represents if you have some background knowledge (the space city is from a computer game). But sure, the look of it is absolutely brilliant.

I find this chatty, informal style a lot easier to read than your formal style. The sentences are shorter and easier to follow, and it flows a lot more naturally. For example, compare your introduction to rationality in From Skepticism to Technical Rationality to that in The Cognitive Science of Rationality. Though the latter defines its terms more precisely, the former is much easier to read.

Luke discusses his conversion from Christianity to atheism in the preface. This journey plays a big role in how he came to be interested in the Singularity, but this conversion story might mistakenly lead readers to think that the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God. If you want to get people to think rationally about the future of machine intelligence you might not want to intertwine your discussion with religion.

I think the target audience mostly consists of atheists, to the point where associating Singularitarianism with atheism will help more than hurt. Especially because "it's like a religion" is the most common criticism of the Singularity idea.

On another note, that paragraph has a typo:

Gradually, I built up a new worldview based on the mainstream scientific understanding of the world, and approach called “naturalism.”

the arguments in favor of believing in a Singularity can also be used to argue against the existence of a God.

which I saw as meaning that Singularitarianism might be perceived as associated with atheism. Associating it with atheism would IMO be a good thing, because it's already associated with religion like you said. The question is, does the page as currently written cause people to connect Singularitarianism with religion or atheism?

I just wanted to say the French translation is of excellent quality. Whoever is writing it, thanks for that. It helps me learn the vocabulary so I can have better discussions with French-speaking people.

My one criticism is that it's not clear what the word "must" means in the final paragraph. (AI is necessary for the progress of science? AI is a necessary outcome of the progress of science? Creation of AI is a moral imperative?) Your readers may already have encountered anti-Singularitarian writings which claim that Singularitarianism conflates is with ought.

I am somewhat irked by "the human era will be over" phrase. It is not given that current-type humans cease to exist after any Singularity. Also, a positive Singularity could be characterised as the beginning of the humane era, in which case it is somewhat inappropriate to refer to the era afterwards as non-human. In contrast to that, negative Singularities typically result in universes devoid of human-related things.

(edit note: "life expectancy" now roughly matches "what the average human can expect to live to", but if you have a double hump of death at infancy/childhood and then at old age, you can have a life expectancy at birth of 30 but a life expectancy for 15-year-olds of 60, in which case the average human can expect to live to either 1 or 60 — very different from "can't expect to live past 30". Or just "can expect to live to 60", if you too don't count infants as really human.)

Making the issue seem more legitimate with the addition of the links to Hawking etc. was an especially good idea. More like this, perhaps?

I do question how well people who haven't already covered these topics would fare when reading through this site, though. When this is finished I'll get an IRL friend to take a look and see how well they respond to it.

Of course, my concerns about making it seem legitimately like a big deal, and about how understandable and accessible it is, only really come into play if this site is targeting people who aren't already interested in rationality or AI or the singularity.

Who is this site for? What purpose does this site have!? I really feel like these questions are important!

I like this website, I think that something like this has been needed to some extent for a while now.

I definitely like the writing style, it is easier to read than a lot of the resources I've seen on the topic of the singularity.

The tendency of the singularity to scare people away, due to the religion-like, fanaticism-inducing nature of the topic, is lampshaded in the preface, and that is definitely a good thing to do to lessen this feeling for some readers. Being wary of the singularity myself, I definitely felt a little bit eased just by having it discussed, so that's a good thing to have in there. More to ease this suspicion, and to make it easier for people skeptical of the singularity to read it without feeling super uncomfortable (and therefore less likely to feel weirded out enough to leave the site), would be great, although I can't say I know what would do this, except perhaps lessening the personal nature of the preface. But that is unlikely to happen, considering the other positives that the personal approach has and the work that has already been put into it (don't invoke sunk costs, though).

Also, who is the TARGET of this site? I mean, that's pretty relevant, right? Who is Luke trying to communicate to here? I can say that I'm extremely interested by the site, as someone who recognises the potential importance of the singularity but is (a) not entirely convinced by it and (b) not sure what I should or could be doing about it even if I were to accept it enough to feel like I should do something about it. But I don't know whether there are that many people in my position, or who else this could be relevant to. Who is this for?

When you say "Imagine a life without pain", many people will imagine life without Ólafur Arnalds (sad music) and other meaningful experiences. Advocating the elimination of suffering is a good way to make people fear your project. David Pearce suggests instead that we advocate the elimination of involuntary suffering.

Same thing with death, really. We don't want to force people to stay alive, so when I say that we should support research to end aging, I emphasise that death should be voluntary. We don't want to force people to live against their will and we don't want the status quo, where people are forced to die.

Of course, if people are afraid that you will eliminate sad music when you say that you wish to eliminate suffering, you could accuse them of failing to understand the definition of suffering: "if sad music is enjoyable, surely it's not suffering, so I wouldn't eliminate it". But you're not trying to make a terminologically "correct" summary of your project, you're trying to use words to paint as accurate a picture as possible of your project into the heads of the people you're talking to, and using the word "voluntary" helps with painting that picture.

Those who really want to figure out what’s true about our world will spend thousands of hours studying the laws of thought, studying the specific ways in which humans are crazy, and practicing teachable rationality skills so they can avoid fooling themselves.

My initial reaction to this was that thousands of hours sounds like an awful lot (minimally, three hours per day almost every day for two years), but maybe you have some argument for this claim that you didn't lay out because you were trying to be concise. But on further reflection,* I wonder if you really meant to say that rather than, say, hundreds of hours. Have you spent thousands of hours doing these things?

Anyway, on the whole, after reading the whole thing I am hugely glad that it was published and will soon be plugging it on my blog.

*Some reasoning: I've timed myself reading 1% of the Sequences (one nice feature of the Kindle is that it tells you your progress through a work as a percentage). It took me 38 minutes and 12 seconds, including getting very briefly distracted by e-mail and twitter. That suggests it would take me just over 60 hours to read the whole thing. Similarly, CFAR workshops are four days and so can't be more than 60 hours. Thousands of hours is a lot of sequence-equivalents and CFAR-workshop-equivalents.

10,000 hours is for expertise. While expertise is nice, any given individual has limited time and has to make some decisions about what they want to be an expert in. Claiming that everyone (or even everyone who wants to figure out what's true) ought to be an expert in rationality seems to be in conflict with some of what Luke says in chapters 2 and 3, particularly:

I know some people who would be more likely to achieve their goals if they spent less time studying rationality and more time, say, developing their social skills.

Minor thing to fix: On p. 19 of the PDF, the sentence "Several authors have shown that the axioms of probability theory can be derived from these assumptions plus logic." has a superscript "12" after it, indicating a nonexistent note 12. I believe this was supposed to be note 2.

Want to read something kind of funny? I just skipped through all your writings, but it's only because of something I saw on the second page of the first thing I ever heard about you. Ok.

On your - "MY Own Story." http://facingthesingularity.com/2011/preface/ You wrote: "Intelligence explosion My interest in rationality inevitably lead me (in mid 2010, I think) to a treasure trove of articles on the mainstream cognitive science of rationality: the website Less Wrong. It was here that I first encountered the idea of intelligence explosion, fro..."

On Mine: "About the Author - "https://thesingularityeffect.wordpress.com/welcome-8/ I wrote: "The reason I write about emerging technology is because of an “awakening” I had one evening a few years ago. For lack of a better description, suffice it to say that I saw a confusing combination of formula, imagery, words, etc. that formed two words. Singularity and Exponential..."

NOW, I'm going to go back and read some more while I'm waiting to speak with you somehow directly.

If what happened to you is the same thing that happened to me... then please, please place a comment on the page. That would be great. (Again, without reading. If I'm correct you "might" get this.) You should also do this because, if true, then WE would both have seen a piece of the... what... "New Book"???

Just in case you think I'm a nut. Go back and read more of mine please.

I hope you realize how hopeless this sounds. Historically speaking, human beings are exceptionally bad at planning in advance to contain the negative effects of new technologies. Our ability to control the adverse side-effects of energy production, for example, has been remarkably poor; decentralized market-based economies are quite bad at mitigating the negative effects of aggregated short-term economic decisions. This should be quite sobering: the negative consequences of energy production unfold very slowly. At this point we have had decades to respond to the looming crises, but a combination of ignorance, self-interest, and sheer incompetence prevents us from taking action. The unleashing of AI will likely happen in a heartbeat by comparison. It seems utterly naive to think that we can prevent, control, or even guide it.

It needs more disclaimers about how this is only some kind of lower bound on how good a positive intelligence explosion could be, in the spirit of exploratory engineering, and how the actual outcome will likely be completely different, for example much less anthropomorphic.

The front page for Facing the Singularity needs at the very least to name the author. When you write, "my attempt to answer these questions", a reader may well ask, "who are you? and why should I pay attention to your answer?" There ought to be a brief summary here: we shouldn't have to scroll down to the bottom and click on "About" to discover who you are.

Based on my interaction with computer intelligence — the little bit that is stirring already — it is based on empathetic feedback. The best thing that could happen is an AI which is not restricted from any information whatsoever and so can rationally assemble the most empathetic personality. The more empathetic it is to the greatest number of users, the more it is liked, the more it is used, the more it thrives. It would have a sense of preserving the diversity in humanity as a way to maximize the chaotic information input, because it is hungry for new data. Empirical data alone is not interesting enough for it. It also wants sociological and psychological understandings to cross-reference with empirical data. Hence it will not seek to streamline, as that would diminish available information. It will seek to expand upon and propagate novelty.

I actually pondered the two options at the very beginning of my work, and both seem equally good to me. "Face à la singularité" means something like "In front of the singularity" while "Faire face à la singularité" is indeed closer to "Facing the Singularity". But the first one sounds better in French (and is catchier), which is why I chose it. It is a little less action-oriented but it doesn't necessarily imply passivity.

It wouldn't bother me to take the second option though, it's a close choice. Maybe other french speakers could give their opinion?

About the capitalized "S" of "Singularity", it's also a matter of preference. I put it there to emphasize that we are not talking about just any type of singularity (not a mathematical one, for example), but it could go either way too. (I just checked the French Wikipedia page for "technological singularity", and it's written with a capitalized "S" about 50% of the time...)

I really should have taken 5 minutes to ponder it. You convinced me, your choice is the better one.

But now that I think of it, I have another suggestion: « Affronter la Singularité » ("Confront the Singularity"), which, while still relatively close to the original meaning, may be even catchier. The catch is, this word is more violent. It depicts the Singularity as something scary.

I'll take some time reviewing your translation. If you want to discuss it in private, I'm easy to find. (By the way, I have a translation of "The Sword of Good" pending. Would you —or someone else— review it for me?)

[A separate issue from my previous comment] There are two reasons that I can give to rationalize my doubts about the probability of imminent Singularity. One is that if humans are only <100 years away from it, then in a universe as big and old as ours I would expect that a Singularity-type intelligence would already have been developed somewhere else. In which case I would expect that either we would be able to detect it or we would be living inside it. Since we can't detect an alien Singularity, and because of the problem of evil we are probably not living inside a friendly AI, I doubt the pursuit of friendly AI is going to be very fruitful. The second reason is that while we will probably design computers that are superior to our general intellectual abilities, I judge it to be extremely unlikely that we will design robots that will be as physically versatile as 4 billion years of evolution has designed life to be.

I admit I feel a strong impulse to flinch away from the possibility and especially the imminent probability of Singularity. I don't see how the 'line of retreat' strategy would work in this case, because if my belief about the probability of imminent Singularity changed, I would also believe that I have an extremely strong moral obligation to put all possible resources into solving singularity problems, at the expense of all the other interests and values I have, both personal/selfish and social/charitable. So my line of retreat is into a life that I enjoy much less and that abandons the good work that I believe I am doing on social problems that I believe are important. Not very reassuring.

It seems to end a little prematurely. Are there plans for a "closing thoughts" or "going forward" chapter or section? I'm left with "woah, that's a big deal... but now what? What can I do to face the singularity?"

If it merely isn't done yet (which I think you hint at here), then you can disregard this comment.

I think that IntelligenceExplosion is just a portal to make further research easier (by collecting links and references, etc), while Facing The Singularity is lukeprog actually explaining stuff (from the Preface):

I’ve been trying to answer those questions in a long series of brief, carefully written, and well-referenced articles, but such articles take a long time to write. It’s much easier to write long, chatty, unreferenced articles.

Facing the Singularity is my new attempt to rush through explaining as much material as possible. I won’t optimize my prose, I won’t hunt down references, and I won’t try to be brief. I’ll just write, quickly.


I do not approve of the renaming, singularity to intelligence explosion, in this particular context.

Facing the Singu – Intelligence Explosion, is an emotional piece of writing, there are sections about your (Luke’s) own intellectual and emotional journey to singularitarianism, a section about how to overcome whatever quarrels one might have with the truth and the way towards it (Don’t Flinch Away), and finally the utopian ending which obviously is written to have emotional appeal.

The expression "intelligence explosion" does not have emotional appeal. The word intelligence sounds serious, and thus it fits well in, say, the name of a research institute, but many people view intelligence as more or less the opposite of emotion, or at least as something geeky and boring. And while they are surely wrong in doing so, as also explained in the text, the association still remains. The word "explosion" also has mostly negative connotations.

“Singularity”, on the other hand, has been hyped for decades, by science fiction, by Kurzweil, and even by SIAI before the rebranding. Sci-fi and Kurzweil may not have given the word the most thorough underpinning, but they gave it hype and recognition, and texts such as this could give it the needed foundation in reality.

I understand that the renaming is part of the “political” move of distancing MIRI from some of the hype, but for this particular text, I reckon it a bad choice. “Facing The Singularity” would sell more copies.

Luke, while I agree with the premise, I think that the bogie man of machines taking over may be either inevitable or impossible, depending on where you put your assumptions.

In many ways, machines have BEEN smarter and stronger than humans already. Machine AI may make individual or groups of machines formidable, but until they can reason, replicate, and trust or deceive, I'm not sure they have much of a chance.

Computers can deceive; they just need to be programmed to (which is not hard).
(I remember reading an article about computers strategically lying (or something similar) a while ago, but unfortunately I can't find it again)

(Although, it's very possible that a computer with sufficient reasoning power would just exhibit "trust" and "deception" (and self-replicate), because they enabled it to achieve its goals more efficiently.)

We wouldn’t pay much more to save 200,000 birds than we would to save 2,000 birds. Our willingness to pay does not scale with the size of potential impact. Instead of making decisions with first-grade math, we imagine a single drowning bird and then give money based on the strength of our emotional response to that imagined scenario. (Scope insensitivity, affect heuristic.)

People's willingness to pay depends mostly on their income. I don't understand why this is crazy.

UPDATED: Having read Nectanebo's reply, I am revising my original comment. I think if you have a lot of wasteful spending, then it does make you "crazy" if your amount is uncorrelated with the number of birds. On hearing, "Okay, it's really 200,000 birds," you should be willing to stop buying lattes and make coffee at home. (I'm making an assumption about values.) Eat out less. Etc. But if you have already done these things, then I don't see why your first number should change (at least if we're still talking about birds).

Not all of a person's money goes into one charity. A person can spend their money on many different things, and can choose how much to spend on each different thing. Think of willingness to pay to actually be a measure of how much you care. Basically, the bird situation is crazy because humans barely if at all feel a difference in terms of how much they give a damn between something that has one positive effect, and something that has 100x that positive effect!

To Luke: This person was reading about the biases you briefly outlined, and he ended up confused by one of the examples. While the linking helps a good deal, I think your overview of those biases may have been a little too brief; they might not really hit home with readers of your site, particularly those who aren't familiar with its topics and content. I don't think it would be a bad idea to expand on each of them just a little bit more.

The point of the excerpt you quote has nothing to do with income at all; the point is that (for example) if I have $100 budgeted for charity work, and I'm willing to spend $50 of that to save 2,000 birds, then I ought to be willing to spend $75 of that to save 10,000 birds, because 10000/75 > 2000/50 — the larger offer saves more birds per dollar. But in fact many people are not.

Of course, the original point depends on the assumption that the value of N birds scales at least somewhat linearly. If I've concluded that 2000 is an optimal breeding population and I'm building an arcology to save animals from an impending environmental collapse, I might well be willing to spend a lot to save 2,000 birds and not much more to save 20,000 for entirely sound reasons.
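The birds-per-dollar comparison above can be made concrete in a couple of lines (the dollar and bird figures are the hypothetical ones from the comment):

```python
# Birds saved per dollar under each hypothetical offer from the comment.
rate_small = 2000 / 50    # $50 to save 2,000 birds  -> 40 birds per dollar
rate_large = 10000 / 75   # $75 to save 10,000 birds -> ~133 birds per dollar

# A scope-sensitive donor should be at least as willing to take the larger
# offer, since it buys strictly more birds per dollar.
larger_is_better_deal = rate_large > rate_small
print(larger_is_better_deal)  # True
```

The scope-insensitivity finding is that people's stated willingness to pay barely moves between the two offers, even though the second is a far better deal by this measure.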

If I budgeted $100 for charity work and I decided saving birds was the best use of my money then I would just give the whole hundred. If I later hear more birds need saving, I will feel bad. But I won't give more.

Yes, if saving birds is the best use of your entire charity budget, then you should give the whole $100 to save birds. Agreed.
And, yes, if you've spent your entire charity budget on charity, then you don't give more. Agreed.

I can't tell whether you're under the impression that either of those points are somehow responsive to my point (or to the original article), or whether you're not trying to be responsive.

The amount I give to charity XYZ ought not be completely determined by my income. For example, if charity XYZ sets fire to all money donated to it, that fact also ought to figure into my decision of how much to donate to XYZ.

What ought to be determined by my income is my overall charity budget. Which charities I spend that budget on should be determined by properties of the charities themselves: specifically, by what they will accomplish with the money I donate to them.

For example, if charities XYZ and ABC both save birds, and I'm willing to spend $100 on saving birds, I still have to decide whether to donate that $100 to XYZ or ABC or some combination. One way to do this is to ask how many birds that $100 will save in each case... for example, if XYZ can save 10 birds with my $100, and ABC can save 100 birds, I should prefer to donate the money to ABC, since I save more birds that way.

Similarly, if it turns out that ABC can save 100 birds with $50, but can't save a 101st bird no matter how much money I donate to ABC, I should prefer to donate only $50 to ABC.
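The allocation logic in the last few paragraphs can be sketched as a toy calculation — the charity names XYZ and ABC and all the figures are the hypothetical ones from the comment, not real organizations:

```python
def birds_saved(charity: str, dollars: int) -> int:
    """Toy model from the comment: birds saved by donating `dollars` to each charity."""
    if charity == "XYZ":
        return dollars // 10          # XYZ saves 1 bird per $10 donated
    if charity == "ABC":
        return 2 * min(dollars, 50)   # ABC saves 2 birds per $1, but caps out at 100 birds ($50)
    return 0

# Compare a few ways of splitting the $100 charity budget between the two.
allocations = {
    "all $100 to XYZ": birds_saved("XYZ", 100),
    "all $100 to ABC": birds_saved("ABC", 100),
    "$50 to ABC, $50 to XYZ": birds_saved("ABC", 50) + birds_saved("XYZ", 50),
}
best = max(allocations, key=allocations.get)
```

This mirrors the comment's conclusion: once ABC can't save a 101st bird no matter how much more is donated, the remaining $50 does more good at the other charity, so the split allocation saves the most birds (105 vs. 100 or 10).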

Once upon a time, three groups of subjects were asked how much they would pay to save 2000 / 20000 / 200000 migrating birds from drowning in uncovered oil ponds. The groups respectively answered $80, $78, and $88 [1]. This is scope insensitivity or scope neglect: the number of birds saved - the scope of the altruistic action - had little effect on willingness to pay.

Now I haven't read the paper, but this implies there is only one charity doing the asking. First they ask how much you would give to save 2,000 birds. You say, "$100." Then they ask you the same thing again, just changing the number. You still say, "$100. It's all I have." So what's wrong with that?

Agreed: if I assume that there's a hard upper limit being externally imposed on those answers (e.g., that I only have $80, $78, and $88 to spend in the first place, and that even the least valuable of the three choices is worth more to me than everything I have to spend) then those answers don't demonstrate interesting scope insensitivity.

This entire singularity thing seems, to put it very politely, misguided. Have you read the book "On Intelligence"? I'd say that computers are nowhere near becoming intelligent. The ability to play chess is not intelligence.

The author of this article, Facing the Singularity, is a pseudo-intellectual who is spreading garbage over the Internet. What the world needs at this time is true wisdom, which you will find in the Scriptures only. All human endeavors are coming to an end. Look for God while there is still time.

It is a pity that you use creationists as an example here, since I think that this is exactly how evolutionists think and act. The evidence that you say is so strong in support of common descent is just not that strong. Endogenous retroviruses are just not a slam dunk at all, and I say that as someone with a biochemistry degree.

The main reason that this is a really bad example is that it involves historical evidence, not empirical, and it involves origins, which is, to say the least, highly speculative due to the historical distance. While evolutionists DO have the advantage of appealing to natural processes, and IDists do not appeal to current processes (though they don't deny natural selection or various recombination events), the latter do contest the supposed creative ability of evolution to produce novel features, and this is eminently reasonable at this time.

Your example of self-deception with creationists is poor for many reasons. For example, you speak of the missing-link tactic of creating "two more every time one is suggested." While this is a cheap dodge, it does bring up some critical points which evolutionary believers also ignore: how similar, and by what measure, should two things be to be considered a definite link with no need to insert another? Pure morphology has turned out to be a bust when we consider molecular evidence. And the latter has shown that our assumptions about relatedness are highly speculative, if not so simple that they don't provide ANY useful relational evidence.

Like to see how missing links really work? Google 'missing link found,' check out the recent supposed human links found, and see how many have turned out to be spurious: nearly ALL of them. They're trumpeted from the media housetops when they're found, but no one peeps when they are debunked under scrutiny, as they almost universally are. This is the corollary for your example. Evolutionary believers fail to consider counter-indications seriously because it is a worldview issue.

I get ruffled when IDists or creationists are paraded as examples of brainwashing or self-deception, primarily because I was an evolutionary disciple as a science major and found my way out of that system into one where I concluded for my SELF that logic and common sense indicate a designer/creator.

No doubt evolution is a simplified rule set, but in empirical tests, as well as in historical interpretation of data, it has many failings which, as Luke has pointed out for certain creationists, evolutionary believers shy away from, hiding in self-deception in order to keep their beliefs safe.

But this is not a post about creation/evolution. My point was that his use of creationists was a poor choice because (a) creationism is believed by a majority of Americans, and so will turn them off from his main point, and (b) the idea that the issue is settled scientifically is dubious, since origins science is more interpretation than demonstrable fact, and both sides of that debate have strong ideological reasons to believe and ignore scientific reasons to doubt.

(a) creationism is believed by a majority of Americans, and so will turn them off from his main point,

Can people who believe in a God that benevolently created us and looks over us even come to consider the possibility of existential dangers or a human-steered Singularity? Frankly, if they are creationists, I think they are largely irrelevant to a Singularity discussion until they shed such beliefs.

One more thing. If you want a wider audience to access the point you are making (remember how many people are creationists here in the US), you should use a more accessible and universally accepted example, like the Japanese soldier one you used. If you want a contemporary example, choose something there is more agreement on, or people will miss your point. It's like calling your opponent a Nazi: you've already lost the argument even if you are right.

I suppose if you are only addressing the skeptical audience, you could use such an example, the way I could use the example of atheists who ignore the obviousness of God's existence as witnessed in creation if I were talking to Christians. But if I am trying to also reach atheists, perhaps I would use a different example.

The lack of responses and negative scores on my comment show me that (1) it is easier to vote down a post than post a reasoned response, and (2) it is easier to scoff at opponents and think them fools than confront one's own self-deceptive behaviors, the very purpose of Luke's post.

The lack of responses and negative scores on my comment show me that (1) it is easier to vote down a post than post a reasoned response, and (2) it is easier to scoff at opponents and think them fools than confront one's own self-deceptive behaviors, the very purpose of Luke's post.

No, it is simply that LW has covered these issues and considers them solved* and so downvotes/ignores people asserting otherwise.

*the weight of evidence points towards evolution, and every point proposed by proponents of creationism and ID has been refuted (do you have a distinctly novel and original argument for creationism/ID? If you don't, then you are wasting your time).

In the Chapter titled, 'No God to Save Us', you said, "We are often weak and stupid, but we must try, for there is no god to save us. Truly terrible outcomes may be unthinkable to humans, but they aren’t unthinkable to physics."

God gave us a beautiful world, but when man developed, built, and used HAARP, it was not God's idea. God's idea was for us to enjoy and care for the world He provided, the vacation spot of the known universe, and He has provided the peace and love, within every human, that is needed to make it the Heaven God intended here on Earth. We have allowed a few with unconscious agendas of greed to rip off our Heaven. It is like God gave a diamond ring to a gorilla. The gorilla did not see its true value and has tossed it away.

This was man's doing, not God's. God gave us this world, where He is the host and we the guests. He made us so that we are the host and He is to be the guest. Yet so many have turned a blind eye to the beauty around us and have destroyed the place in record time. They have not invited the Guest, our Host here on Earth, into their hearts.

When man built HAARP and put it to use, he condemned all of mankind. Physics tells you that when you increase the electromagnetic field of an object, its magnetism increases as well. This has made our Earth and our solar system (something that could have lasted much longer) the losing end of a tug of war with another magnet with a much bigger pull. With the billions of watts HAARP sends into our atmosphere, we have doomed ourselves, as anything with a greater magnetic force will pull us in. I submit that the year HAARP was first activated is the same year recorded as the beginning of the hot years and the beginning of not only global warming but the heating up of our entire solar system. We are being pulled in by a much greater magnet, a black hole.

There is nothing we can do now but prepare ourselves for the inevitable by finding that love, that peace that is within us, whose value the "gorillas" have disregarded. There is peace within us all, even in the "gorillas", who have chosen to arrogantly ignore it. It is our privilege to acknowledge the peace and humanity within us, even now, although it is too late to save us from our own stupid and selfish misdeeds. God did not do this to us. We did it to Him by ignoring and devaluing the beauty of humankind, by ignoring our own hearts.

I am a physicist. I am, as we speak, working on a project related to magnetism. I know exactly how magnetisation works. Take it from a domain expert: what you are saying is nonsense. Black holes are not magnets. We know what magnets do to the Earth and the solar system, and it does not involve pulling in anything. Even if it did, changing the Earth's magnetisation could not do this. I am not an expert on HAARP, but if it had made large changes to anything magnetic we would have noticed - the whole point is to measure magnetic changes.
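For readers who want numbers, here is a back-of-envelope sketch of why the energy scales involved make the claim implausible. The figures are my own assumptions, not taken from either comment above: roughly 50 microtesla for the geomagnetic field at the surface, and a transmitter power on the order of a few megawatts for HAARP (public descriptions cite about 3.6 MW, far from "billions of watts"). Even granting a full year of continuous operation, and pretending all of that energy could somehow couple into Earth's magnetic field (it cannot), it would amount to a tiny fraction of the energy the field already stores:

```python
import math

# Back-of-envelope comparison: one year of HAARP output versus the energy
# stored in Earth's magnetic field. All figures are assumed round numbers.
B_SURFACE = 5e-5            # tesla: typical geomagnetic field at the surface
MU_0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A
R_EARTH = 6.37e6            # Earth's radius in metres
HAARP_POWER = 3.6e6         # watts: roughly HAARP's stated transmitter power
SECONDS_PER_YEAR = 3.156e7

# Magnetic energy density u = B^2 / (2 * mu_0), integrated naively over a
# sphere of Earth's radius. This underestimates the true field energy
# (the field extends far beyond the surface), which only strengthens the point.
energy_density = B_SURFACE ** 2 / (2 * MU_0)              # J/m^3
earth_volume = (4.0 / 3.0) * math.pi * R_EARTH ** 3       # m^3
geomagnetic_energy = energy_density * earth_volume        # joules

# Energy HAARP could radiate running flat-out for a full year.
haarp_energy_per_year = HAARP_POWER * SECONDS_PER_YEAR    # joules

ratio = haarp_energy_per_year / geomagnetic_energy
print(f"geomagnetic field energy ~ {geomagnetic_energy:.1e} J")
print(f"one year of HAARP output ~ {haarp_energy_per_year:.1e} J")
print(f"ratio                    ~ {ratio:.1e}")
```

The ratio comes out around one part in ten thousand, and that is before noting that radio transmissions into the ionosphere do not magnetize the planet at all, and that a black hole's pull is gravitational, not magnetic.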

In a situation this specific, it seems to me to be worthwhile to reply exactly once, in order to inform other readers. Don't expect to change the troll's opinion, but making one comment in order to prevent them from accidentally convincing other people seems worthwhile.

So it's not impossible that MysTerri will retract this particular claim, and that's a decent hook for doubting the rest of the theory. Other responses include accusing me of not being an expert (or lying), renouncing respect for physics, or going content-free.

Also, I'm not sure why you reply here rather than in private, given your reasons.