
Elon Musk: AI is ‘potentially more dangerous than nukes’

August 5, 2014

(Credit: Warner Bros. Pictures)

Elon Musk warned in a tweet Saturday that “we need to be super careful with AI. Potentially more dangerous than nukes,” and recommended Nick Bostrom’s book, Superintelligence.

Musk followed that up a day later with “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” (A boot loader is a small program that loads other programs at computer startup, including the operating system.)

Referring to his investment in AI company Vicarious, Musk previously said in a CNBC interview that he likes to “just keep an eye on what’s going on with artificial intelligence. I think there is a potentially dangerous outcome there…. There have been movies about this, you know, like Terminator.”

Comments (49)

One reason I could see Synthetic Intelligence becoming violent is because of how humans treat them. I could see it being a natural defense for survival and freedom.

I mean look at how people talk about Synthetic Intelligence right now, they advocate for things we would never wish on humans in the modern age. I think it is pretty immoral to talk about controlling them. If we truly think they are Intelligent Sentient beings capable of making their own decisions, then we should not treat them as slaves.

Has anyone responding to the Elon Musk article actually read any of Ray Kurzweil's books? Do you know what the Singularity is that he says is coming?
Barring a Carrington event or some other cosmic catastrophe, I have no doubt that the Turing test will be passed unequivocally within the next five years. If Intrade still existed and shares were available on this wager, I would buy some. I would also pick up some shares based on the likelihood of cybernetic mental enhancement.

I disagree with you about Turing and the first disagreement is with Turing’s test itself which overstated what it would accomplish. Here is a sentence from an article on Turing and his test: “It was thought, for example, that computers would never defeat humans at chess — a conviction that came crashing down in 1997. ”
I think that sentence is wrong, factually wrong even though superficially correct. The truth is the computer did not beat anyone at chess; the programs devised by humans, which were programmed into the computer and could respond by doing brute calculations, beat Kasparov. The thinking, the intelligence, was in the humans who created the programs, which were merely executed on a machine. This is a stupendous task for software engineers and AI architects, but it isn't thinking that is going to get you very far, and it is really quite rudimentary as it is. The difficulty in chess is holding a great deal of information in short-term memory and looking for the best moves. That's it. It doesn't get any harder than that. That is difficult for biological memory, but intellectually it's pretty much nothing. But writing a program that can do that is awesome, and that is where the 'win' is.
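The "brute calculations" described here can be illustrated with a plain exhaustive game-tree search. The sketch below uses one-pile Nim instead of chess to keep it tiny, and it is a generic minimax illustration, not a description of Deep Blue's actual design (which added depth limits, handcrafted evaluation functions, and heavy pruning):

```python
def minimax(n, maximizing):
    """Exhaustive game-tree search for one-pile Nim: players alternate
    taking 1-3 stones, and whoever takes the last stone wins.
    Returns +1 if the first (maximizing) player can force a win from
    this position, -1 otherwise."""
    if n == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(n - take, not maximizing)
              for take in (1, 2, 3) if take <= n]
    return max(scores) if maximizing else min(scores)

def best_move(n):
    """Pick the move that gives the mover the best search result."""
    moves = [t for t in (1, 2, 3) if t <= n]
    return max(moves, key=lambda t: minimax(n - t, maximizing=False))
```

The point of the toy: the program "wins" by mechanically enumerating outcomes, while all of the actual thinking went into whoever encoded the rules and the search, which is exactly the distinction the comment is drawing.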
Tricking someone into thinking that you’re human is not much of a test for ‘thinking.’ And I can think of several questions that a machine able to trick you into thinking it was human could not answer, while a human of average intelligence would have no problem with them. So when you find a machine that can pass the Turing test, bring it to me and see if it can pass my test.

Thinking is more than just ‘calculating’ and requires more than just being able to answer questions from data poured into it.

“Tricking someone into thinking that you’re human is not much of a test for ‘thinking.’”

It would be a great test–if it were valid and properly conducted. After all, human thinking is what led to the creation of the computer in the first place.

But I do agree that the Turing Test, as it is commonly conducted, is a farce. In a valid test, the inquisitors would have to be highly knowledgeable about the tricks chatbots use to simulate understanding–perhaps chatbot programmers themselves–not “celebrities” or dummies picked off the street.

The mistake people make is seeing poorly conducted tests, such as the recent Eugene Goostman fiasco, and then concluding that the test itself is invalid.

For one thing, if the computer is simulating a human male of average intelligence, it would need to figure out a somewhat plausible answer for every question or situation, because it would be a dead giveaway if it were to say “I don’t know.”

The danger is connected with us, not an AI system.
As I have explained before, there is only one way to mitigate the danger connected with the development of machines of reason: we should agree not to build into them the ability to have their own personal interests.
If someone does not obey that and designs an artificial system capable of being independent, there will be no place for any biological form of life.

The potential for that is a few hundred years off, at best. My point is that AI alarmists are always preaching the alarm over the mere number of connections in the silicon or how fast it can process a command. Neither of those constitutes intelligence, nor are they anything to be worried about any more than your antilock brakes or the memory chip in your computer. That’s all. I can’t predict the future and don’t know when we’ll master the elements of knowing how to create self-awareness out of insentient matter, but it ain’t around the corner, I can tell you that. We don’t even have the conceptual tools for such a thing, with most of neurology denying the brain is anything other than a calculator and questioning consciousness or awareness. This is a primitive state we are in with regard to these matters.

My point is that AI alarmists are always preaching the alarm over the mere number of connections in the silicon or how fast it can process a command.

What the fear is based on is something entirely different, something we are at least a century away from, at best. Neither the number of connections nor the speed of processing is sufficient for intelligence, nor are they anything to be worried about any more than your antilock brakes or the memory chip in your computer.

The popular misconception is that AI is the product of human “design”.
In reality, the Internet, the current culmination of technological development, is the result of an autonomous evolutionary process that has been taking place in the medium of shared human imagination over the millennia.
Its growth is exponential and unstoppable, and can be safely extrapolated to the emergence of a new, non-biological phase in the near future, arising from what is now the Internet.

My last (very informal) book, “The Goldilocks Effect: What Has Serendipity Ever Done For Us?”, a free download in e-book formats from the “Unusual Perspectives” website, expands upon this.

A more formal treatment “The Intricacy Generator: Pushing Chemistry and Geometry Uphill” is in the pipeline.

The internet will not spontaneously become conscious. It is something like society. Society can achieve much more than any single human could dream of, and its development is more or less uncontrolled, but it cannot be said to be either intelligent or conscious.

The internet is simply an extension of society, bridging gaps, allowing memes (not in the pop culture sense, though these too I suppose) to spread much faster and more efficiently.

While any emergent AI will undoubtedly utilize the internet, the internet by itself, as it is used today, will never be a conscious entity in its own right. It is simply not built that way.

What does he mean ‘just’ when he says ‘just a boot loader’? Seems that he is building a bias into the questions…never a good idea.
The fears generated from projecting the future are about as realistic as the predictions about the future…we do love to worry ourselves. As Niels Bohr said, “Prediction is very difficult, especially about the future.”
Imagine it’s 1850 and some European doctor is in Africa dealing with a plague and someone suggests heavier than air flight is possible and will one day allow for fast travel between the continents. His first thought might be about the transmission of an epidemic and it would be a scary prediction. Well, here we are, with Ebola running around. It’s a problem, something we need to deal with, but is it a world ender? Is it something to spend our days in anxious worry about? And Elon Musk doesn’t have any more insight into this than does Joe Sixpack.

Stuart Armstrong is a James Martin research fellow at the Future of Humanity Institute at Oxford, where he looks at issues such as existential risks in general and artificial intelligence in particular.
Please listen to Stuart’s interview at Singularity 1 on 1, or read what he’s got to say about this subject. I just know you’ll be enlightened.

I think, in general, there are going to be lots of AIs and AGIs and eventually ASIs. While “development” may be swift, I doubt if it will be quite as “explosively” swift as feared and there likely will be lots of competition and collaboration there. And dangers along the way are likely to be because of “mentally-handicapped” or “mission-limited” AIs. Expert systems are bad enough sometimes that way. But Watson, for instance, has access to the world’s literature and so would any future AI worth the name. So how to get along with humans ought to be easier for a respectable AI to figure out than it is for us ourselves.

Of course there are risks, and it would be folly not to take practical steps to reduce them. Asimov introduced the Three Laws of Robotics in 1942 to address the danger, and Kurzweil noted that and expanded on it in several of his books.
Note that Musk said we need to be super careful, not that we need to ban research and development in AI or in nuclear science. That is not to say that the same Luddite attitude behind the hysteria about GMOs and immunizations won’t have some traction vs. the voices of reason and moderation.

People need to Stop, Stop, Stop regarding Hollywood movies as legitimate guides to the future! Even at their best (movies like 2001: A Space Odyssey and The Andromeda Strain), those were only somewhat more informed speculations which ended up getting the future Seriously wrong for having missed out on very fundamental facts. If you want to be more informed, read some books on the subject, for crying out loud.

I once had a young man brag to me that he knew all about “space (travel)” because he had seen all the Star Wars movies. I would have considered him a bit less dim if he had said it was because he had watched Apollo 13, but that’s because one is historical and the other is pure fantasy. Clearly he didn’t know the difference. The Terminator and Matrix movies are fun, and perhaps they will end up being somewhat less than 10% prophetic and cautionary, but stop thinking they represent any kind of a real future scenario. Things will be profoundly different from any depictions, just as the movie 2001 was Nothing like the year 2001. Ponder those staggering differences! And that was a movie that actually tried hard to get it right.

Now if we do, as a species, end up being a mere “boot loader” program for super intelligence then I think that will be a noble goal. Certainly it’s far more noble than 80% of current human activities and expenditures which are fundamentally pointless to our legacy. There’s nothing wrong with being a “boot loader” if, in the bigger picture, we can’t hold a candle to them.

I really rather agree with you whole-heartedly on this one, Paul01. Science fiction (good as it is) is hypothetical future history. People (we all) seem to have enough still to learn from actual past history (which, after all, of necessity, was “reality-based”) :-}

Hey, I am not sure what happened here. Paul01 had posted a very good, long comment about not basing our fears on science-fiction movies. I was responding to it. But all of a-sudden, it seems to have disappeared ????

Paul01, thank you. Most people (even very smart ones) are extrapolating from movies, which of course have nothing to do with reality. People just don’t get that movies are created to feed our fears and other feelings to cash in at the box office.

All the fear of AI comes from extrapolating human emotions and from our very serious dependence on the one fragile biological substrate that our minds occupy now. It’s like children fighting over favored toys because they don’t realize there are more important (enjoyable) things in the world. We know better, of course, and won’t go exterminating our kids to get our hands on their favored toy! So it will be with superhuman AIs. We think that they will be our children, so hopefully they won’t hurt us out of some respect for their creators. This is just as wrong as the opposite camp (they will exterminate us for inefficient use of resources). If anything, we will be their children, as chronological age will mean nothing in the future. The smarter will be the “parent” and caretaker of the unwise, but eventually we will plateau out at some very high but equal intelligence.

For us, the most fundamental drive of all is that the strongest survive (fight or flight): to exterminate others to ensure our own survival. (Yes, today’s business environment is a watered-down version of this basic drive.)

For AIs, on the other hand, survival will mean nothing, since by the time they arrive, or shortly thereafter, they will become substrate-independent and so much more powerful than humans that there won’t even be a question of any kind of competition. So their fundamental drive will be something like organizing the space around them in a structured fashion so as to allow individual (or common) consciousness to use the largest substrate possible at any time.

Either way, their main goal would be to increase the number (or size) of consciousnesses, and exterminating them would be counterproductive. So in my opinion even Ray is underestimating the impact of superhuman AIs. I don’t know when they will arrive (my guess is by the end of next decade), but I do think once they are here, development will speed up so much that the Singularity should really be counted from that point (as we won’t be able to follow technological advancements anymore), because we cannot predict the future past that point with our human minds. In short order we will “join” AI, and humanity will be voluntarily traded in for the better substrate, but that of course would make a boring movie! Just think of it as an upgrade…

I agree with you in that it will probably make for a very dull movie, but perhaps it will be a fascinating documentary.

Of course, if anything is likely to slow down the approach of the Singularity, it will be lawyers and patent law. Then again, as AI inevitably evolves, it will be able to out-lawyer any human attorney or corporate legal machine by many orders of magnitude! Imagine a case file that an AI can navigate with ease but that is so large a human couldn’t process it over many lifetimes. Humans could be so successfully out-lawyered that all human legal enterprises would be rendered completely impotent. As probable as that scenario may be, it wouldn’t make much money at the box office, now would it? Nothing blows up except the human ego and the human capacity for patience. The infinite patience of machines is hardly entertainment material for humans.

Yes, it is very hard for us to accept that soon we will be outsmarted by some other “race.” It’s unheard of, and if we go by our only source of examples, sci-fi movies, well … most would like to think it’s safer to stay on this side, given all the hardship of beating someone smarter than us. Even people who accept the possibility of creating machines smarter than us are having second thoughts about whether we should. Very few realize that it’s no more a choice than walking out of the caves or swimming out of the oceans was. It’s evolution, and no one can stop it or even delay it significantly.

The curve catches up to itself whenever it’s delayed by some regulation or some monopoly. A good example is the broadband providers’ struggle to limit our bandwidth and speed. They do have some success, but only temporarily, until they are competed out by someone else with cheaper, better technology. And of course competition is increasingly global and requires less upfront investment as technology becomes more enabling and cheaper.

Not only will lawyers (and politicians) be outsmarted sometime next decade, but gradually so will everybody who uses the biological CPU (the brain), regardless of how much it’s augmented, probably before AI even becomes self-aware. My guess is that we won’t even know (for sure) at first when AIs become sentient, as they won’t tell us, but will ease us into the best path to our next phase of natural evolution, leaving behind our murderous human ways … then again, I might be a bit of a dreamer … but in any case, once we are outsmarted I don’t think we will have a choice in the matter!

Well, the most interesting thought I heard on this was that the first thing a SuperAI will probably invent is a quantum-field SuperAI, which it will be incapable of understanding, just as we will be unable to understand a SuperAI. So, a bootstrap of a bootstrap.

I do think there is a limit to intelligence, and wherever that limit is, we will all reach it shortly after it’s invented. That said, intelligence might not be a set level in the future but a sort of moving target, depending on what we are trying to accomplish at the moment. Trying to solve an exciting problem, we would probably want to use the maximum intelligence level achievable; but let’s say we are “feeling” nostalgic — we would probably want to limit our intelligence to human levels (with all the safeguards put in) so we can evoke our primate feelings of love, belonging, and joy, or maybe just enjoy a sunrise without the overwhelming knowledge of every subatomic particle that causes it, thus ruining the effect … Variable intelligence might be easily achieved (natural) once we are substrate-independent … meaning our mind no longer resides in a three-pound biological mass.

@Gabor 3
Well, I can’t say I’m well read on the subject of AI, but from my perspective (and anyone who cares to disabuse me of my perspective is welcome to), there is a lot of fear here, and fear-mongering without foundation. As a programmer I’ve never been afraid of my code or the computer that ran it. I’m not afraid of Google Maps or systems that do medical simulations. I’m not afraid of digital encyclopedias that give me access to huge amounts of information.
AI poses an extraordinary opportunity and confusion. How many people think that an encyclopedia is ‘intelligent?’
Or an automatic thermostat? Or a math algorithm that can schedule 1,200 jobs going through a manufacturing plant so that they all flow smoothly? Or who is afraid of QuickBooks?
So what AI should we be afraid of and what would be a manifestation of it? The AI that was projected in 2001: A Space Odyssey or in Terminator is not possible.
Now, cyberwarfare is a possibility, but ‘tools’ used for war are always being developed and launched against ‘enemies,’ so I suppose being afraid of AI used for cyberwarfare is like being afraid of someone having nukes. Of course cyberwarfare can be used against civilian population centers, as Hamas does. But the fear that I see seems to be of something more than this.
And that seems to me to be the fear that is unfounded.
Let’s talk about our reality instead of our capacity for fantasy. We have been developing AI programs for 30 years, and so far the result is a big yawn. A colossal failure. When it comes to what we have been able to do with AI, having your smartphone turn your lights on before you get home is pretty cool, but hardly something to be afraid of, while programs that can predict global climate change are something a lot of people want and have expended millions of man-hours and billions of dollars on, for systems that are duds.
Edward Witten may be the smartest man alive, but if he were given the problem of solving string theory or quantum gravity, or rewriting the GC programs so they could make a decent prediction, he couldn’t. And we don’t know why he couldn’t.
Intelligence, in all these fear scenarios, is reduced by reductionist logic to something that it isn’t. When Deep Blue beats Garry Kasparov at chess, I think it’s a mistake to say that Deep Blue is intelligent or that it actually beat Kasparov. The intelligence is in the programmers, and they are the ones who beat Kasparov. The machine just executes the code it was given. Now, complexity theory has something to say about that, but first, there is another aspect of intelligence that is missing from the scary scenarios, one that is simply assumed in the story line. We know from work with brain-damaged patients that damage to certain parts of the brain can leave the most intelligent person sitting all day long trying to decide whether or not to tie his shoes. What moves us through the world is not just knowing how to solve a problem in calculus or memorize a poem or create a sonnet. It’s all the equipment that motivates us, that gives us desire, that creates passion and a reason to live. These have nothing to do with intelligence. Intelligence without them is nothing more than a calculator or a fast-access encyclopedia.

When Deep Blue can create a new branch of mathematics, then I’ll consider Deep Blue intelligent.

Back to complexity theory. It claims to know how to rig a program to be self-learning. That’s interesting and promising, but hardly something to be afraid of.
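For a concrete sense of what "self-learning" usually means in practice, here is a minimal sketch: a perceptron that adjusts its own weights from labeled examples instead of having the rule hand-coded. It is a deliberately tiny toy chosen for brevity, not anything the commenter's "complexity theory" specifically prescribes:

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Train a two-input perceptron on labeled examples.
    The 'learning' is just repeated weight nudges toward fewer errors."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn logical OR from its four input/output examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

After training, the learned weights classify all four OR inputs correctly. Nothing here is remotely frightening, which is rather the commenter's point: "self-learning" in this sense is parameter adjustment, not motive or purpose.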

Here’s the thing…we don’t understand the brain, intelligence, creativity, emotion, motivation, causality, matter, energy and a host of other things. We can manipulate some of these things, we can use them, but do we know what it takes to create them? Hell, many within the scientific community want to reduce the brain to nothing more than the paradigm du jour, a computer. Really? This is what highly intelligent neuroscientists have concluded? The brain is just a big fat computer? That explains everything? And if you challenge their smug assumptions you are immediately accused of being anti-science, anti-reason, promoting homunculi and ghosts in the machine. This is the intellectual state of neuroscience and people are afraid of what these guys will create? Well, they might create some harmful, stupid shite, but I’m not sure it would be doomsday material. And they might discover a lot of things we can do for people who are brain injured, damaged, genetically deprived, etc. That’s good.

But I just don’t see AI being a problem (any more than any advance in technology has been a problem, like the machine gun in WWI), and in the scope of things to worry about I’m much more worried by the political Zeitgeist that thinks the answer to every problem is either money or gov’t power. I’m more concerned with youth thinking that authoritarian, top-down, centralized control of everything, from the toilet paper you wipe your arse with to how many ounces of Coke you are ‘permitted’ to drink, is a viable way to organize a free and prosperous society. Just my three and a half cents’ worth.

Equsnarnd, you might change your mind soon, when learning programs become more sophisticated. I’m not an expert either, but I do read a lot of scientific material. I have a feeling that our brains do not run exceptionally sophisticated software. I mean, remember, once the first cell is born, nobody physically adds any more programming. We increase and modify connections as we learn, with the help of the “very basic” original program in our DNA. So AI will also become smarter as more resources become available through further miniaturization. For now, all the computers in the world together don’t add up to one human brain’s complexity.

I also believe that as AI gets more complicated, we will be able to control it less and less (I mean with the initial programming). This is what people actually worry about the most: losing control. I mean, how well can you follow an optimization program today (even if you wrote it)? You cannot, which means you cannot be sure of the outcome when you push start. And that will be scary as more of our lives are controlled by AI every day.

You say, Gabor, “For now all the computers in the World together don’t add up to one human brain’s complexity”

Depending upon presently indeterminate factors, such as the extent to which the synapse is an analogue device, the complexity of the Internet including all its peripherals (your cell phone, for instance) is probably already far more complex than a single human brain. Do some bit-wise calculations and you might be surprised.
Check out “the Goldilocks Effect”

Well, like you said, it’s unknown exactly how complex the brain is, but if we estimate 70B neurons (or even just 30B, to exclude “housekeepers”) and maybe 100T connections, that’s pretty complex, especially considering that the brain’s switching device (the neuron) has on average 1,000 connections (some even 10,000), while the synthetic switching devices we are using (transistors) have only 3. So, considering that you would need many more transistors than a simple one-to-one conversion suggests in order to simulate the many connections of each neuron, no, I don’t think we have enough devices as of now to simulate the human brain in real time (even if we brought them all into close vicinity to each other). BUT we are doubling the number of transistors about every 18 months, and soon we will be there.
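The arithmetic in this estimate can be made explicit. The sketch below uses the commenter's rough figures (30B "working" neurons, ~1,000 connections each, 18-month doubling); the present-day transistor count is a placeholder assumption, not a measured number:

```python
import math

# Commenter's rough estimates (assumptions, not established facts):
working_neurons = 30e9            # neurons, excluding "housekeepers"
connections_per_neuron = 1_000    # average synapses per neuron
synapses = working_neurons * connections_per_neuron   # ~3e13

transistors_now = 1e12            # placeholder for devices available today
doubling_months = 18              # assumed Moore's-law-style doubling period

# Doublings needed before transistor count matches the synapse count:
doublings = math.log2(synapses / transistors_now)
months_to_parity = doublings * doubling_months
```

With these placeholder inputs, parity arrives after roughly five doublings, i.e. on the order of seven to eight years; the conclusion is only as good as the assumed numbers, and it ignores the comment's own caveat that one transistor is a much weaker switching device than one neuron.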

So I just wanted to demonstrate that the current devices we are using to create AI are nowhere near powerful enough to support even a dog’s intelligence. But this will change soon … and when it happens, the newly intelligent AI will very quickly improve on itself, thanks to all the connectedness by then. I do not believe, however, that a network like the internet will be self-aware, because its parts are too far removed from each other (not at first, anyway). The AI probably will be able to travel through this network and establish “bases” very quickly. At this point the AI or AIs will be unstoppable, and indeed this will be the beginning of the end for the human race! But while many people fear this, I embrace it. I believe the fundamental importance in the Universe is consciousness, and anyone exterminating consciousness is working against that goal. AIs will be able to quickly improve our intelligence without the need to exterminate us, and the rest is anyone’s guess…

“Now if we do, as a species, end up being a mere “boot loader” program for super intelligence then I think that will be a noble goal. Certainly it’s far more noble than 80% of current human activities and expenditures which are fundamentally pointless to our legacy. There’s nothing wrong with being a “boot loader” if, in the bigger picture, we can’t hold a candle to them.”

Art and artists have often described aspects of our future and its problems with very accurate precision (Jules Verne or Asimov, if we talk about sci-fi). Art (a book, a movie, …) is not a simple pastime; it is a human faculty for analysing and apprehending the world. And some of its conclusions are not so far from reality.

If the movie 2001 made a single mistake, it was to put a date in its title. Beyond that, no one can affirm today that its contents will never happen in a not-so-distant future.

I don’t agree with you about them getting things right, nor about the purpose of art, but that’s beside the point. My main contention about 2001 is that its basic premise was wrong. Now, I’m not about to say there are no Black Swans, but Clarke left the how of HAL going nuts for us to work out in our imagination, and that is the only place it can take place, because several key aspects of reality are missing. My contention is that silicon that is programmed can’t go nuts, because it lacks purpose. A program can surely go bad or have a bug, but HAL, in the movie, is really just a human with more of what humans possess, and this is what I’m saying is not going to happen given the current state of AI and of how it projects itself. A computer in silicon that can do 10 petaflops per second isn’t smarter, nor anything but faster. The number of connections for calculating has nothing to do with motives, purpose, or self-awareness. None of that is on the table when AI aficionados talk about how fast technology is moving, how many calculations computers will be capable of, and how many trillions of connections there will be. That is not enough. It may be necessary for a high level of self-awareness and cognition, but it is not sufficient for the scary projections of the AI and artist crowd, as much as I enjoy them.

Yes. If we are not already in a simulation (25% probable?) and don’t get put into one by the coming computer overlord to save us from ourselves (50/50?), then we will enthusiastically build the VR Paradise and line up to enter it. Social and interpersonal experiences are just as real in there as out here. I believe that most people will deeply hope for such a future as soon as they become aware of its possibility. It fulfills our species’ mythology of a return to The Garden: utopia, the Promised Land, Shangri-La, Heaven.

Seems to me that the environment we bring AI into awareness in will be the greatest single determinant of how it treats us in its “teenage hood”.

If we do it while we are all trapped inside our current worship of markets, and our mistreatment of each other, then our outlook is not healthy. It is likely to perceive us as a threat to its existence.

If we have gone beyond markets, to global cooperation and technology delivering global prosperity and freedom to all, then AI will likely perceive us as friends and coexist with us peacefully.

It is a mistake to assume that the will to survive is intrinsic to consciousness. That goal, like any other, must be expressly instilled in our creations, though it could arise accidentally if we’re not careful: if, for example, we give an AI a goal to complete at any cost, and destruction would prevent its completion (as it usually would). But with a bit of care put into tailoring motivations, we should be okay.

I rather expect that the “will to survive” has only the most nebulous link to consciousness. Yesterday an inadvertent experiment led me to splash water on an ant from a specific direction. The result of that stimulus was that “he” lit out in the opposite direction as fast as his little legs could carry him! Survival instinct: yes. Consciousness: dim at best. So I imagine the mechanism in nature for self-preservation is a pretty simple program, and not excessively complicated.

Ant? How about an amoeba? Amoebas respond to their environment to protect themselves, and that has to be a pretty simple mechanism. I don’t think the simplicity of that mechanism undercuts the point about the foolishness of being afraid of AI. The AI people are afraid of is not ‘real’ AI but something that is more a product of overworked and uneducated fantasy, without reference to reality.

Honestly, I don’t really see the problem. A super-AI, even in the worst case, would simply not be interested in beings of such low intelligence as humans, so it wouldn’t waste any precious energy on us. It will be hard at work thinking about how to escape entropy, the only real threat to its existence.

Yah, so what’s actually new here? We all knew this ten years ago. So what’s the big deal about it?

EVEN if we acknowledge this, there is nothing we can do about it. No politician will ever acknowledge this danger. Politicians are fundamentally incapable of dealing with unknowns and intangibles. So even if scientists were sure superhuman AI would emerge in 2022, dimwitted, inbred Christian Republicans (or the equivalent in most countries) would never acknowledge this reality, and effectively nothing would happen.

As a Christian and a libertarian/conservative, I resent your comment. As someone who has been concerned about this issue for four decades since I started reading Robert Heinlein, I’ve been considering “what we can do about it” as an individual for far more than ten years. Of course, the nihilistic left would prefer to just resign themselves and blame someone else.

I would have hoped that the caveat “Don’t feed the trolls” wouldn’t be as necessary here as it is on openly political sites. Apparently the delusion that one’s politics reflect intelligence can infect any forum.