This summer, Elon Musk spoke to the National Governors Association and told them that “AI is a fundamental risk to the existence of human civilization.” Doomsayers have been issuing similar warnings for some time, but never before have they commanded so much visibility. Musk isn’t necessarily worried about the rise of a malicious computer like Skynet from The Terminator. Speaking to Maureen Dowd for a Vanity Fair article published in April, Musk gave an example of an artificial intelligence that’s given the task of picking strawberries. It seems harmless enough, but as the AI redesigns itself to be more effective, it might decide that the best way to maximize its output would be to destroy civilization and convert the entire surface of the Earth into strawberry fields. Thus, in its pursuit of a seemingly innocuous goal, an AI could bring about the extinction of humanity purely as an unintended side effect.

This scenario sounds absurd to most people, yet there are a surprising number of technologists who think it illustrates a real danger. Why? Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.

Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share? This hypothetical strawberry-picking AI does what every tech startup wishes it could do — grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly. The idea of superintelligence is such a poorly defined notion that one could envision it taking almost any form with equal justification: a benevolent genie that solves all the world’s problems, or a mathematician that spends all its time proving theorems so abstract that humans can’t even understand them. But when Silicon Valley tries to imagine superintelligence, what it comes up with is no-holds-barred capitalism.

In psychology, the term “insight” is used to describe a recognition of one’s own condition, such as when a person with mental illness is aware of their illness. More broadly, it describes the ability to recognize patterns in one’s own behavior. It’s an example of metacognition, or thinking about one’s own thinking, and it’s something most humans are capable of but animals are not. And I believe the best test of whether an AI is really engaging in human-level cognition would be for it to demonstrate insight of this kind.

Insight is precisely what Musk’s strawberry-picking AI lacks, as do all the other AIs that destroy humanity in similar doomsday scenarios. I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea. Then I realized that we are already surrounded by machines that demonstrate a complete lack of insight; we just call them corporations. Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this capacity in people by demanding that they replace their own judgment of what “good” means with “whatever the market decides.”

Because corporations lack insight, we expect the government to provide oversight in the form of regulation, but the internet is almost entirely unregulated. Back in 1996, John Perry Barlow published a manifesto saying that the government had no jurisdiction over cyberspace, and in the intervening two decades that notion has served as an axiom to people working in technology. Which leads to another similarity between these civilization-destroying AIs and Silicon Valley tech companies: the lack of external controls. If you suggest to an AI prognosticator that humans would never grant an AI so much autonomy, the response will be that you fundamentally misunderstand the situation, that the idea of an ‘off’ button doesn’t even apply. It’s assumed that the AI’s approach will be “the question isn’t who is going to let me, it’s who is going to stop me,” i.e., the mantra of Ayn Randian libertarianism that is so popular in Silicon Valley.

The ethos of startup culture could serve as a blueprint for civilization-destroying AIs. “Move fast and break things” was once Facebook’s motto; they later changed it to “Move fast with stable infrastructure,” but they were talking about preserving what they had built, not what anyone else had. This attitude of treating the rest of the world as eggs to be broken for one’s own omelet could be the prime directive for an AI bringing about the apocalypse. When Uber wanted more drivers with new cars, its solution was to persuade people with bad credit to take out car loans and then deduct payments directly from their earnings. They positioned this as disrupting the auto loan industry, but everyone else recognized it as predatory lending. The whole idea that disruption is something positive instead of negative is a conceit of tech entrepreneurs. If a superintelligent AI were making a funding pitch to an angel investor, converting the surface of the Earth into strawberry fields would be nothing more than a long overdue disruption of global land use policy.

There are industry observers talking about the need for AIs to have a sense of ethics, and some have proposed that we ensure that any superintelligent AIs we create be “friendly,” meaning that their goals are aligned with human goals. I find these suggestions ironic given that we as a society have failed to teach corporations a sense of ethics, that we did nothing to ensure that Facebook’s and Amazon’s goals were aligned with the public good. But I shouldn’t be surprised; the question of how to create friendly AI is simply more fun to think about than the problem of industry regulation, just as imagining what you’d do during the zombie apocalypse is more fun than thinking about how to mitigate global warming.

There have been some impressive advances in AI recently, like AlphaGo Zero, which became the world’s best Go player in a matter of days purely by playing against itself. But this doesn’t make me worry about the possibility of a superintelligent AI “waking up.” (For one thing, the techniques underlying AlphaGo Zero aren’t useful for tasks in the physical world; we are still a long way from a robot that can walk into your kitchen and cook you some scrambled eggs.) What I’m far more concerned about is the concentration of power in Google, Facebook, and Amazon. They’ve achieved a level of market dominance that is profoundly anticompetitive, but because they operate in a way that doesn’t raise prices for consumers, they don’t meet the traditional criteria for monopolies and so they avoid antitrust scrutiny from the government. We don’t need to worry about Google’s DeepMind research division, we need to worry about the fact that it’s almost impossible to run a business online without using Google’s services.

It’d be tempting to say that fearmongering about superintelligent AI is a deliberate ploy by tech behemoths like Google and Facebook to distract us from what they themselves are doing, which is selling their users’ data to advertisers. If you doubt that’s their goal, ask yourself, why doesn’t Facebook offer a paid version that’s ad free and collects no private information? Most of the apps on your smartphone are available in premium versions that remove the ads; if those developers can manage it, why can’t Facebook? Because Facebook doesn’t want to. Its goal as a company is not to connect you to your friends, it’s to show you ads while making you believe that it’s doing you a favor because the ads are targeted.

So it would make sense if Mark Zuckerberg were issuing the loudest warnings about AI, because pointing to a monster on the horizon would be an effective red herring. But he’s not; he’s actually pretty complacent about AI. The fears of superintelligent AI are probably genuine on the part of the doomsayers. That doesn’t mean they reflect a real threat; what they reflect is the inability of technologists to conceive of moderation as a virtue. Billionaires like Bill Gates and Elon Musk assume that a superintelligent AI will stop at nothing to achieve its goals because that’s the attitude they adopted. (Of course, they saw nothing wrong with this strategy when they were the ones engaging in it; it’s only the possibility that someone else might be better at it than they were that gives them cause for concern.)

There’s a saying, popularized by Fredric Jameson, that it’s easier to imagine the end of the world than to imagine the end of capitalism. It’s no surprise that Silicon Valley capitalists don’t want to think about capitalism ending. What’s unexpected is that the way they envision the world ending is through a form of unchecked capitalism, disguised as a superintelligent AI. They have unconsciously created a devil in their own image, a boogeyman whose excesses are precisely their own.

Which brings us back to the importance of insight. Sometimes insight arises spontaneously, but many times it doesn’t. People often get carried away in pursuit of some goal, and they may not realize it until it’s pointed out to them, either by their friends and family or by their therapists. Listening to wake-up calls of this sort is considered a sign of mental health.

We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior. Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations. Individuals often reevaluate their priorities after experiencing a personal wake-up call. What we need is for companies to do the same — not to abandon capitalism completely, just to rethink the way they practice it. We need them to behave better than the AIs they fear and demonstrate a capacity for insight.

"There is no justice in the laws of nature, no term for fairness in the equations of motion. The Universe is neither evil, nor good, it simply does not care. The stars don't care, or the Sun, or the sky.

But they don't have to! WE care! There IS light in the world, and it is US!"

The author of this piece is basically taking a serious comment about a potential future problem (humanity creating servants that work far too well and cannot easily be stopped, à la The Sorcerer's Apprentice), and interpreting it entirely as an allegory about present problems.

The trouble with this is that if you ignore the future threat because you're too busy talking about the allegory, you risk blundering into disaster. People smugly saying to themselves "the real problem isn't global warming, it's capitalism!" are completely ignoring the object-level problem that the globe is in fact getting physically warmer.

An AI created by a team of anarchists, or Soviet computer programmers, or a team of computer scientists who give absolutely zero fucks about profit or the economic implications of their work would be just as much a risk here as an AI created by anybody else.

...

Furthermore, even if we treat the whole thing as an allegory and an excuse to psychoanalyze Silicon Valley billionaires... The danger of "the created being efficiently does what it was meant to, and works too well" was originally imagined by people who are NOT rich and powerful capitalists. Indeed, the danger was imagined long before capitalism came to dominate the world, as the fact that Goethe wrote "The Sorcerer's Apprentice" in 1797 illustrates. For that matter, there's a similar story dating back to ancient Rome.

The author of this piece seems entirely ignorant of this history, which sadly fails to surprise me.

I could’ve penned a better critique of the capitalist mentality and its dangers.

First, there is the fetishization not just of material goods but also of individualism. Second, there is the desire to succeed at all costs.

These factors make capitalist bastards ten times more dangerous than any team of “anarchists” or whatever. Add to that single-handed control over enormous amounts of capital, and the recipe for disaster is ready.

Before AI was the bogey-man, it was "kids these days!" Same shit, different day: they fear the lessons we actually teach those who follow us.

[EDIT]: Also "nouveau riche! We can't trust them!"

Rule #1: Believe the autocrat. He means what he says.
Rule #2: Do not be taken in by small signs of normality.
Rule #3: Institutions will not save you.
Rule #4: Be outraged.
Rule #5: Don’t make compromises.

I could’ve penned a better critique of the capitalist mentality and its dangers.

First, there is the fetishization not just of material goods but also of individualism. Second, there is the desire to succeed at all costs.

These factors make capitalist bastards ten times more dangerous than any team of “anarchists” or whatever. Add to that single-handed control over enormous amounts of capital, and the recipe for disaster is ready.

Just to be clear, are you agreeing with the article's condemnation of capitalism, or with its dismissal of the potential for runaway AI to be a serious danger in the future?

The former can stand on its own merits. I have no impulse to defend the capitalist mentality.

The latter is problematic because it involves ignoring a physical threat in favor of seeing it as some kind of metaphor. It's as if an astronomer started warning us of a giant comet headed for the Earth, and people tried to deconstruct his warning and say he was only talking like this because of a subconscious desire to see the sky fall in retaliation for [snip psychobabble here].

As an analogy, global warming is a threat to humanity. Because capitalism is the current world order, capitalism is the system with its invisible hand on the tiller, and can easily be blamed for global warming. However, if the world were socialist, global warming would still be possible. If enough fuel was burned, and if enough scientists were ignored, the same result would obtain, and the globe would still warm.

Hopefully, the socialist governments of this alternate world would recognize and avoid the problem. But it is imaginable that a socialist government might fail to address the problem- say, if climate science suffered from a bout of something like Lysenkoism, or if the prevailing global ideology was something like Mao's "Man Must Conquer Nature" mindset.

This is because the problem has an existence totally independent of economics and sociology and human opinions. The problem is real, it has a physical existence that has nothing to do with how we think about it or why we do the things that cause it.

...

Likewise, runaway AI is a problem that is real in that if you program a machine to maximize [simple thing], and you make the machine many times more intelligent than a man, the machine is likely to apply its superhuman intelligence in ways that man might not desire.

If a socialist government with a planned economy builds a master computer to maximize the output of steel, and this computer is many times smarter than a man, we might return in a thousand years to find the computer has turned the whole Earth into steel and a pile of mine tailings. If the socialist government builds a master computer to maximize the number of people who are "very satisfied" with the rule of the master computer, we might return in a thousand years to find that the world is full of brains in jars constantly being injected with euphorics. These are exactly the same class of outcome we might expect from a capitalist organization doing the same thing.

Because the problem is not WHY you built a machine with potentially unlimited power and intellect, then irrevocably commanded it to pursue a goal that, taken to its conclusion, would be the ruin of the human species.

The problem is THAT you built such a machine and issued such a command to pursue such a goal.

I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea.

Standard total failure to comprehend how AI (of any kind) actually works, and specifically how common human notions of 'a good idea', and human motivation in general, fit into such a tiny, obscure part of the space of possible motivational systems. Cookie-cutter criticism of corporations is uninteresting without detail on how the legal framework should be modified to achieve the (vaguely specified) desired change.

Individualism is at the core of the problem. You can have many people vote for an outcome, and if the Earth and everyone is turned into a lump of steel, well, that’s at least what they wanted.

In case of capitalism, it may be that only Elon Musk or Mark Zuckerberg wanted something, but the rest are fucked along with them.

I do not disagree with anything you just said.

The problem is... if a superintelligence clever enough to manipulate humans with the same ease that humans use to manipulate dogs arises, then we have a problem regardless of whether the superintelligence was created by socialists or by capitalists.

If you thought Elon Musk was bad, you have no concept of how bad it could be to have a machine in charge that literally values only one thing. At least Elon Musk is still a hairless biped of the species Homo sapiens, who instinctively values the approval of other humans and can be motivated by evolved human reactions like embarrassment, fear, and the prospect of winning the love and trust of others. A machine for optimizing steel production is going to be much worse.

And again, this is the real problem.

If everyone votes to turn the Earth into a mass of paperclips, fine, at least everyone had a say in the decision.

But if everyone is mesmerized into doing so by a swarm of hypnotic frequency-projecting drones the AI obtained for other projects and repurposed without our realizing they could be repurposed that way... Not so good.

...

So again, the question is not "is capitalism pernicious," the question is "are AI superintelligences a potential threat to humanity if they emerge to value and optimize for only one thing?" And I would argue that the answer is yes. Especially if, as noted above, they are smarter than humans in the same sense a human is smarter than a dog.

Again, it's like global warming: condemning the world order from which the problem sprang is not a substitute for acknowledging the problem and seeking to prevent it.

I used to find it odd that these hypothetical AIs were supposed to be smart enough to solve problems that no human could, yet they were incapable of doing something most every adult has done: taking a step back and asking whether their current course of action is really a good idea.

Standard total failure to comprehend how AI (of any kind) actually works, and specifically how common human notions of 'a good idea', and human motivation in general, fit into such a tiny, obscure part of the space of possible motivational systems.

Yeah. Human motivation is not in any sense abstract, it is a very specific system evolved into our brains over several million years, in order to work with a system of neurochemistry that may be tens of millions of years older still!

Only the most basic principles of human motivation (e.g. survival, reproduction, tit-for-tat revenge) can even hope to translate into the mental universe of a truly alien species.

Self-projection is an acute and present danger. You can say that robots can turn the planet into waste; but human civilization seems to be just as capable of doing it, without any centralized coordination of a super-AI.

This is what the article was about. That capital itself is a machine.

If that allegory flew over the heads of the readers here because they’re already looking for John Connor... well... What can I say?

Condemning the world order is a start. Before this first step, no constructive discussion can begin. It is like arguing with a person who has developed a cancerous tumor that is slowly poisoning the body (an apt description of our civilization’s self-destructive potential, with rapid desertification, overfishing, and extreme pollution that poisons the entire food chain, with delayed effects soon to be seen over several generations)... but the person refuses to argue about the tumor, and instead would like to argue only about the poisons, and be detoxified, but keep the tumor.

Sure thing. I know sometimes the criticism goes too far, but not in this case. There are many visions of the future under a variety of systems, but the capitalist one requires constant expansion, otherwise the system collapses. It is designed like this. Primitive societies and autarkies can self-impose zero growth or stasis. Capitalism cannot, that would be equivalent to shutting down a running airplane engine.

Like I said, the article is a poor criticism nonetheless, as it fails to correctly connect the deep flaws of the order with direct consequences, then show how the order is incapable of controlling itself and finish with an unshakable proof of doom.

Look, my only objection to this article is that it portrays superintelligent AI as a chimera, an imaginary threat that only a bunch of rich Silicon Valley playboys could have conceived of. When this is not in fact the case, and superintelligent AI could take the job capitalism is doing over a period of centuries and finish it in a matter of years, faster than we could conceivably react to prevent it and probably competently enough that we can't prevent it.

It is believable that enough dissenting humans could stop capitalism. It is hard for us to predict with any confidence that we could stop an AI that grows exponentially more capable with each passing week or month.

...

So this is, again, like saying "global warming is not the real problem, the problem is capitalism." No, global warming IS a real problem, and capitalism is a real problem too, in part because of its contribution to global warming.

My criticism of the article is that it uses as a launchpad the premise that mocking people trying to warn you about a potential threat to civilization is safe, because those same people present a different threat to civilization. Which is nonsensical, for a number of reasons. Among them: the AI risk alarm bell was rung LONG before Elon Musk got on the bus, and was rung by people who are not themselves powerful, wealthy capitalists.

I think the late Asimov made a remark that rebellious robots destroying their masters was a trope of 1930s sci-fi that particularly annoyed him. These apocalyptic visions are not constructive in essence, even if they are true.

Worse still, if the warnings by men like Musk are true, then we should double down on large-scale Luddism and anti-corporate hostility, because it would be the only way out of the situation.

The article actually mentions that the elite is pretty chillaxed about this impending AI doom. Which means they think either that we will take these warnings at face value and act accordingly, destroying them in the process (unlikely), or that we will just get distracted by their doomsaying and once again forget who the Architect is.

Both are threats, and somewhat interconnected ones. Both must be addressed, along with many other issues.

Immediately suspect anyone who tries to simplify the problems of the world by saying "(X) is the REAL bad guy, and everything else is a distraction."

"Well, Grant, we've had the devil's own day, haven't we?"

"Yes. Lick 'em tomorrow though."

-Generals Sherman and Grant, the Battle of Shiloh.

"They are nearer to me than the other side, in thought and sentiment, though bitterly hostile personally. They are utterly lawless - the unhandiest devils in the world to deal with - but after all their faces are set Zion-wards."- Lincoln on radical Abolitionists.

"You need to believe in things that aren't true. How else can they become?"-Terry Pratchett's DEATH.

Radical changes in corporate behaviour and outcomes could be achieved with such relatively minor changes as a partial repeal of limited liability (civil liability up to a small fixed amount attaches on purchase of a share; cumulative with substantial shareholdings), overhaul of corporate tax and capital gains tax code to incentivise appropriate social goods, progressive corporation tax (with company size), and tweaks to antitrust/merger and board member regulations. Relatively minor compared to a completely different form of government, I mean. Obviously this is infeasible in the political environment of most first world countries, but then so is communism or similarly radical changes. The annoying thing is that these powerful behaviour modifiers are not only barely used, due to overprivileging of corporate freedom from a false analogy to individual freedom, they're not even considered by the so-called radicals, who skip over consideration of sensible changes to the ground rules in favour of silly autocratic fantasies (that history has already proven over and over to be utterly ineffective and usually murderous).

There is actually an analogy to AI here as well; non-trivial AI systems are extremely (if non-linearly) sensitive to changes in fitness/utility function and can exhibit radically different behaviour from small changes to goal weights. If people gave up on deployments without even trying goal system tweaks, hardly any machine learning systems would get deployed. In practice there is always lots of iteration and tweaking to get the desired behaviour. In essence the problem with recursively self-improving AI is that it is likely to self-improve (much) faster than issues can be discovered/diagnosed/fixed.
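That sensitivity to goal weights can be shown with a deliberately silly toy sketch, echoing the article's strawberry scenario. Every name and number below is invented for illustration; this is not any real AI system, just an agent that picks whichever plan maximizes a weighted utility, and flips to a radically different behaviour when one weight is nudged.

```python
# Toy illustration: candidate plans scored on two objectives.
# All plans and numbers are made up for this example.
PLANS = {
    "pick_berries_by_hand": {"strawberries": 10, "land_converted": 0},
    "automate_one_farm":    {"strawberries": 100, "land_converted": 1},
    "convert_all_land":     {"strawberries": 10**6, "land_converted": 10**5},
}

def best_plan(side_effect_weight):
    """Return the plan maximizing strawberries minus a weighted side-effect penalty."""
    def utility(outcome):
        return outcome["strawberries"] - side_effect_weight * outcome["land_converted"]
    return max(PLANS, key=lambda name: utility(PLANS[name]))

print(best_plan(side_effect_weight=100.0))  # heavy penalty: picks berries by hand
print(best_plan(side_effect_weight=1.0))    # small tweak to one weight: converts all land
```

A two-orders-of-magnitude change in one scalar weight takes the agent from the most cautious plan to the most destructive one, which is the sensitivity the paragraph above describes.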

I see what you’re trying to say, Starglider, but minor changes would not change the power structure. A change that does not alter the power structure can and will be easily rolled back as a minor nuisance, should it prove inconvenient to those in power.

Case in point: Trump and climate change.

The theory of small-step reform is already beyond salvaging. The radicals don’t ignore it because they are bad and mean. They ignore it because it hasn’t provided a god damn thing over the last several decades.

I think the late Asimov made a remark that rebellious robots destroying their masters was a trope of 1930s sci-fi that particularly annoyed him. These apocalyptic visions are not constructive in essence, even if they are true.

There is a profound misunderstanding here, and a profound difference between what Asimov is complaining about and what is actually a danger. Which is a result of Asimov not really understanding how intelligence or the programming of intelligence work, because no one understood that.

Nevertheless, Asimov actually did have a valuable insight: namely, that machine intelligence could present us with problems not because of active, deliberate rebellion, but because of unexpected consequences of its own normal functioning. The "Powell and Donovan" stories were among the first good stories that were in any meaningful sense 'about computers,' precisely because they capture the experience we today know as "alpha testing" and "debugging."

But Asimov missed a fundamental, important point, or rather, failed to deduce it from first principles generations before AI researchers discovered it. He is not to be condemned for this, but because he did not know this important thing, he is not a true authority on AI risk. No more than Newton should be considered an authority on gravity whose opinions override the more recent discoveries of Einstein and others.

See...

Asimov thought that of course no one would design an artificial intelligence that might cause ruin or disaster. Because Asimov thought it would be easy to design the goal structure of an artificially intelligent machine. That it would be easy to program something like the Three Laws of Robotics into the machine, so that you could construct a safeguard that basically said "hey, no destroying civilization or creating a dystopia in order to fulfill your other directives." And the AI would just shrug and say "okay," and remember that rule.

This is very easy to believe if your understanding of AI is based on anthropomorphization, as nearly everyone's is.

But in real life, anthropomorphizing AI is a fallacy. There is no single line you can write into an AI's code that says "no causing disasters!" When people actually sat down and started thinking about how to implement something like Asimov's Three Laws, it turned out to be very hard to do.

There are two responses to this. One is to think "wow, maybe we need to put less effort into equipping our AIs with ever-greater processing power, and more into equipping them with something like the Three Laws of Robotics so they don't abuse the processing power they have."

The other is to just assume by default that the robots will follow something like the Three Laws, that there is NO WAY a thing created by humans can or would do anything other than what humans intuitively expect it to do.

The former is reasonable. The latter... is directly contrary to every experience humanity has ever had with computers. Computers never do what we intuitively expect, they do what is a direct and logical consequence of their own programming. If we fail to program something into the machine, the machine will not desire it.
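The point that a machine pursues exactly what was written into its objective, and nothing that was left out, can be sketched in a few lines. Again, this is a toy with invented names, not a real AI system: the reward function mentions only strawberries, so the unstated "don't trample the flowerbed" constraint carries no weight at all.

```python
# Toy sketch of the specification problem: only strawberries were
# programmed into the objective. All names here are invented.
ACTIONS = {
    "pick_carefully": {"strawberries": 5, "flowerbed_trampled": False},
    "drive_harvester_through_garden": {"strawberries": 50, "flowerbed_trampled": True},
}

def reward(outcome):
    # The objective as actually written: only strawberries count.
    # Nothing about flowerbeds appears here, so they cannot matter.
    return outcome["strawberries"]

chosen = max(ACTIONS, key=lambda a: reward(ACTIONS[a]))
print(chosen)  # the harvester wins: the unstated constraint was never part of the objective
```

The machine isn't being malicious or stupid; the trampling is a direct and logical consequence of an objective that never mentioned it.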

The article actually mentions that the elite is pretty chillaxed about this impending AI doom. Which means they think either that we will take these warnings at face value and act accordingly, destroying them in the process (unlikely), or that we will just get distracted by their doomsaying and once again forget who the Architect is.

Wake up Neo and all that.

The warning is simple:

"It is dangerous to develop an AI that has the potential to self-modify, or the capacity to be far more intelligent than a human being, WITHOUT some way of being assured its goal system is rich and flexible enough to be compatible with human life."

The fact that a billionaire starts echoing this, years after other people including AI researchers have said the exact same fucking things, does not automatically become a reason to start ignoring the warning.

Furthermore, the billionaires doing the echoing may well think they ARE helping (say, by funding researchers who work on AI goals at the same time they work on everything else). Or that they are powerless to change things (because if they don't develop more advanced AI, someone else will). Or that they can stop whenever they want.

All of which are at best unwise and at worst incredibly stupid, but that doesn't automatically mean that the warning itself was wrong.

Haven't you ever met someone who thinks global warming is a danger, and still drives an SUV? Cognitive dissonance is a powerful thing.

...

That being said, if you take that warning and say "okay, AI research that outruns our ability to ensure that the AI has stable or benevolent goals is bad, let's restrain tech industry billionaires NOW!" ... That is fine. I have no problem with that at all. I have no love or loyalty for those tech billionaires. I just don't want to see people start deciding that real physical problems that could destroy our civilization don't matter, purely on the grounds that said tech billionaires claim those problems do matter.

If Donald Trump says that a giant comet hitting the Earth would be bad... I despise Donald Trump. But his opposition to getting hit by a comet doesn't make me LESS likely to believe that getting hit by a comet would be bad.

The theory of small-step reform is already beyond salvaging. The radicals don’t ignore it because they are bad and mean. They ignore it because it hasn’t delivered a god damn thing over the last several decades.

Firstly, this is bullshit, because environmental regulation, the minimum wage, and anti-discrimination in hiring were all 'small-step' reforms with undeniably positive outcomes. In the last few years the US entered a period of limited, probably temporary rollbacks to some of these, but the cumulative difference in outcomes versus having had no reforms since even 1970 is massive. Secondly, 'small-step reform' of the actual legal basis for public and private corporations hasn't actually happened in the last several decades, or even the last century. There has been an increase in banking regulation, but that only affects the operations of a fraction of one sector.

The legislative underpinnings of the public corporation model (ownership and taxation) have not changed significantly for over a century. Anti-trust and the introduction of corporate taxation around 1900 were it; the liability model hasn't changed for even longer. Obviously there is substantial resistance to changing this, but there's substantial resistance to, say, single-payer healthcare as well. The reason the legal basis for corporations is not likely to change, whereas socialised healthcare or gun control are politically possible (or at least not unthinkable), is that liberals can articulate an actual programme and potential benefits for the latter, whereas they can't seem to deal with corporations in anything but a throw-hands-up-in-the-air, 'they're evil but there's nothing we can do' or 'destroy the whole system' way. Maybe this is 'medium-step reform' rather than 'small-step reform' because it is somewhat disruptive (the wailing of economists at the imposition of liability on shareholders would be incredible to behold), but it is massively less disruptive than trying to replace representative government and/or eliminate corporations as the primary means of organising labour into productive enterprises.

Land value tax is the one fundamental, 'legal definition of ownership'-type proposal that I do hear a lot about in the UK, mostly due to property bubble effects. It's a fringe position, but at least it's well-defined and there are good arguments for it even from some professional economists. Of course a substantial fraction of the country has profited from the property bubble, so anything that opposes it is still politically unviable at present.

Case in point: Trump and climate change.

Trump has produced a lot of bluster, but very limited actual impact, which will probably grind to a halt when the Democrats take control of Congress and be reversed entirely when the US government swings back to Democratic control. On the subject of environmental regulation in general, most liberals do seem to think that stricter regulation can solve the problem within the context of the existing government and economic system, i.e. the problem is one of political support rather than feasibility. My argument was exactly that changes to the legal definition of companies can produce massive behaviour changes, but this is unlikely to get political support. Even if you believe that revolution is more politically feasible than democracy* in producing desired outcomes, it is still intellectually lazy of liberals to whine about corporations without examining the mechanics of how laws produce behaviour in collective entities and what laws could produce preferable outcomes.

* This is of course incorrect, not so much because historically revolutions have always caused massive collateral damage and communist regimes are always a disaster, but because contemporary first-world society has evolved mechanisms that are very good at co-opting all the most skilled and dangerous people, suppressing and misattributing symptoms, redirecting revolutionary sentiment into harmless (and competing) outlets, and criminalising any outliers.

The First World just robbed its way to riches, so whether it can adapt without continued robbery remains to be proven by history. So far the First World is in a tailspin, its workplaces automated away and its wages’ purchasing power collapsing. Historically, the status quo, and its support by brute force, have caused just as much damage as any revolution, if not more. Some revolutions were also triggers for radical reform in other societies trying to prevent one, so an accurate estimation of any consequences cannot be geographically or temporally limited.

Now that this is out of the way, the second point: that reform had “undeniably positive outcomes”. False. First World countries without a minimum wage exhibited similar, if not the same, wage growth. Likewise, real wage growth stalled regardless of whether there was, or is, a nationwide minimum wage system.

A lot of recent reading has caused me to massively rethink the history of human development. In short, you are simply optimistic and naive. I couldn’t give a crap about liberal whining and their understanding of corporations. I have seen quite a bit of the corporate underside, and it is foul, no matter what legalistic blah you would employ to excuse such entities.

Could you cite a large cooperative that organized large groups for an extended period in an effective manner?

Systems that scale well for a village do not necessarily provide a blueprint for how to sustain a global civilization.

Mondragon should be large enough to exceed that scale, yet it keeps functioning.

Mondragon is an example I'm willing to think more about, but notice that Mondragon exists and survives under the same rules that more conventional corporations do. If Mondragon can thrive now, it shouldn't take that radical a change of the existing order to enable many more Mondragons to thrive.

“Global civilization” is not what needs to be sustained. Humans are. Civilization might just as well be local, if it is the only way to keep solidarity.

If you would prefer solidarity in medieval farming villages to non-solidarity in a world with electrification and manufacturing, you're going to find a lot fewer people willing to sign up for solidarity on your terms than you otherwise would. Almost all of my willingness to support socialism is contingent on my expectation that socialism would be able to keep the lights on. Among other things, because I consider that important to whether I AM being sustained as a human.

Lights were on before the current global order. Electrification and other signs of progress (mass vaccination, etc) coincided with a period of intense warfare and fragmentation.

I think that, while poorly written, the article at least tried to make a point: that we have to think about our social order first, and everything else second. The social order largely defines what we produce, and how. Right now there is very little awareness of this.