Tag: philosophy

Alrenous @Alrenous 2 hours ago
Finally, if you’re really confident in your philosophy, it should move you to action. Or why bother?
You moved to China. Good work.

Edit: I totally misread Alrenous here: he’s not saying “Change the world”, he’s saying “change your own life/environment”. So the below, while still, in my view, true and important, is not particularly relevant to his point. Oh well.

He makes a valid point that good knowledge cannot be achieved without trying things:

Alrenous @Alrenous 3 hours ago
Have to be willing to fail to do something new. Something new is patently necessary. NRx isn’t willing to fail. That’s embarrassing.

The problem with this is that neoreaction is the science of sovereignty. Like, say, the science of black holes, it is not really possible for the researcher with modest resources to proceed by experiment, valuable though that would be.

We have ideas on how to use and retain sovereignty, but less to say about how to achieve it. There is a great deal of prior art on how to gain power via elections, guerrilla warfare, coup d’état, infiltration; we don’t really have much of relevance to add to it.

We could do experiments in this area, by forming a political party or a guerrilla army or whatever, but that’s a long way from our core expertise, and though we would like to experiment with sovereignty, attempting to get sovereignty over the United States to enable our experiments is possibly over-ambitious. We could hope to gain some small share of power, but we believe that a share of power is no good unless it can be consolidated into sovereignty.

Given that we do not have special knowledge of achieving power, it seems reasonable that we should produce theory of how power should be used, and someone better-placed to get power and turn it into sovereignty should run their military coup or whatever, and then take our advice. That’s what we care about, even if cool uniforms would be better for getting chicks.

This is an ambitious project, but I think it is genuinely a feasible route to implementing our principles. Marxism’s successes in the 20th Century didn’t come because its theories were overwhelmingly persuasive; they came because Marxism had theories and nobody else did.

Since then, we have seen Steve Bannon, who apparently has at least read about and understood Moldbug, in a position of significant power in the Trump administration. We have seen Peter Thiel also with some kind of influence, also with at least sympathies towards NRx. These are not achievements in the sense that in themselves they make anything better. But they are experimental validations of the strategy of building a body of theory and waiting for others to consume it.

I have for the last few days been suggesting that Mark Zuckerberg could win the presidency as a moderate technocrat who will save the country from Trump and the Alt-Right Nazis, consolidate power beyond constitutional limits, as FDR did, and reorganise FedGov along the lines of Facebook Inc. This outcome is, frankly, not highly probable, but I insist that it is not absurd. One of the things that controls the possibility of this sort of outcome is whether people in positions of influence think it would be a good thing or a bad thing. If, with our current level of intellectual product we can get to the point of 2017 Bannon, is it not plausible that with much more product, of higher quality, much more widely known and somewhat more respectable, the environment in DC (or London or Paris) could be suitable for this sort of historically unremarkable development to be allowed to happen?

This, presumably, is the strategy the Hestia guys are pursuing with Social Matter and Jacobite, and I think it is the right one. We are at a very early stage, and we have a long way to go before a smooth takeover of the United States would be likely, though in the event of some exceptional crisis or collapse, even our immature ideas might have their day. But we do have experimental feedback of the spread of our ideas to people of intelligence and influence: if we had ten Ross Douthats, and ten Ed Wests, and ten Peter Thiels, discussing the same ideas and putting them into the mainstream, we would have visible progress towards achieving our goals.

Eric Raymond writes a very good post on Natural Rights and morality. The general approach he takes is the same as mine: utilitarianism sounds alright, but actually predicting the consequences of particular actions at particular moments is so damned hard that the only sensible way to do it is to get to a set of rules that seem to produce mainly good outcomes, and then treat them as if they were moral absolutes. Deep down, I know they’re not moral absolutes, but, as in other fields, a convenient assumption is the only way to make the problem tractable.

Like Raymond, I followed those principles to a libertarian conclusion. Well, to be completely honest, it’s more that I used those principles to justify the “natural rights” that I’d previously considered naively to be self-evident.

It’s still a big step. If you start from moral laws, you can always predict roughly where you’re going to end up. Using a consequentialist framework, even one moderated through a rules-system, there’s always a chance that you may change your mind about what set of proposed “moral absolutes” actually work best. That’s what happened to me.

I was particularly struck by a phenomenon where the more deeply and carefully I attacked a question rationally, the more my best answer resembled some traditional, non-rationalist, formulation. That led me to suspect that where my reasoning did not reach a traditionalist conclusion, I just wasn’t reasoning far enough.

That’s not particularly surprising. Ideas evolve. Richard Dawkins made a big deal of the fact that evolutionary success for an idea isn’t the same thing as success for the people who believe it, and while that is a fair point in itself, I do not recall him, at least in the writings of his from the 80s that I read avidly, drawing a parallel with the well-known conclusion, made here by Matt Ridley via Brian Micklethwait, that in the very long run parasites do better by being less harmful to their hosts. By that principle, new religions (parasitic memeplexes) should be treated with fear and suspicion, while old ones are relatively trustworthy. Hmmm.

There are whole other layers to moral philosophy than this one of “selecting” rules. On one hand, utilitarianism is a slippery and problematic thing in the first place, and on the other side, moral rules, whether absolute laws or fake-absolute heuristics, have to be social to be meaningful, so the question of how they become socialised and accepted cannot be completely disentangled from what they should be. I am satisfied with my way of dealing with both these issues, but at the end of the day, I’m not that keen to write about it. When I think I’ve done moral philosophy well, I end up with something close to common sense. When I do it less well, I end up with things catastrophically worse than common sense. I therefore am inclined to rate common sense above philosophy when it comes to morality.

Let me just restate the thought experiment I embarked on this week. I am hypothesising that:

“Human-like” artificial intelligence is bounded in capability

The bound is close to the level of current human intelligence

Feedback is necessary to achieving anything useful with human-like intelligence

Allowing human-like intelligence to act on a system always carries risk to that system

Now remember, when I set out I did admit that AI wasn’t a subject I was up to date on or paid much attention to.

On the other hand, I did mention Robin Hanson in my last post. The thing is, I don’t actually read Hanson regularly: I am aware of his attention to systematic errors in human thinking; I quite often read discussions that refer to his articles on the subject, and sometimes follow links and read them. But I was quite unaware of the amount he has written over the last three years on the subject of AI, specifically “whole brain emulations” or Ems.

More importantly, I did actually read, but had forgotten, “The Betterness Explosion”, a piece of Hanson’s, which is very much in line with my thinking here, as it emphasises that we don’t really know what it means to suggest we should achieve super-human intelligence. I now recall agreeing with this at the time, and although I had forgotten it I suspect it at the very least encouraged my gut-level scepticism towards superhuman AI and the singularity.

In the main, Hanson’s writing on Ems seems to avoid the questions of motivation and integration that I emphasised in part 2. Because the Ems are actual duplicates of human minds, there is no assumption that they will be tools under our control; from the beginning they will be people with whom we will need to negotiate — there is discussion of the viability and morality of their market wages being pushed down to subsistence level.

There is an interesting piece, “Ems Freshly Trained”, which looks at the duplication question; duplication might well be a way round the integration issue (as I wrote in part 1, “it might be as hard to produce and identify an artificial genius as a natural one, but then perhaps we could duplicate it”, and the same might go for an AI which is well-integrated into a particular role).

There is also discussion of cities which consist mainly of computer hardware hosting brains. I have my doubts about that: because of the “feedback” assumption at the top, I don’t think any purpose can be served by intelligences that are entirely isolated from the physical world. Not that they have to be directly acting on the physical world — I do precious little of that myself — but they have to be part of a real-world system and receive feedback from that system. That doesn’t rule out billion-mind data centre cities, but the obstacles to integrating that many minds into a system are severe. As per part 2, I do not think the rate of growth of our systems is limited by the availability of intelligences to integrate into them, since there are so many going spare.

Apart from the Hanson posts, I should also have referred to a post I had read by Half Sigma, on Human Capital. I think that post, and the older one linked from it, make the point well that the most valuable (and most remunerated) humans are those who have been successfully (and expensively) integrated into important systems.

I felt a bit bad writing the last post on artificial intelligence: it’s outside my usual area of writing, and as I’d just admitted, there are a number of other points within my area that I haven’t got round to properly putting in order.

However, the questions raised in the AI post aren’t as far from the debates Anomaly UK routinely deals in as I first thought.

Like the previous post, this falls firmly in the category of “speculations”. I’m concerned with telling a consistent story; I’m not even arguing at this stage that what I’m describing is true of the real world today. I’ll worry about that when the story is complete.

Most obviously, the emphasis on error relates directly to the Robin Hanson area of biases and wrongness in human thinking. It’s not surprising that Aretae jumped straight on it. If my hypothesis is correct, it would mean that Aretae’s category of “monkeybrains”, while of central importance, is very badly named: the problem with our brains is not their ape ancestry but their very purpose: attempting to reach practical conclusions from vastly inadequate data. That is what we do; it is what intelligence is, and the high error rate is not an implementation bug but an essential aspect of the problem.

(I suppose there are real “monkeybrains” issues in that we retain too high an error rate even when there actually is adequate data. But that’s not the normal situation.)

The AI discussion relates to another of Aretae’s primary issues: motivation. Motivation is getting an intelligence to do what it ought to be doing, rather than something pointless or counterproductive. When working with human intelligence, it’s the difficult bit. If artificial intelligence is subject to the problems I have suggested, then properly specifying the goals that the AI is to seek will quite likely also turn out to be the difficult bit.

I’m reminded in a vague way of Daniel Dennett’s writings on meaning and intentionality. Dennett’s argument, if I remember it accurately, is that all “meaning” in human intelligence ultimately derives from the externally-imposed “purpose” of evolutionary survival. Evolutionary successful designs behave as if seeking the goal of producing surviving descendants, and seeking this goal implies seeking sub-goals of feeding, defence, reproduction, etc. etc. etc. In humans, this produces an organ that explicitly/symbolically expresses and manipulates subgoals, but that organ’s ultimate goal is implicit in its construction, and not subject to symbolic manipulation.

The hard problem of motivating a human to do something, then, is the problem of getting their brain to treat that something as a subgoal of its non-explicit ultimate goal.

I wonder (in a very handwavy way) whether building an artificial intelligence might involve the same sort of problem of specifying what the ultimate goal actually is, and making the things we want it to do register properly as subgoals.

The next issue is what an increased supply of intelligence would do to the economy. Though an apostate libertarian, I have continued to hold to the Julian Simon line that “Human inventiveness is the Ultimate Resource”. To doubt that AI will have a revolutionarily beneficial effect is to reject Simon’s claim.

Within this hypothesis, the availability of humanlike (but not superhuman) AI is of only marginal benefit, so Simon is wrong. Then, what is the ultimate resource?

Simon is still closer than his opponents; the ultimate resource (that is the minimum resource as per the law of the minimum) is not raw materials or land. If it is not intelligence per se, it is more the capacity to endure that intelligence within the wider system.

I write conventional business software. What is it I spend my time actually doing? The hard bit certainly isn’t getting the computer to do what I want. With modern programming languages and tools, that’s really easy — once I know what it is I want. There used to be people with the job title “programmer” whose job it was to do that, with separate “analysts” who told them what the computer needed to do, but the programmer was pretty much an obsolete role when I joined the workforce twenty years ago.

Conventional wisdom is that the hard bit is now working out what the computer needs to do — working with users and defining precisely how the computer fits into the wider business process. That certainly is a significant part of my job. But it’s not the hardest or most time-consuming bit.

The biggest part of the job is dealing with errors: testing software before release to try to find them; monitoring it after release to identify them, and repairing the damage they cause. The testing is really hard because the difficult bits of the software interact with multiple outside people and systems, and it’s not possible to fully simulate them. New software can be tested against pale imitations of the real world, and if it’s particularly risky, real users can be reluctantly drafted in to “user acceptance” testing of the software. But all that — simulating the world to test software, having users effectively simulate themselves to test software, and running not-entirely-tested software in the real world with a finger hovering over the kill button — is what takes most of the work.
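The asymmetry described above can be sketched in miniature. The snippet below, using Python’s standard-library mocking, shows a “pale imitation of the real world”: an external system replaced by a stub whose behaviour we script, so only the scenarios we thought to simulate ever get exercised. All the names here (`place_order`, the gateway) are invented for illustration, not taken from any real system.

```python
from unittest.mock import Mock

# The code under test talks to an outside system we cannot fully simulate.
def place_order(gateway, amount):
    response = gateway.charge(amount)
    return "confirmed" if response == "ok" else "failed"

# The "simulation": a stub that only knows what we tell it.
gateway = Mock()
gateway.charge.return_value = "ok"
assert place_order(gateway, 100) == "confirmed"

# A failure path we happened to anticipate. The real system's surprising
# behaviours, the ones that cause production incidents, are exactly the
# ones this stub cannot reproduce.
gateway.charge.return_value = "declined"
assert place_order(gateway, 100) == "failed"
```

The stub passes its tests trivially; the expensive work is everything the stub leaves out.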

This factor is brought out more by the improvements I mentioned in the actual writing of software, but it is by no means new. Fred Brooks wrote in The Mythical Man-Month that if writing a program took n days, integrating it into a system would take 3n days, properly productionising it (so that it would run reliably unsupervised) would take 3n days, and these are cumulative, so that a productionised, integrated version of the program would take something like ten times as long as a stand-alone developer-run version to produce.
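Brooks’s multipliers amount to simple compounding arithmetic, which a few lines make concrete (the function name is mine; the factors are Brooks’s as paraphrased above):

```python
# Back-of-envelope sketch of Brooks's cost multipliers: integrating a
# program into a system and productionising it each roughly triple the
# effort, and the factors compound.
def system_product_effort(n_days, integration=3, productionising=3):
    return n_days * integration * productionising

standalone = 10  # days for a working, developer-run version
print(system_product_effort(standalone))  # 90: something like ten times the original
```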

Adding more intelligences, natural or artificial, to the system is the same sort of problem. Yes, they can add value. But they can do damage also. Testing of them cannot really be done outside the system, it has to be done by the system itself.

If completely independent systems exist, different ideas can be tried out in them. But we don’t want those: we want the benefits of the extra intelligence in our system. A separate “test environment” that doesn’t actually include us is not a very good copy of the “production environment” that does include us.

All this relates to another long-standing issue in our corner of the blogosphere: education, signalling and credentialism. The argument is that the main purpose of higher education is not to improve the abilities of the students, but merely to indicate those students who can first get into and then endure the education system itself. The implication is that there is something very wrong with this. But one way of looking at it is that the major cost is not either producing or preparing intelligent people, but testing and safely integrating them into the system. The signalling in the education system is part of that integration cost.

Back on the Julian Simon question, what that means is that neither population nor raw materials are limiting the growth and advance of civilisation. Rather, civilisation is growing and advancing roughly as fast as it can integrate new members and new ideas. There is no ultimate resource.

It is not an original observation that the things that most hurt our civilisation are self-inflicted. The organisation of mass labour that produced industrialisation also produced the 20th century world wars. The flexible allocation of capital that drove the rapid development of the last quarter century gave us the spectacular misallocations with the results we’re now suffering.

The normal attitude is that these accidents are avoidable; that we can find ways to stop messing up so badly. We can’t. As the external restrictions on our advance recede, we approach the limit where the benefits of increases in the rate of advance are wiped out by more and more damaging mistakes.

Twentieth Century science-fiction writers recognised at least the catastrophic risk aspect of this situation. The concept that the paucity of intelligence in the universe is because it tends to destroy itself is suggested frequently.

SF authors and others emphasised the importance of space travel as a way of diversifying the risk to the species. But even that doesn’t initially provide more than one system into which advances can be integrated; at best it reduces the probability that a catastrophe becomes an extinction event. Even if we did achieve diversity, that wouldn’t help our system to advance faster, unless it encouraged more recklessness — we could take a riskier path, knowing that if we were destroyed other systems could carry on. I’m not sure I want that; it raises the same sort of philosophical questions as duplicating individuals for “backup” purposes. In any case, I don’t think even that recklessness would help: my point is not just that faster development creates catastrophic risk, but that it increases the frequency of more moderate disasters, like the current financial crisis, and so wipes out its own benefits.

An older friend frequently asks me, as a technologist, when computers will have human-like intelligence, and what the social/economic effects of that will be.

I struggle to take the question seriously; AI is something that was dropped as a major research goal around the time I was a student twenty years ago, and it’s not an area I’m well-informed about. As I mentioned in my review of the rebooted “Knight Rider” TV series, a car that could hold up a conversation is a more futuristic idea in 2008 than it was back when David Hasselhoff was doing the driving.

And yet for all that, it’s hard to say what’s really wrong with the layman’s view that since computing power is increasing rapidly, it is an inevitability that whatever the human brain can do in the way of information processing, a computer should be able to do, quite possibly within the next few decades.

But what is “human-like intelligence”? It seems to me that it is not all that different from what the likes of Google search or Siri do: absorb vast amounts of associations between data items, without really being systematic about what the associations mean or selective about their quality, and apply some statistical algorithm to the associations to pick the most relevant.

There must be more to it than that; for one thing, trained humans can sort of do actual proper logic, about a billion times less well than this netbook can, and there’s a lot of effectively hand-built (i.e. specifically evolved) functionality in some selected pattern-recognition areas. But I think the general-purpose associationist mechanism is the most important from the point of view of building artificial intelligence.

If that is true, then a couple of things follow. First, the Google/Siri approach to AI is the correct one, and as it develops we are likely to see it come to achieve something resembling humanlike ability.
But it also suggests that the limitations of human intelligence may not be due to limitations of the human brain, so much as they are due to fundamental limitations in what the association-plus-statistics technique can practically achieve.
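The “associations plus statistics” idea can be sketched as a toy model: absorb co-occurrences indiscriminately, with no judgement about what an association means, and answer queries by picking whatever is most strongly associated with the cue. Everything here, class name and data alike, is invented purely for illustration.

```python
from collections import Counter, defaultdict

class Associator:
    def __init__(self):
        self.assoc = defaultdict(Counter)

    def absorb(self, items):
        # Record every pairwise association seen in one context, without
        # being systematic about meaning or selective about quality.
        for a in items:
            for b in items:
                if a != b:
                    self.assoc[a][b] += 1

    def guess(self, cue):
        # Statistical pick: the item most often seen alongside the cue.
        # Frequently right, sometimes confidently wrong; that is the point.
        ranked = self.assoc[cue].most_common(1)
        return ranked[0][0] if ranked else None

m = Associator()
m.absorb(["rain", "cloud", "umbrella"])
m.absorb(["rain", "cloud"])
m.absorb(["sun", "umbrella"])  # a misleading association
print(m.guess("rain"))  # "cloud": seen twice with rain, versus once for umbrella
```

Note that nothing in the mechanism distinguishes a good association from a spurious one; only the counts decide, which is exactly why the error rate is built in.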

Humans can reach conclusions that no logic-based intelligence can get close to, but humans get a lot of stuff wrong nearly all the time. Google Search can do some very impressive things, but it also gets a lot of stuff wrong. That might not change, however much the technology improves.

There are good reasons to suspect that human intelligence is very close to being as good as it can get.
One is that thinking about things longer doesn’t reliably produce better conclusions. That is the point of Malcolm Gladwell’s “Blink” (as far as I understand it; I take Gladwell to be the champion of what Neal Stephenson called “those American books where once you’ve heard the title you don’t even need to read it”).

The next, related, reason is that human intelligence doesn’t scale out very well; having more people think about a problem doesn’t reliably give better answers than having just one do it.

Finally, the fact that, in spite of evolutionary pressure, there is enormous variation in the practical usefulness of human intelligences suggests that making it better is not simply a case of improving the design. If the variation were down to different design, then the better designs would have driven out the worse ones long ago. I think it is far more to do with circumstances, and with the fundamental difficulty of identifying the correct problems to solve.

The major limitation on conventional computing is that it can only do so much per second; only render so many triangles, only price so many positions or simulate so many grid cells. Improving the speed and density of the hardware is pushing back that major limitation.

The major limitation on human intelligence, particularly when it is augmented with computers as it generally is now, is how much it is wrong. Being faster or bigger doesn’t push back the major limitation unless it can make the intelligence wrong less often, and I don’t think it would.

What I’m saying is that the major cost of human intelligence is not in the scarce resources required to execute the decision-making, but the damage caused by all the bad decisions that humans make.

The major real-world expense in obtaining high-quality human decision-makers is identifying which of the massive surplus available are actually any good. Being able to supply vastly bigger numbers of AI candidates would not drive that cost down.

Even the specialisms that humans have might be limited more by the cost they impose on the quality of general decision-making than by the cost of actually implementing the capability.

If that’s the situation, then throwing more computing resources at AI-type activity might not change things that much: computers can be as intelligent as humans, but not more intelligent. That’s not nothing, of course: it opens the door to replacing a lot of human activity with automated activity, with all the economic effects that implies.

There will be limitations in application because if human-like intelligence really is what I think it is, then the goals being sought by an AI are necessarily as vague as everything else: they will be clumps of associations, and the “intelligence” will just do the things that are associated with the goal clump. We won’t be able to “program” it the way we program a logic-based system, just kind of point it in the right direction in the same way we do when we type something into a Google search box.

I don’t know if what I’ve put here is new: I think the view of what the major issue in intelligence is is fairly widespread (“associationism”?), but in all previous discussions I’ve seen or participated in, there’s been an assumption that if in x years from now we will have artificial human-like intelligence, then in 2x years from now, or probably much less, we will have amazing superhuman artificial intelligence. That is what I am now doubting.

With intelligences available “in the lab” we might be able to prepare and direct them more effectively than we do now. But even that’s not obviously helpful: with human education, again, the limitation is not so much how long it takes and how much work it is, but how sure we are it is actually doing any good at all. We may be able to give an artificial intelligence the equivalent of a hundred years of university education, but is a person with that experience really going to make better decisions? The things we humans work hardest at learning and doing, accumulating raw information and reasoning logically, are the things that computers are already much better than us at. The things that only humans can do are the things we simply don’t know how to do better, even if we were to re-implement them on an electronic platform, speeded up, scaled up, scaled out.

Note that all the above is the product of making statistical guesses using masses of ill-understood unreliable associations, and is very likely to be wrong.

I’ve never been able to understand authority as anything other than thugs with bigger sticks

Well, sure. That goes without saying. But thugs with bigger sticks are a fact of life, unless you set out yourself to be the biggest thug of all. Which, despite his having “chosen reason over authority”, does not seem to have been Aretae’s plan (I’m not sure exactly how to go about it, but I doubt it would leave much time for cookery).

This is a step back from our previous discussion, because it’s not about formalism versus democracy, or monarchy versus neocameralism, it’s about law versus anarchy.

The metaphor I would prefer, though, is not a “step back”, but a step down. Morality, or “Right Conduct”, like system architectures, has layers*.

The base layer is absolute imperatives. These pretty much have to be supernatural, or else non-existent. Aretae believes that nobody can give him an order that he absolutely must obey. I agree. At that layer, I am an anarchist:

There is no God but Man. Man has the right to live by his own law… blah blah blah… Man has the right to kill those who would thwart those rights.

Having deified my own reason and my own appetites above all alleged authority, I can now follow them to get what I want.

The technology risk/governance types in a large organisation come up with rules about what a programmer at the coalface is allowed to do to the company’s precious systems. They frequently come up with rules for application code, and rules for configuration. If they’re not careful, or not expert, they end up with definitions that either classify Java bytecode as configuration for the JVM, or else classify users’ spreadsheets as application code. Code and configuration really aren’t different things, they’re just different layers. They smell the same.
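The blur between the layers can be made concrete with a small sketch. Each layer below treats the layer above it as “configuration”, yet the topmost “configuration” is plainly making decisions the way code does. All names here are invented for the example.

```python
# Layer 1: plain data. Clearly configuration by anyone's definition.
settings = {"retry_limit": 3, "timeout_seconds": 30}

# Layer 2: a spreadsheet formula. Data to the file system that stores it,
# but executable content to the spreadsheet engine that evaluates it.
formula = "=A1*1.2"

# Layer 3: a rule engine whose "configuration" is a predicate,
# indistinguishable from application code except by which layer runs it.
rules = [lambda order: order["value"] > 1000]

def needs_review(order):
    # The engine is fixed "code"; the behaviour lives in the "config".
    return any(rule(order) for rule in rules)

print(needs_review({"value": 1500}))  # True: the "configuration" made the decision
print(needs_review({"value": 10}))    # False
```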

If Aretae starts to construct rules of thumb for how to act by his own reason for his own appetites, those rules will smell a lot like morality. They may not actually be ultimate imperatives that he has to obey, but then Java bytecode isn’t actually machine instructions that are executed by a CPU.

So when I argue for authority, I do so not on the basis of ultimate morality, but on the basis of what works better for me. I don’t shy away from the words, however, because of the remarkable resemblance between what I reason to be the most utilitarian form of government, and what was once believed to have been imposed by supernatural forces. It is too close to be coincidental — I think for most people, they would be better off accepting the old morality and getting on with their cooking.

Further, the “no authority” attitude is not antithetical to formalism. The real opponents of formalism are those who do believe that some forms of government have an ultimate moral legitimacy that others lack. Aretae and I believe that all governments are ultimately “thugs with bigger sticks”, and the argument is not about which has more moral authority, but about which works better for us. That argument of course remains unresolved, but that’s because TSID, not because of different fundamental assumptions.

Gray fleshes out some detail of what he wrote about in Black Mass, which I discussed here in 2008. I was put off Black Mass by what I thought was excessive generalisation, a misplaced attempt to force a grand unifying thesis on events.

Dealing with a specific, that problem does not arise, and I have little to quarrel with here.

The first point is the recency of the dominance of the idea of primary universal human rights — Moyn dates the idea to the late 1970s, and Gray blames it on John Rawls. He identifies the key flaw in Rawls’ theory, which is that it simply takes for granted a state structure that cares somehow for the well-being of its subjects in a fairly broad way, and only suggests how such a state should best define and pursue that aim. How such a historically unlikely state can come to exist and be preserved is not addressed.

(That is also, of course, the flaw that has separated me from most forms of libertarianism, which is an alternative —indeed superior— program resting on the same unwarranted assumptions*).

Best quote: “But if human rights are artifacts that have been constructed in specific circumstances, as I would argue, history is all-important; and history tells us that when authoritarian regimes are suddenly swept aside, the result is often anarchy or a new form of tyranny—and quite often a mix of the two.” Human rights as artifacts echoes what David Friedman and I have said; the anarchy and tyranny following revolution is just what I was talking about in the context of the Nobel Peace Prize. The neatness is slightly marred by the use of that unfortunate word “authoritarian” again — here it seems to mean “anything other than modern liberal democracy”, which is at least less mysterious than Assange’s version.

The review also serves as an example of Mencius Moldbug’s claim, that the common assumptions of today are the Harvard ideas of two generations ago.

Obviously the claim of inherent human rights is not entirely new — I vaguely recollect some mention of “all men … unalienable rights” in an old document of some kind. What is new, according to Moyn and Gray is the moral primacy of human rights; not endowed by a creator but independent, the starting point of a moral system.

Gray’s piece also contains what could be seen as a response to my criticism of Black Mass; he constrains what he calls “utopian” projects to those where it can be known in advance that their central objectives cannot be realized. The question of what can be realized and what cannot is, of course, usually the centre of political controversy to start with. “Politicians make promises they can’t keep” — there’s a shocking new idea.

*This is unfair to some libertarians, including David Friedman. Separate post to follow.

Rights in human societies, including modern ones, are based on the same pattern as territorial behaviour in animals, or enforcement via feud and the threat of feud, even if less obviously so. Each individual has a view of his entitlements and is willing to bear unreasonably large costs in defence of them.

Politics and morality can become mired in ill-understood abstractions, so I’m re-evaluating my ideas in more concrete terms: what should be done? What should I do, and what should we do, for any value of “we” that I can get a sensible answer with?

The two questions are separate. Taking the first, what should I do? Considering myself in isolation, there have been two coherent answers to that question: one is “whatever God says”, and the other is “whatever I like”.

I prefer the second of those, but it can use some refinement. Doing what I want now could cause me problems in the future; I need to anticipate, and delay gratification to gain more in the long run.

There is a more subtle refinement too: I am not detached from the world; I can change the world, and in the process change myself. It can be easier to manage myself to be satisfied with what is, than to manage the world to satisfy myself. Dispassion is part of the mix as well.

But that’s all viewing one person in isolation – an unrealistic approach. Humans are social, and need to form groups to succeed. As well as pursuing my personal goals, I need to gain the cooperation of my neighbours. How to do that is the larger part of what is normally thought of as the sphere of morality.

The most obvious fact is that the answer varies. What will win me cooperation in one society will have me shunned in another; what works in one century (decade, sometimes) fails in the next.

All we can say is that it is necessary for me to conform to the collective expectations of the other people I interact with – to fulfill my designated role in whatever society I belong to.

For that to make sense, I have to know what my society is. In theory that’s difficult: it’s some group of people who interact with me and share expectations of each other’s behaviour. In practice, it’s usually easier to identify, but not always. I’ll come back to that.

As in the individual case, that is not the end of the story. Some societies allow their members to achieve their goals more effectively than others do. Societies change, as individuals do, and they can fail or be replaced. We can say that each person should do what is required of them by their society, and still say that one society is better than another. It might be better in that it is more useful to its members, or it might be better in a different sense in that it is less vulnerable to shocks, more able to grow in reach and strength.

These judgments on societies matter, because, while seeking our own goals and conforming to our place in society, we still may have some power to direct society in a given direction. If we have a vision of a good society, we can aim to change our society for the better.

One practical aside – the aims of improving society and being a good member of it can come into conflict, and attempts to resolve those two competing priorities are often at the centre of drama and history. Froude’s Times of Erasmus and Luther contrasts Erasmus’s desire to be a good citizen of Christendom with Luther’s defiance of his allotted role in the cause of improving Christendom. In this case Froude comes down on the side of Luther, but the question is more important than the answer.

There’s an important point missing: We can talk about what makes a society good or bad, and how a member of society can attempt to change it, but ultimately my aim is to advance my own interests, and that might be most effectively done by changing society in a way that is not better either for the society in its own right or for its members generally.

It seems reasonable to say that societies will do better, for themselves and their members, if they somehow prevent this from happening to any significant degree. That’s not a theorem – conceivably an arrangement that permits it may bring compensating benefits that outweigh the damage sustained – but they’d better be very substantial benefits.

I’m trying to keep separate two different ways in which a society can be good – it can be good for its members, or it can be good for itself, seen as a metaphorical organism: able to survive, adapt and improve. Inasmuch as a society is a way for its members to better their own lot, the first good is primary, and the second only significant in that it supports the first.

There are a few different forms that can exist to prevent a society being wrecked by selfish interests. (Again, there are two quite distinct ways of being wrecked: the society can be weakened to the degree that it is replaced, either from without or within, by a different society, or else it can remain secure, but provide less value to its members). The first defence is rigidity. If the society is very resistant to any change at all, then it is resistant to wrecking. The problem is it is unable to develop, and unable to react to changing circumstances. Some societies in the past have been successful for their members by being stable, but the rapid changes in the world and in the capabilities of people over the past few centuries have swept all of them away.

To safely accommodate flexibility, a society must preferentially encourage its members to change it in ways that benefit the society and its members.

There is a three-way trade-off: my interests, the interests of my neighbours, and the interests of the organism of society. We rely on society to allow the first trade-off, between each other, to be resolved in an efficient and non-destructive way. The second trade-off, between a society and its members, is more difficult.

Nothing I’ve written here is new. Never mind Carlyle and Froude, quite a lot of it can be found in Aristotle. However, it’s not a set of ideas that I’ve put together before, and includes things that I explicitly rejected when I was young and arrogant.

Also, it’s not a set of ideas that provides easy answers to difficult questions. That’s always a good sanity check[1]. If your calculations show you can build a perpetual motion machine, or solve NP-complete problems in linear time, you’ve probably made a mistake. This framework doesn’t usually answer difficult questions, but it at least tells you why they’re difficult.

I promised to write about patriotism, and now I have set up the scenery. Froude’s comment[2] on a “distinguished philosopher” seems anti-rational; and so it is, but I am prepared to be persuaded to it.

The problem that society solves is how to cooperate with my neighbours; how to achieve more together than we could in conflict, or even more than we could independently. We cannot do this without some framework that enables us to match expectations, and that framework needs to be stable enough for us to move with confidence from one interaction to the next.

The framework can be changed, for the better or the worse. As well as enabling our cooperation, therefore, it needs to be such that I can be assured of continuing to benefit from it in future. The future, though, is uncertain, and it is hard for me to know that circumstances will not arise where my neighbour can gain by destroying the assurances that I have relied on. This is the second trade-off above, between the members of the society and the society itself. The society exists for its members, but we need to maintain it too.

There is a smaller-scale, easier parallel to this situation, which I wrote about before. When two people become a family, each is threatened by the possibility that the other will destroy or abandon what has been created. Reassurance is at hand, however, through the irrational attachments that people in that position have been bred to form towards each other, which discourages them from breaking the bonds even if it becomes objectively convenient for them to do so. The irrationality is an advantage to the individual, as it enables him to make somewhat binding commitments in the absence of any external enforcement mechanism, and thereby reach more advantageous social arrangements.

My neighbours’ love of our country is what enables me to tolerate their freedom, as my wife’s love is what enables me to tolerate hers.

It is a threat to the trade-offs if the society can be changed by individuals who are not dependent on it either practically or emotionally. That is why it is important to know who is in and who is out. This is often looked on as some kind of prehistoric handicap, but it is not.

I’ve been talking about “societies”, not countries, so I have not yet closed the loop to say anything about patriotism. I admitted above that we need to identify which individuals are the ones we care about, from the point of view of succeeding personally by fulfilling our expected role in society. There are two answers, on two levels. First, those whom we expect to interact with in future. Second, those who can change the expectations that we have towards the first group, and that the first group has towards us. If someone will be dealing socially with me, I need him to be within the social framework. If someone can affect the social framework itself, I want him to be constrained not to damage its effectiveness or longevity.

That’s still, on the face of it, rather imprecise. However, for most people, through most of history, it’s been very easy to work out. There’s a good reason for that: if you don’t know who is in your society and who isn’t, you are in a lot of trouble – at least your society is, and that means that, in the long run, you are too. With personal love comes jealousy, and with the patriotism that gives a society its longevity comes a certain chauvinism. That’s a necessary feature, not a bug. If someone isn’t a member of your society, they need to be kept away from it, or at least made powerless over it, lest they damage it.

Tribes work as societies on that basis. We had nation-states for a few centuries, and they worked too, more or less. Now we do not have a society where it is clear who is in and who is out, and where the members are bound to preserve and improve it. We have many compensations, and I haven’t proved we’re worse off on net, but I’ve at least shown how we could be, how, other things being equal, patriotism is a virtue.

In the end, we may go back to tribes, or as John Robb has it, to some new kind of tribe.

Footnotes:

[1] “My own conviction with respect to all great social and religious convulsions is the extremely commonplace one that much is to be said on both sides” – Froude, The Influence of the Reformation on the Scottish Character

[2] “I once asked a distinguished philosopher what he thought of patriotism. He said he thought it was a compound of vanity and superstition; a bad kind of prejudice, which would die out with the growth of reason. My friend believed in the progress of humanity–he could not narrow his sympathies to so small a thing as his own country. I could but say to myself, ‘Thank God, then, we are not yet a nation of philosophers.’

“A man who takes up with philosophy like that, may write fine books, and review articles and such like, but at the bottom of him he is a poor caitiff, and there is no more to be said about him.” – Froude, Times of Erasmus and Luther

I’ve been on holiday for a couple of weeks, and I expected to write quite a lot here in that time.
The reason I didn’t is that my political thinking has pretty much come to a conclusion. I don’t like it at all, but it’s a conclusion for all that.

When Adam Smith was writing, there were many theories, public and private, about what a business ought to do. Smith pointed out, [drawing from Darwin and Malthus] (edit, yes I really wrote that, oops), that whatever theory they believed, the businesses that survived would be those which aimed at maximising profit, or those that, by coincidence, behaved as if that was what they aimed at.
The situation in politics is that, while there are many theories about what politicians should do, the politicians who succeed will be those who behave as if their aim were to achieve power at any cost. Perhaps historically many politicians had other aims, and the successful ones were those who happened to act as a pure power-seeker would, but now there is sufficient understanding of what path will gain and hold power that those who diverge least from that path will be those who win.
To be clear, I’m not simply talking about electoral politics here. I’m talking about all politics, in non-democratic systems, in the electoral process, and in the wider and more important politics beyond elections, where power lives in media, civil service, educational, trade union and other centres outside the formal government.
The trivial fact – that power will go to those that want it – is reinforced by the fact that pure power-seekers can co-operate more effectively than ideologues can. A large number of power-seekers, although rivals, will co-operate on the basis of exchanges of power. The result is a market in power, and that is the most effective basis for large-scale collective action. Those attempting to achieve specific, different but related aims will find it much more difficult to organise and co-operate on the same scale.

Is it not possible, then, to have significant influence, not by competing directly with politicians, but by competing with the media/educational branches of the establishment through promoting ideas? The metacontext, as the folks at Samizdata say. It is indeed possible to influence politics by doing that, and that is what libertarians have done for the last half century or so. But I’m not sure it’s possible to have good influence. Certainly some good things have happened because libertarians have changed the metacontext to the point where the things have appealed to power-seekers. But some bad things have happened that way too. The fact is that while the “background” beliefs of the electorate and other participants in politics do have an effect, there is no reason to assume that correct background beliefs cause better policies than incorrect background beliefs.
One of the most depressing aspects of activism is that on the very few occasions when you get someone onto your side, either by persuading them or just finding them, more often than not they’re still wrong. They’re persuaded by bad arguments rather than good arguments. Activism would appeal to me if I could believe that I would win out in the end because my arguments are good, but in fact not only do my good arguments not win against my opponents’ bad arguments, my good arguments do not even win against my allies’ bad arguments. The idea that truth is a secret weapon that is destined to win out once assorted exceptional obstacles have been overcome is an utter fantasy.
As a result, even if you do achieve marginal influence by working for policies or ideas that would be widely beneficial, your success is likely to backfire. The other players in the game are working for the narrow interest of identifiable groups and, as such, are able to mobilise far greater resources. They also are willing to trade with other power-seekers, which improves their effectiveness further. The idealist is not able to do that, because the idealist obtains only the particular powers he wants to keep, whereas the politician grabs whatever power he can, even if it is of no use to him, and trades away whatever he cannot use. The only way to trade like that is to get whatever power you can, which is my definition of a politician.
It still feels like there is something noble in working for better government, even if the project appears doomed. But there isn’t. After all, most utopians from anarchist to fascist to Marxist are working for better government, but we oppose them because their utopias are unachievable and their attempts to get there are harmful. Your ideas don’t work because they’re flawed, my ideas don’t work because politics is flawed. Hmmm. Why are my ideas better than yours, again?
And that is the final straw. In truth, I have never been an activist. I have neither appetite nor aptitude for practical politics, which after all is basically a people business, but I used to believe it was interesting to look in isolation at the question of what those with political power ought to do with it, so as to make the government as good as possible, in a vaguely utilitarian way. What brings my political efforts to an end is the realisation that that is meaningless. A political theory based on the assumption that a government will act in the general interest once it understands how to do so is as useful as a theory based on the assumption that the world is flat and carried by elephants. Politics has given me some entertainment over the years, but not as much as Terry Pratchett has.
If I am going to assume that governments work in the general interest, once they understand how to do it, I might just as well assume that industrialists work in the general interest, in which case all my clever arguments about the value of private property rights for resolving opposing private interests are completely irrelevant.
It’s amusing that of all the posts on this blog, one of the most important turns out to be one that I thought at the time was unimportant: this one, originally driven by my musings on Newcomb’s Paradox.
Almost all significant propositions are, implicitly or explicitly, of the form IF {some hypothetical state of the world} THEN {something will result}. In politics, the hypothetical frequently involves some person making some decision. The proposition therefore needs to take into account whatever is necessary for that person to actually make that decision – and the other effects of those necessary conditions may well be more significant than the stated result.
I came very close to making all the connections back then, even raising the significance of my facetious “if I were Führer” form of putting political propositions. I am not Führer, and never will be, and neither will anyone like me, and all my political logic collapses on that just like any other proof premised on a falsehood.
Where does that leave me? I am no longer a libertarian – I find libertarian arguments just as correct as I always did, but they are of no relevance to the real world. I could continue to comment here on the stupidities that people accept from various politicians, but I would be doing it in the same spirit as if I were judging the team selection of a football club – in full awareness of my own impotence and irrelevance. Maybe I will. It would make more sense to take up something useful, like gardening.
I can also attempt to benefit humanity by encouraging others to detach from politics as I am doing. Someone has to have power, and if you think you can get it and you would be good at it, by all means go for it. If not, then leave well alone. Be one of the ruled, and pursue whatever aims you choose without the illusion that you have the right, the duty or the capability to change the policies of the rulers. Embrace passivism.