Magical Foundations of Agreement

We’ve looked at the banality of problems that plague execution, the limitations of the perfect worlds of strategy, and the deep, often conflicting, hierarchies that underlie action. We’ve seen how difficult it is to create, communicate, and scale coherent action hierarchies. But these might still seem like practical challenges that can be overcome if we could just make our models accurate enough, our meta-capabilities developed enough, our goals clear enough, our language explicit enough, and our incentives powerful enough. Maybe not today, but in the future. Maybe if we could gather sufficient empirical evidence, develop direct brain-to-brain communication, build an optimal planning computer, or otherwise improve our capabilities, we could converge to solutions that are fair, stable, effective, and objectively correct.

It is common to blame lack of progress towards convergence on bad faith or incompetence. It is obvious how power, money, fame, or nepotism incentivize defense of the status quo. It is easy to find examples of hypocrisy, immorality, laziness, or unfairness. It is tempting to reach a comforting, but wrong, conclusion that perverse incentives, bad people, and flawed systems are the reason we don’t move towards obvious solutions.

There are, of course, massive practical imperfections that can be improved. But underneath all the misunderstandings, inefficiencies, bad faith, and practical challenges there are legitimate disagreements that have been engaged in good faith by smart, conscientious, and resourceful people for millennia; disagreements that have proven impossible to conclusively resolve even with the purest of intentions and even in the most idealized of thought experiments. Underneath the status quo are concessions – not just to the practical difficulty, but to the theoretical impossibility of complete agreement.

The search for universal agreement is much like the search for perpetual motion machines. Success would allow us to replace trade-offs with clean solutions, to transcend the ugly reality of compromises, limitations, and angst, to reach utopia. It would profoundly change the world and it often seems so deceptively close: just one lever or argument away. But approaches that depend on universal agreement are non-starters in the same way that perpetual motion machines are. And as with perpetual motion machines, people will continue to spend their lives on the impossible because they confuse it with the merely difficult.

The flaws in execution and strategy dominate our attention because they are readily apparent. But it is the limitations of models, meta-capabilities, and goals that set theoretical limits on the level of convergence that can be achieved.

Models Rest On Axioms, Not Truths

If you expose yourself to a sufficient number of people and arguments, you’ll experience an unsettling realization that apparently solid and universal truths aren’t obvious to, or shared by, everyone. You optimistically try to understand the actions of another, or confidently try to justify your own, when it finally hits you that they don’t share your premises. The easiest way to viscerally experience this is in matters of faith, morality, or tradition.

The simplest, and most common, response is to deny that the objectivity of our premises has been questioned: instead, we assume that others misunderstand or don’t act in good faith. The alternative is to consider that one or both of us may be acting from unexamined assumptions: that magic made complex abstractions seem like obvious truths. We can then unpack these abstractions, trace them to their fundamental truths, and attempt to agree on those.

We scarcely consider questioning whether truth is accessible or whether accurate models are beneficial. We’ve experienced the desirability of such premises so often that much of our functioning depends on them. The quest for universal agreement rests on the belief that there are models that are true, accessible, and shareable; and that our failure to discover and converge on such models is therefore a failure of execution.

But as we unpack the unexamined assumptions that we previously thought were truths, we find that the more basic truths they rest on are themselves unexamined assumptions protected by magic. As we unweave our models, we find each solid fact morphing into a vague premise that requires clarification and defense on an ever more fundamental basis.

If we conscientiously continue to unwrap the magic in our models, we’ll eventually reach foundations rooted in biology and inductive learning: foundations that are themselves magical abstractions of complex, low-level, and obscure processes.

Even assumptions as basic as the objectivity of sensory input rest on questionable premises. We aren’t privy to raw sensory input: what we see is the heavily processed output of our brains’ models, which themselves rest on biological and learned axioms, with all the biases and inaccuracies this leads to. Even the inputs into these models don’t offer clean, raw information because they are provided by biological sensors, which evolved for more pragmatic purposes.

What is ultimately underneath models aren’t facts, but axioms: premises that we take for granted and do not defend. Because the choice of fundamental axioms depends on the specifics of our genetics and experiences, it is not possible to escape their variability and thus the subjectivity and plurality of models that are built on them.

No matter how deeply you dig, you will always stand on assumptions. This doesn’t prove that objective facts do not exist, but it does mean that perception and interpretation of such facts will be colored by axioms and thus remain open to disagreement.

Axioms Bootstrap Meta-Capabilities

How are we to resolve such disagreements? Or, even if we could converge on a set of objective truths, how are we to turn them into models, strategies, and actions? How are we to communicate and adjust them? Indeed, how is the process of unpacking our models, which we took for granted in the last section, even to take place?

The quest for universal agreement assumes that true conclusions flow objectively and unambiguously from true premises, that the methods we use to evaluate, infer, communicate, and convince are universally valid and accepted. If a chain of deductions fails to produce true conclusions, then there must be a mistake in execution: the facts were wrong or the deductions were flawed.

The methods that allow us to arbitrate between models, proceed from premises to conclusions, make sense of evidence and arguments, and do just about every other information processing task are the essence of our meta-capabilities. They include ways to evaluate coherence, such as the rules of logic; to test theories, such as the scientific method; to communicate, such as language; and to perform numerous other fundamental tasks.

Furthermore, there is a long history of attempts to reach agreement on topics like these that spans thousands of years and is the essence of philosophy. The only constant of this effort is an ever-expanding tree of axioms, schools, and methods that drives home how much there is for us to disagree on, even with the highest intelligence and the best of intentions.

If these disagreements seem like academic abstractions that belabor the obvious, it is probably because you still believe that your premises are universally true and disagreements are a matter of ignorance or bad faith. Even pluralistic ideas like tolerance, markets, and democracy are still premises that claim universality; there is no agreement on their meaning, correctness, or implications. It is precisely because magical truths appear too obvious to require debate that philosophy seems unnecessary to so many people.

On the other hand, if you’ve spent time thinking about such topics, then you may be itching to defend your conclusions. But my goal here isn’t to argue about the merits of specific models; it’s only to show that universal agreement isn’t possible. For this purpose, it is sufficient that there are defensible reasons to hold a variety of premises and that they are all indeed held by intelligent, moral, knowledgeable people.

While the existence of subjectivity in facts is debilitating, the existence of subjectivity in methods is catastrophic: not only do we need these methods to decide between, and proceed from, axioms, but they are the only thing we have to decide between the methods themselves. To bootstrap abilities such as evaluation and inference, we need to accept axioms that can only be justified with circular logic.

There are no positions that are free of axioms: axioms that are ultimately backed by God or induction or rationalistic theories or social norms or other acts of faith – whatever we name them. There is no indisputable way to select between them and thus no way to prevent disagreement on even the most basic foundations; and without solid foundations there is no hope for agreement on positions that depend on them.

Building on Magical Foundations

Just because all positions ultimately rest on faith doesn’t make them all equal. Some axioms are more widely shared or more practically desirable than others. Some models are simpler or better supported by empirical evidence than others. Some positions are more coherent or more effective than others. And some types of faith are more magical than others.

It is possible to choose between positions despite their foggy foundations. We can certainly evaluate hierarchies within their own framework of axioms and methods: most can be improved or discarded on their own terms. We can rank positions by a combination of criteria, such as effectiveness, popularity, beauty, or internal coherence, instead of relying on absolute truth as the sole arbiter. We can even simply agree to treat a set of basic axioms and methods as true. We just can’t hope to agree universally.

Even with such an agreement, the challenge of communication and implementation of any non-trivial hierarchy is insurmountable. Because we must act in the world as it actually exists, we run into a bootstrapping problem: we need to define language, methods, and axioms; eliminate cognitive biases, social traditions, and economic disincentives; and generate knowledge and capabilities, but the development of each of them depends on already having the others in place.

But how could such an agreement be reached? As we’ve now seen, the set of axioms, methods, and choices that underlie positions is extraordinarily large and interconnected. To reach agreement, the entire set must be accepted instantaneously and simultaneously; otherwise, the act of defending, communicating, and agreeing will itself create variations.

But complexity forces us to define and understand this set with reductionist methods. People spend their entire lives studying tiny parts deep within a hierarchy. Few of them reach complete agreement on even that tiny scale; those that do feel like they’ve found a soulmate. These independently developed pieces cannot be merged without conflict, even when they are all agreed to in isolation.

It is incredibly improbable for even a few people to share an entire set of axioms, methods, and choices by chance. And it is even more improbable to reach complete agreement on such a set, no matter what selection criteria we choose. Improbable between a few people, and impossible between everyone.

But let’s suppose that we somehow get around the bootstrapping problem: we agree on a hierarchy and have a way to propagate it instantaneously and universally, say with a Matrix-style technological and authoritarian ability to upload knowledge into human minds. Even then, universal agreement would remain out of reach because of variability in individual axioms and capabilities.

However impressive the human ability to learn, it is not infinite: somewhere at the bottom there are axioms that are genetically hard-wired or developed from experiences prior to the possibility of knowledge upload. Our ability to learn, accept, and act from taught axioms partially depends on these unalterably individual foundations.

Similarly, we all have different capabilities. The types of axioms we can understand, and how well, and the types of methods we can use, and how well, depend in part on these capabilities.

To reduce these limitations we would need to reduce human variability. Even without specifically considering morality or second order effects, to do this would be to lay society on a Procrustean bed: we’d make ourselves extremely fragile merely to enable implementation of an arbitrarily chosen hierarchy. And our agreement would still drift apart as each individual’s hierarchy morphs to the specifics of their experiences.

What enables robust, real-world agreement – despite individual variability and the impossibility of bootstrapping a massive set of interdependent axioms – is trust. Individuals adopt axioms and methods even when they did not develop them, cannot completely understand them, and can see that there are defensible, even desirable, alternatives. But such a high level of influence on the part of the trusted is ripe for abuse and such a high level of obedience on the part of the trusters is ripe for dissent.

Dealing with abuse and dissent – and even just communicating agreement across a variety of human capabilities in the first place – requires choosing between effectiveness (which increases the chance of success) and coherence (which increases trust if successful).

Real-world groups do ultimately unite around sets of axioms. But their agreement tends to be loose and implicit – in other words, magical – as well as local. Although these agreements are theoretically arbitrary, constraints of pragmatism and consistency severely limit the number of arrangements that are workable in practice. They come about organically through resolution of challenges, creation of commitments, and development of trust. Attempts to make them explicit or universal expose the magic that holds them together and disintegrate them.

Incompatibility of Goals

But agreement on models and meta-capabilities, as impossible as it is, still isn’t enough for lasting agreement on strategies: our goals, axioms, and constraints have fundamental, theoretical incompatibilities which create tensions.

While some resolve such tensions with a permanent choice between conflicting goods, most acknowledge that they are all worth keeping: the question is how to trade off between them.

One possibility is to order them so that we can pursue lower-ranked objectives at least until they conflict with higher-ranked ones. An improvement on that is a weighted ranking, which allows lower-ranked goods to exert influence, and even to overwhelm, when present in sufficient quantities.

A further improvement is to acknowledge that the weights can be interdependent and situational, which makes any static ordering insufficient: we need a dynamic weighting function. And because such a function is going to have to interpret imperfect information and extrapolate with imperfect models, it might not be deterministic.
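The progression from a strict ordering, to a static weighted ranking, to a dynamic weighting function can be sketched in a few lines of code. Everything here – the goods, the scores, the weights, and the “crisis” rule – is invented purely for illustration; the point is only how the three mechanisms differ:

```python
# Three ways to trade off conflicting goods. Every good, score, and
# weight below is invented purely for illustration.

options = {
    "policy_a": {"liberty": 0.9, "equality": 0.3, "security": 0.2},
    "policy_b": {"liberty": 0.4, "equality": 0.6, "security": 0.9},
}

# 1. Strict ordering: lower-ranked goods matter only as tie-breakers.
def lexicographic(options, priority):
    return max(options, key=lambda o: tuple(options[o][g] for g in priority))

# 2. Static weighted ranking: a lower-ranked good can overwhelm a
#    higher-ranked one when present in sufficient quantity.
def weighted(scores, weights):
    return sum(weights[g] * s for g, s in scores.items())

# 3. Dynamic weighting: the weights themselves depend on the situation,
#    so no static ordering or fixed formula reproduces the ranking.
def dynamic(scores, context):
    weights = {"liberty": 1.0, "equality": 0.8,
               "security": 2.0 if context.get("crisis") else 0.2}
    return weighted(scores, weights)

# The same options rank differently as the method and context change.
print(lexicographic(options, ["liberty", "equality", "security"]))        # policy_a
print(max(options, key=lambda o: dynamic(options[o], {})))                # policy_a
print(max(options, key=lambda o: dynamic(options[o], {"crisis": True})))  # policy_b
```

Note that the third mechanism cannot be collapsed into the first two: because the weights move with the context, the same pair of options can rank in opposite orders on different days, which is exactly why a static ordering is insufficient.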

A ranking function formalizes a way to trade off conflicting goods, but it doesn’t resolve disagreement. Any specific weighting, formula, or input will be contentious.

Take moral philosophy as an example. For much of history, its essence has been to define and rank the virtues. It has produced a multitude of answers, from the three main branches of normative ethics to the vast swaths of meta-ethics, but no definitive answer. A ranking function is a model that is built on axioms and thus remains open to disagreement.

It may seem that if we share a set of fundamental axioms and methods we should naturally converge to the same ranking function, but, while an agreement on such foundations is very helpful, it remains insufficient.

The tensions that a ranking function is to resolve are not reducible to our axioms; they are between them and thus require new axioms for arbitration. Which new axioms we accept depends on individual variability, path-dependence of experience, and outcomes of ranking functions we experiment with, which are frequently probabilistic.

Even a solved bootstrapping problem does not give us a deterministic path to new axioms; to create a ranking function we need to again face the challenges of model and method creation.

One alternative to ranking, or rather a special case of it, is to take the idea of incompatibility further into incommensurability: to accept that some of the conflicting goods cannot be compared and hence cannot be ranked.

Incommensurability has been a contentious concept both in the philosophy of values (where it roughly means that some values cannot be ranked) and in the philosophy of science (where it roughly means that sets of scientific theories – like separate coherent action hierarchies – cannot fully understand each other).

My attempt to argue the impossibility of universal agreement rests, in part, on incommensurability in both of those senses. But while incommensurability is a good, one-word compression of the problem, it is not a convincing answer.

Measuring the Incommensurate

There are no universally indisputable ways to rank. Even when the scope is restricted, reasonable systems frequently lack objective, deterministic, comprehensive, or unique ranking functions. And theoretically justifiable formulas often resist practical implementation.

But this doesn’t make the act of ranking avoidable. We can try to deny its necessity with priceless values or incommensurability. We can try to objectify it with deterministic functions. But we still face trade-offs and every choice we make defines our ranking. We may be unable to construct it in advance, but we can get some sense of it by looking backwards.

It is exactly to avoid the necessity of such choices that idealists construct perfect worlds. And it is exactly through the necessity of such choices that their plans fall apart. Our decisions place value on the priceless, compare the incommensurate, and articulate the indescribable. It is through competing choices that we generate ranking functions for our internal hierarchies, set prices in our economies, and agree on shared goals in the political arena. This competition generates answers that can be flawed in many ways, but remain somewhere between invaluable and optimal because they alone integrate diverse incommensurate systems.

Agreement and Understanding

If you recall our model of action, we create strategies by combining models, goals, and constraints using our meta-capabilities. As we’ve now seen, every component of this process is open to disagreements, which are, not just practically, but theoretically, unresolvable.

To understand others we need to agree on definitions and rules of communication. To make sure that conclusions follow from premises we need to agree on the laws of logic. To decide which axioms to call truths we need to agree on ways to evaluate the premises. To get through this process we need to share enough foundational axioms, capabilities, time, values, and incentives. If successful, we can proceed to use logic and language to build reasonable, but still not unique or indisputable, action hierarchies from our premises.

Even idealized action hierarchies are a product of countless decisions, each with a choice of decision-making methods, with numerous interdependencies between many of the choices and methods. Even before capabilities, constraints, biases, preferences, and incentives enter the picture, positions sprawl into massive, interconnected, magical, and evolving ecosystems.

To understand each other’s hierarchies we need to turn these complex, amorphous systems into unequivocal flowcharts with clear axioms, methods, choices, and trade-offs. More capable people with better intentions and higher awareness are able to get further, perhaps much further, in this process, but the end result still isn’t comprehension, much less agreement; it’s just a fuller understanding of the fundamental reasons we disagree.