Philosophical Disquisitions

Thursday, December 8, 2016

Race is a social construct. Gender is a social construct. Sexuality is a social construct. Money is a social construct. Hierarchies of power? Also, social constructs. It seems like lots of things are social constructs these days. Or, at least, people often make the claim that such-and-such a thing is a ‘social construct’. But what does this really mean?

That is the question answered by Esa Diaz-Leon’s excellent paper ‘What is social construction?’. It tries to clarify the nature of social constructionism by identifying several major types of social construction and tracing out their implications for various social debates. I want to share some of the main ideas from the paper in this post.

1. The Core Tenets of Social Constructionism
It will help if we start with an example. Suppose I say that ‘gender is a social construct’. If I were pressed to explain my views further, what would I say? I would probably say something like: when we refer to person X as being a man or woman we are not simply referring to biological or physical facts about them. Rather, we are referring to a system of collective beliefs and expectations that ascribes the status of womanhood/manhood to them. In other words, I would be contrasting my claim about gender being a social construct with what we might call a ‘biological realist’ claim that gender is simply a function of biologically determined facts.

My claim would carry further implications. Most people who say that ‘such and such a property is a social construction’ usually have normative/political intentions lying behind what they say. They believe that there is something artificial and unjustifiable about the construction in question. They think we should work to reform the social beliefs and practices that reinforce the construction in question. So, for example, someone who thinks gender is a social construction is also likely to believe that ascribing the property of womanhood/manhood to a particular individual is to subject them to oppression and subordination they do not deserve, or to afford them rights and privileges they have not earned.

In her paper, Diaz-Leon looks to work done by Ian Hacking as a starting point for understanding claims about social construction. Hacking suggested that such claims could be broken down into three sub-claims. As modified by Diaz-Leon, these sub-claims look something like this:

Social Constructionism

If someone claims that X is socially constructed, they tend to hold that:

Contingency Claim: The instantiation and distribution of X is contingent on social events and arrangements. If these social events and arrangements were different, then X would be different.

Badness Claim: There is something bad/unjust/inferior about the current instantiation and distribution of X.

Reformation Claim: It would be possible (and better) if X could be done away with or radically transformed.

The contingency claim is really the core of social constructionism. The badness claim is what usually motivates the contingency claim. The reformation claim is linked to both. It is because X is bad that it should be reformed. And it is because X is socially constructed that the reformation is possible.

This last statement is, of course, a fallacy - a fallacy that plagues much of the social constructionist debate. Just because something is socially constructed does not mean it is easy to reform and just because something is biologically real (say) does not mean that it is incapable of reformation. The unequal status of certain races and ethnic groups is historically socially constructed but it is often sustained and reinforced by complex, interlocking networks of social belief and practice. Successfully reforming those networks of belief and practice might be exceptionally hard. Contrariwise, my bad eyesight is biologically real, but it is easily fixed by glasses, contact lenses or corrective surgery.

The main goal of Diaz-Leon’s article is to shed some light on the nature of the social constructionist’s reformation claim. She hopes that by figuring out the different ways in which something can be socially contingent we can in turn figure out whether claims about the ease of social reformation are justified. This will help us to determine whether claims like ‘X is socially constructed’ are true and useful in debates about social justice.

2. Idea Constructionism
There are basically two variables in any social constructionist claim. The first is the phenomenon that is constructed (e.g. gender/race/sexuality/money) and the second is the form of the construction (i.e. the social events and arrangements that sustain the phenomenon). When distinguishing between social constructionist claims, we can use these two dimensions of variance. Diaz-Leon does this, in the first instance, by drawing a distinction between idea-constructionism and object-constructionism, both of which focus on different phenomena that are constructed. Object-constructionism turns out to be far more important, but it’s useful to talk about idea-constructionism first, even if only to dismiss it.

Idea-constructionism suggests that the phenomenon that is constructed is an idea, concept or theory, not something in the real world. An idea-constructionist about gender would, in effect, be saying that our ‘theory or concept of gender is socially constructed’. The problem with this kind of claim is that it is trivially true. Theories, concepts and ideas are all human constructs. They are produced by human beings, operating in social environments, at particular historical moments. The theory of gravity is a social construct in this sense. But the theory of gravity is trying to explain something in the real world, i.e. some phenomenon that is not purely abstract or conceptual. In other words, the theory of gravity is not socially constructed in any interesting or controversial sense. People have tried to make that claim but they are, arguably, misled into thinking that social contingency necessarily debunks or undermines an idea or concept.

Just to be clear, the opposite of idea-constructionism — something we might call idea-determinism — is also problematic. In many cases it is trivially false. Someone who claims that the theory of gravity is entirely determined by the phenomenon in the real world that it purports to explain is wrong. The theory is formulated in ways (e.g. by using particular symbols and equations) that are socially contingent. It is rare, in other words, for any idea, concept or theory to be purely socially constructed or purely determined by phenomena in the real-world. They are usually some mix of both.

That might be all we need to say on idea-constructionism except that some philosophers have tried to render it more plausible. Sally Haslanger is one. She has modified idea-constructionism and distilled it down to three key ideas:

Modified Idea Constructionism

If someone is an idea constructionist with respect to X (where X is some individual) then they are usually sympathetic to:

(a) The social contingency of our understanding of X

(b) Nominalism about X’s kind, or, to put it another way, denial that the domain in which X lies has some inherent, non-human-imposed structure.

(c) Explanations of the stability of X in external rather than internal terms.

The first of these three claims is standard social constructionism. The other two require a bit of explanation. Nominalism is a theory about natural kinds, drawn from metaphysics. The idea is that all kind-membership is determined by human convention and not by any structure that is inherent in the world beyond human convention. The only things that really exist are particular individual objects; all classes or groups are just intellectual conventions. Thus, when I group together a bunch of grey, hard, lumpy objects under the general class of ‘rock’ I am not, according to the nominalist, carving nature at its joints; I am, rather, applying a conventional classification rule that can be helpful to humans but is not reflective of the underlying reality. Idea-constructionists, according to Haslanger, are nominalists about particular phenomena.

The internal/external explanation claim is also a bit tricky. The use of the ‘internal/external’ terminology is not standard. Diaz-Leon suggests that what Haslanger means must be something along the lines of: (i) internal explanations are explanations that appeal strictly to inter-theoretic virtues, e.g. coherence, simplicity, fit with the evidence, and so on; (ii) external explanations are ones that appeal to extra-theoretic, seemingly irrelevant, factors, e.g. political bias/convenience, personal prejudices and so on. So, for example, an idea-constructionist about the theory of gravity might suggest that the theory gained acceptance not because it explained the evidence regarding objects in motion relative to one another, but because the power of the Church was on the wane, the industrial revolution was getting started, and the mechanistic nature of the theory was politically and economically convenient in that historical context.

Diaz-Leon and Haslanger suggest that nominalism is really the most interesting part of modified idea-constructionism, but because nominalism is a metaphysical claim (i.e. a claim about what is or is not out there in the real world) it does not really support idea-constructionism in the strict sense. It is more at home with object-constructionism.

3. Object-Constructionism
Object-constructionism suggests that the phenomenon that is being constructed is not simply an idea or theory but rather some object (or event or state of affairs) that exists in the world. An object-constructionist about gender would, in effect, be saying that whether a particular individual is counted as a woman or man is the product of social events and arrangements, not (primarily anyway) biological or physical properties that they happen to possess.

Object-constructionism comes in two major forms, varying depending on the form of the construction (this is where that second dimension of variance that I mentioned earlier on becomes important). The first of these is causal object constructionism:

Causal Object-Constructionism: X [some particular individual] is socially constructed causally as an F [some property/class ascribed to the individual] iff social factors…play a significant role in causing X to have those features by which it counts as an F. (Haslanger 2003, 317 - taken from Diaz-Leon 2013, 5)

or

X causally constructs Y if and only if X causes Y to exist or persist or X controls the kind-typical properties of Y. (Mallon 2008, 5 - taken from Diaz-Leon 2013, 5)

Any technological artifact would count as causally socially constructed in this sense. Take the wristwatch as an example. The wristwatch is a particular individual object (X) and it belongs to the general class of wristwatches (F: devices that tell the time on your wrist). Clearly, the wristwatch didn’t come into existence through spontaneous creation or a sequence of natural, non-human events. It was designed and fashioned by human beings, operating in particular social circumstances, at particular historical moments. It is, thus, causally socially constructed according to the terms of both definitions. Social factors have played a significant role in causing the wristwatch to have the features by which it counts as a wristwatch.

Contrast that with the other type of object-constructionism: constitutive object constructionism. This can be defined as follows:

Constitutive Object-Constructionism: X is socially constructed constitutively as an F iff X is a kind or sort F such that in defining what it is to be an F we must make reference to social factors. (Haslanger 2003, 318 - taken from Diaz-Leon 2013, 5)

or

X constitutively constructs Y if and only if X’s conceptual or social activity regarding an individual y is metaphysically necessary for y to be a Y. (Mallon 2008, 6 - taken from Diaz-Leon 2013, 5)

Social roles are the classic case of constitutive construction. Take the status of being Prime Minister (or President or whatever political role you like) as an example. This status is determined entirely by social factors and events. It requires compliance with some formally agreed upon procedure (nomination, voting, election) and collective belief in and acceptance of the validity of that procedure. These factors constitute entirely what it means to be Prime Minister. Without these social factors, a particular individual cannot be Prime Minister. These social factors sustain this status on an ongoing basis. If there is a political revolution or a change in the agreed upon procedure or process, the individual who used to be called Prime Minister will lose this status.

This makes constitutive construction quite different from causal construction. A wristwatch won’t lose the properties that make it a wristwatch if there is some change in social factors. If all the wristwatch makers suddenly die, my wristwatch will continue to exist. But if all the officials that make someone Prime Minister are overthrown, and the legal rules change, that person will lose the status of Prime Minister (though, of course, this ends up being a contentious matter when people dispute the social factors that make someone the ‘legitimate’ office holder).

Before we wrap up on this, it is worth noting that some objects can be both causally and constitutively constructed. Money is probably the best example — specifically money in the form of physical currency. Take a dollar bill as an example. This is a social artifact, designed and caused to have certain properties (images, serial numbers, watermarks etc.) by social factors. It is, thus, causally socially constructed. But it only has its status as legal tender thanks to particular legal rules and conventions that are socially accepted. If those rules and conventions change, it will lose its status as money. It is thus constitutively constructed as money.
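The dollar bill example can be captured in a toy sketch (my own illustration, not from Diaz-Leon's paper). The bill's physical features persist regardless of what society does after its creation (causal construction), while its status as legal tender is defined by a revocable social convention (constitutive construction):

```python
# Toy model of the causal/constitutive distinction.
# Physical features: caused by social factors, but metaphysically
# independent of them once the bill exists.
# Legal-tender status: defined by reference to social factors, so it
# vanishes the moment the sustaining convention does.

class Society:
    def __init__(self):
        self.legal_tender_conventions = set()

class DollarBill:
    def __init__(self, serial_number):
        # Fixed at creation; survives any later social change.
        self.serial_number = serial_number

    def is_legal_tender(self, society):
        # Constitutively constructed status.
        return "US dollar" in society.legal_tender_conventions

society = Society()
society.legal_tender_conventions.add("US dollar")
bill = DollarBill("A1234567B")
assert bill.is_legal_tender(society)

# A change in social conventions destroys the constitutive status...
society.legal_tender_conventions.remove("US dollar")
assert not bill.is_legal_tender(society)
# ...but the causally constructed artifact persists unchanged.
assert bill.serial_number == "A1234567B"
```

The asymmetry in the last two assertions is the whole point: reforming the convention changes the status immediately, but leaves the physical object untouched.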

4. What difference does this make?
Now that we are a bit clearer about the different types of social construction we can go back to the earlier question: what difference does this make when it comes to the normative projects underlying claims about social construction? Remember, people who claim that such and such a property or attribute is socially constructed (such as race or gender) are usually claiming that we should reform the social construction so that it is more just or morally appropriate. Does it make a difference if we are dealing with causal object-constructionism or constitutive object-constructionism?

Diaz-Leon thinks it does. Social constructionists think that by reforming or changing social practices and arrangements we can reform the phenomena in which we are interested. But that isn’t necessarily true. There is a general rule of thumb that we should apply:

Rule of Thumb: If X is causally (and not constitutively) socially constructed, then it will be difficult to change X by reforming social practices and arrangements; contrariwise, if X is constitutively socially constructed, then it will be easier to change X by reforming social practices and arrangements.

The argument in favour of this rule of thumb is simple enough. If something is causally constructed, it has a degree of metaphysical independence from the social forces that constructed it. The wristwatch is a good example of this. Human society could collapse and the wristwatch could go on existing. Changing social practices and arrangements won’t necessarily change the wristwatch, though it may prevent future wristwatches from being created. On the other hand, if something is constitutively constructed, there is an immediate metaphysical dependence-relationship between it and the social factors that sustain it. Eliminating those social practices and arrangements will, necessarily, change the object.

But this is only a rule of thumb. The metaphysical dependence between a constitutively constructed object and the underlying social practices and arrangements could be extremely elaborate and complex. You might think that changing one thing will eliminate the problematic construction only to learn that the construction is sustained by a wider set of practices and arrangements. This is probably true in cases of gender and race. Changing one or two laws or institutions has not eliminated all gender/race based discrimination or necessarily changed how the social role of being a woman or a member of an ethnic minority is constructed.

There is more to be said, but I’m going to leave it there. Diaz-Leon has two additional, and important, arguments in her paper. The first claims that just because X is socially constructed does not mean that the properties ascribed to X are not intrinsic to it. The other is that claims to the effect that X is socially constructed and biologically real are not necessarily incompatible with one another, at least in the case of causal construction. You’ll have to read the paper for those arguments though.

Tuesday, November 29, 2016

I want to share an interesting framework for thinking about negative freedom. Negative freedom is a central concept in liberal political theory. One of the primary duties of the state, according to liberal political theory, is to protect negative freedom.

But what does negative freedom consist in? Broadly speaking, negative freedom is the absence of external constraint on action. If I sign my name to the bottom of a document, I do not do so freely if you grabbed my hand and forced me to sign. You are an external constraint. You undermine my negative freedom.

This example is, however, relatively trivial. There are many external constraints on action. Which ones actually undermine my negative freedom? In some sense, I am constrained by my biology. I am not free to stop breathing. I am constrained from breathing anything other than oxygen. But does that mean that the necessity for oxygen-breathing is a freedom-undermining external constraint?

Or take a more contentious example. Suppose I work in an office. My office manager suggests that if I want to get a promotion I should wash his car every weekend. Suppose I duly go and wash his car every weekend. Am I doing so freely? Or does his not-so-subtle hint constitute a freedom-undermining interference? These are the kinds of the questions that fill the pages of political philosophy journals.

In their article, ‘Freedom as Independence’, Christian List and Laura Valentini do two things that help us to answer some of these questions. First, they map out the ‘logical space’ of negative freedom. And second, they use this map to identify and make the case for a new theory of negative freedom — one that has been overlooked by liberal theorists to date.

I want to describe these two features of List and Valentini’s article in this post. I do so because I think their methodology for mapping out the logical space of freedom can be useful in other contexts, and also because I think the idea of ‘freedom as independence’ is worth considering.

[Note: I covered some of this ground already in my post ‘The Logical Space of Algocracy (Redux)’. This is a slightly longer explanation of the discussion of List and Valentini’s framework that occurred in that post.]

1. The Logical Space of Freedom
Two theories of negative freedom predominate in contemporary political theory. The first is the classic liberal theory of freedom as non-interference:

Freedom as Non-Interference: An agent’s freedom to do X is the actual absence of relevant constraints on the agent’s doing X.

I am free to walk down the street provided that no one or no external force is actually constraining me from walking down the street. This is a simple, clean but ultimately problematic way to define negative freedom. There are a couple of features of the definition that are worth calling out. Notice first how it includes the phrase ‘relevant’ constraints. This is a sop to the fact that there is disagreement about what counts as a freedom-undermining constraint. To get a sense of the scope of the disagreement, I would suggest reading my earlier post on Quentin Skinner’s genealogy of freedom. There, I distinguished between external force, coercion, and self-sabotage as potentially relevant types of interference.

Notice second the use of the term ‘actual’ in the definition. This tells us that freedom as non-interference is a non-modal definition of negative freedom. It is only concerned with what happens in the actual world, not with what happens in other possible worlds. This is thought to be a problem by so-called neo-republican theorists of freedom. They think that limiting the focus to the actual world means that liberals cannot account for the absence of freedom in the case of the happy slave. The ‘happy slave’ is a thought experiment in which we are asked to imagine a slave who conforms his/her will to that of his/her master. In other words, they act in a way that always pleases their master. As a result, the slave master never interferes with or imposes constraints on their actions. This means that, according to the definition given above, the slave is free: there are no relevant constraints on their ability to act in this world.

This seems unsatisfactory to the republicans. If you look beyond this actual world to other possible worlds, it seems clear that the slave’s freedom is being undermined. If the slave happens to act in a way that does not please their master, their master stands ready to intervene and prevent them from doing so. They live under the dominion of the slave master. This suggests to the republicans that negative freedom requires more than the absence of constraint in this world. It requires the absence of relevant constraints across a number of possible worlds. They use this to form their own preferred conception of freedom, something called freedom as non-domination.

Freedom as Non-Domination: An agent’s freedom to do X is the robust absence of arbitrary relevant constraints on the agent’s doing X.

We have added two terms to the definition. The ‘robust’ descriptor is supposed to capture the modal nature of freedom as non-domination (i.e. the absence of constraint across a number of possible worlds). The ‘arbitrary’ descriptor requires more explanation. On top of thinking that there is a modal dimension to freedom, many republicans think that there is a moral dimension to it too. In other words, not all constraints are morally equal. Some are justified. If I commit a crime and am imprisoned as a result, my freedom to act is constrained, but we might view this as a morally justified constraint. And so, we might be inclined not to include that within the scope of freedom-undermining constraints. This is why we might focus on the absence of arbitrary constraints. (This isn’t quite right but I’ll say more about it in a moment).

Up to this point, we have just been describing the two major theories of negative freedom. When List and Valentini do this in their paper, they note something interesting. They note that the two theories vary along two dimensions: the modal dimension (are they robust or not?) and the moral dimension (do they limit themselves to arbitrary interferences or not?). This suggests that it is possible to arrange theories of freedom into a two-by-two matrix, illustrating the variance along both dimensions. And when you do this, you see something that you might otherwise miss: freedom as non-interference and freedom as non-domination only represent two out of four possible conceptualisations of negative freedom.
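The two-by-two matrix can be sketched quite literally (my own illustration of List and Valentini's logical space, not taken from the paper): each theory is fixed by two yes/no choices, and enumerating the choices generates all four:

```python
# Each theory of negative freedom is fixed by two binary dimensions:
# does it impose a robustness (modal) requirement, and does it include
# a moralised exemption clause? Theory names follow List and Valentini.
THEORIES = {
    # (robust?, moralised?): theory
    (False, False): "freedom as non-interference",
    (False, True):  "moralised freedom as non-interference (Nozick, Dworkin)",
    (True,  True):  "freedom as non-domination",
    (True,  False): "freedom as independence",  # the neglected quadrant
}

for (robust, moralised), name in sorted(THEORIES.items()):
    print(f"robust={robust!s:<5}  moralised={moralised!s:<5}  ->  {name}")
```

The point of the enumeration is that only three of the four cells are occupied in the existing literature; the (robust, non-moralised) cell is where freedom as independence sits.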

Now, as it happens, there are some liberal theories of freedom that belong in the upper right quadrant (i.e. that are moral but non-robust). List and Valentini mention the theories of freedom defended by Robert Nozick and Ronald Dworkin as specific examples. These theories say that in order to have negative freedom you must be free from arbitrary constraint in the actual world. But the lower left quadrant (robust but non-moralised) is almost completely neglected. The purpose of List and Valentini’s article is to describe and defend the theory of freedom that belongs in this quadrant.

2. Defending Freedom as Independence
They call this theory of freedom ‘freedom as independence’. It can be defined like this:

Freedom as Independence: An agent’s freedom to do X is the robust absence of relevant constraints on the agent’s ability to do X.

The theory is non-moral and modal. It says that you must be free from constraints across a range of possible worlds — it shares this requirement with freedom as non-domination. But it also says that any relevant constraint (even if it is morally justified) counts against your freedom.

List and Valentini present a detailed argument in defence of freedom as independence. I can’t hope to do justice to the nuances of that argument in this post. I’ll just give you a sense of how it works. It starts with two desiderata that plausible theories of freedom ought to meet: (i) they ought to ‘pick out as sources of unfreedom those modal constraints on action that stand in need of justification’ (List and Valentini 2016, 1049); and (ii) they ought to ‘display[] an adequate level of fidelity to ordinary-language use’ (List and Valentini 2016, 1051). The argument then works like this:

(1) There are four logically possible theories of negative freedom: (i) freedom as non-interference; (ii) moralised freedom as non-interference; (iii) freedom as independence; and (iv) freedom as non-domination. The theories vary depending on whether they have a robustness requirement (or not) and a moralised exemption clause (or not).

(2) A plausible theory of freedom should have a robustness requirement (therefore theories (i) and (ii) are ruled out).

(3) A plausible theory of freedom should not have a moralised exemption clause (therefore theory (iv) is ruled out).

(4) Therefore, of the four logically possible theories of negative freedom, freedom as independence is the most plausible.

The bulk of the argumentation comes in the defence of premises (2) and (3). List and Valentini defend premise (2) on fairly standard grounds: by appealing to the happy slave thought experiment. They think it is a major defect of all liberal theories of freedom as non-interference that they cannot account for the unfreedom of the slave. The slave’s situation, even if they are happy, stands in need of moral justification and so the first desideratum is not met. Also, in most ordinary language analyses, we would be inclined to say that a slave is unfree (indeed, the slave’s situation may be the paradigm of unfreedom) and so the second desideratum is not met.

That’s the basic argument anyway. The details are a little bit more complicated. As they point out in the paper, there are a few different approaches to freedom as non-interference that attempt to account for the slave’s predicament, e.g. by including something like a robustness requirement that focuses on the probability of interference. List and Valentini dismiss solutions of this sort on the grounds that even if the interference from a slave master was improbable it would stand in need of justification and would not conform with our ordinary language usage. Other modifications are discussed and dismissed in the paper. This leaves premise (2) in reasonably good health.

List and Valentini’s argument for premise (3) is more complex. The initial case in its favour is straightforward. Suppose we accept a moral exemption clause. In that case, we would say that someone who was justly imprisoned was not unfree. But this would fail to satisfy our two desiderata: imprisoning someone definitely stands in need of moral justification, and ordinary language usage would insist that an imprisoned person is not free.

The argument becomes complex when List and Valentini try to use it to make the case against freedom as non-domination. The problem is that although they categorise that theory as having a moral exemption clause, some of its most famous proponents insist that it does not. It all comes down to how we interpret the word ‘arbitrary’ in the definition. Philip Pettit — probably the most famous neo-republican — has argued that it does not have a moral connotation. All he means when he says that you must be robustly free from arbitrary constraint is that you are free from constraints that do not match your own avowed interests. So if I have an interest in not paying tax, but the government insists upon taking tax monies from me (or stands ready to do so in some nearby possible world) then my freedom as non-domination is compromised. This is true even if taking tax money is morally legitimate.

List and Valentini argue against Pettit on a couple of grounds. They think a non-moralised theory of arbitrary constraints creates problems when you turn to politics. Pettit has expended considerable energies in recent years trying to square his neo-republican view with democracy. He tries to argue that neo-republicanism supports democratic decision-making on the grounds that democratic decision procedures are the way to work out the citizens’ avowed interests. But List and Valentini say that this only works if the conception of ‘interests’ that is at play in this argument is moral.

The problem is that virtually every democratic decision — barring one involving complete unanimity — will involve the creation of coercive policies that go against at least one citizen’s avowed interests. That much is clear from the tax example I just gave. Democratic decision-making requires compromise: some avowed interests will have to give way to others. The only way to resolve this is to take on board a moralised theory of avowed interests, i.e. to insist that some interests are morally legitimate and others are not. But if you make that move you resign yourself to the lower right-hand quadrant of the logical space of freedom.

A neo-republican could hang tough and insist on the non-moralised account of arbitrary interferences. But List and Valentini think that this will be unpalatable because the neo-republican theory also purports to provide some account of political justice.

That’s roughly the argument anyway. As I say, there is more detail in the paper than I can hope to cover here. To summarise, by noticing how freedom as non-interference and freedom as non-domination vary along two distinct dimensions, List and Valentini help to construct a logical space of possible theories of negative freedom. Doing so enables them to spot a neglected theory of freedom, namely freedom as independence. This theory has been ignored in the literature to date but is arguably more plausible than the existing contenders.

Sunday, November 27, 2016

(The following is, roughly, the text of a talk I delivered to the IP/IT/Media law discussion group at Edinburgh University on the 25th of November 2016. The text is much longer than what I actually presented and I modified some of the concluding section in light of the comments and feedback I received on the day. I would like to thank all those who were present for their challenging and constructive feedback. All of this builds on a previous post I did on the ‘logical space of algocracy’)
I’m going to talk to you today about ‘algocracy’ - or ‘rule by algorithm’. ‘Algocracy’ is an unorthodox term for an increasingly familiar phenomenon: the use of big data, predictive analytics, machine learning, AI, robotics (etc.) in governance-related systems.

I’ve been thinking and writing about the rise of algocracy for the past three years. I’m currently running a project at NUI Galway about it. The project is kindly funded by the Irish Research Council and will continue until May 2017. I’ve also published a number of articles about the topic, both on my blog and in academic journals. If you are interested in what I have to say, I would like to suggest checking out my blog where I keep an index to all my writings on this topic.

Today I want to try something unusual. Unusual for me at any rate. I’m normally an arguments-guy. In my presentations I like to have an argument to defend. I like to start the presentation by identifying the key premises and conclusions of that argument; I like to clarify the terminology; and I like to spend the bulk of my time defending the argument from a series of attacks.

I’m not going to do that today. I’m going to try something different. I’m going to try to map out a conceptual framework for thinking about the phenomenon of algocracy. I’ll do this in five stages. First, I’ll talk generally about why I think conceptual frameworks of this sort are important and what we should expect from a good conceptual framework. Second, I’ll outline some of the conceptual frameworks that have been offered to date to help us understand algocracies. I’ll explain what I like and don’t like about those frameworks and what I think is missing from the current conversation. Third, I will introduce a method for constructing conceptual frameworks that is based on the work of Christian List. Fourth, I will adopt that method and construct my own suggested conceptual framework: the logical space of algocracy. And then fifth, and finally, I will highlight some of the advantages and disadvantages of this logical space.

At the outset, I want to emphasise that everything I present here today is a work in progress. I know speakers always say this in order to protect themselves from criticism, but it’s more true in this case than most. I’ve been mulling over this framework for a couple of years but never pursued it in any great depth. I agreed to give this talk partly in an attempt to motivate myself to think about it some more. Of course, I agreed to this several months ago and, predictably and unsurprisingly, I managed to procrastinate about it until five days ago when I started writing this talk.

I’m not going to say that the ideas presented here are under-baked, but I will say that they are under-cooked. I hope they are thought-provoking and that in the discussion session afterwards we can figure out whether they are worth bringing to the table. (Apologies for the strained culinary metaphor)

1. Why I love Conceptual Frameworks
I use the term ‘conceptual framework’ to describe any thinking tool that tries to unify and cohere concepts and ideas. I’m a big fan of conceptual frameworks. In many ways, I have spent the past half decade collecting them. This is one of the major projects on my blog. I like to review conceptual frameworks developed by other authors, play around with them, see if I truly understand how they work, and then distill them down into one-page images, flowcharts and diagrams.

In preparation for this talk, I decided to look over some of my past work and I thought I would share with you a few of my favourite conceptual frameworks.

First up is Nicole Vincent’s Structured Taxonomy of Responsibility Concepts. This is something I stumbled upon early in my PhD research about the philosophy of criminal responsibility. It has long been noted that the word ‘responsible’ can be used to denote a causal relationship, a moral relationship, a character trait, and an ethical duty, among other things. HLA Hart tried to explain this in his famous parable of the sea captain and the sinking ship. The beauty of Vincent’s framework is that it builds upon the work done by Hart and maps out the inferential relationships between the different concepts of responsibility.

Second, we have Quentin Skinner’s Genealogy of Freedom, a wonderfully elegant family tree of the major concepts of freedom that have been articulated and defended since the birth of modern liberalism. Skinner describes the basic core concept of freedom as the power to act plus some additional property. He then traces out three major accounts of that additional property: non-domination; non-interference; and self-realisation.

Third, there is Westen’s four concepts of consent. Consent is often described as being a form of ‘moral magic’ - it is the special ingredient that translates morally impermissible acts (e.g. rape) into permissible ones (e.g. sexual intercourse). But the term consent is used in different ways in legal and moral discourse. Westen’s framework divides these concepts of consent up in two main sub-categories: factual and prescriptive. He then identifies two further sub-types of consent under each category. This helps to make sense of the different claims one hears about consent in moral and legal debates.

Speaking of claims about consent, here’s a slightly different conceptual framework. The previous examples are all taxonomies and organisational systems. Alan Wertheimer’s map of the major moral claims that are made about intoxication and consent to sex is an attempt to work out how arguments relate to one another. Wertheimer starts his detailed paper on the topic by setting out five claims that are typically made about intoxicated consent. My diagram tries to depict the inferential relationships between these claims. I think this helps to give us a ‘lay of the land’ (so to speak) when it comes to this controversial topic. Once we appreciate the lay of the land, we can understand where someone is coming from when they make a claim about intoxicated consent and where they are likely to end up.

Fifth, here is Matthew Scherer’s useful framework for thinking about the regulation of Artificial Intelligence. This adds another dimension to a conceptual framework: a temporal dimension. It shows how different regulatory problems arise from the use of Artificial Intelligence at different points in time. There are ex ante problems that arise as the technology is being created. And there are ex post problems that arise once it has been deployed and used. It is useful to think about the different temporal locations of these problems because some institutions and authorities are better placed than others to address problems arising at particular points in time.

Finally, and another version of the time-sensitive conceptual framework, we have this life-cycle of prescriptive legal theories, developed by David Pozen and Jeremy Kessler. A prescriptive legal theory is a theory of legal decision-making that tries to remove contentious moral content from a decision-making rule (a classic example would be the originalist theory of interpretation). Kessler and Pozen noticed patterns in the development and defence of prescriptive legal theories. Their life-cycle is designed to organise these patterns into distinctive stages. The major insight from this lifecycle is that prescriptive legal theories usually work themselves ‘impure’ - i.e. they end up reincorporating the contentious moral content they were trying to avoid.

I could go on, but I won’t. Like I said, I enjoy collecting and diagramming conceptual frameworks of this sort. But I think it would be more useful at this stage to draw some lessons from these six examples. In particular, it would be useful to highlight the key properties of good conceptual frameworks. I don’t think we can be exhaustive or overly prescriptive in this matter: good, creative scholarship will come up with new and exciting conceptual frameworks. Nevertheless, the following general principles would seem to apply:

A good conceptual framework should enable you to understand some phenomenon of interest.

A good conceptual framework should allow you to see conceptual possibilities you may have missed (e.g. theories of freedom or responsibility that you have overlooked)

A good conceptual framework should enable you to see how concepts relate to one another.

A good conceptual framework should allow you to see opportunities for research and further investigation.

A good conceptual framework should appreciate complexity while aiming for simplicity.

There are also, of course, risks associated with conceptual frameworks. They can be Procrustean. They can become reified (treated as things in themselves rather than as tools for understanding things). They can be overly simplistic, causing us to ignore complexity and miss important opportunities for research. There is a fine line to be walked. Good conceptual frameworks find that line; bad ones miss it.

2. Are there any conceptual frameworks for understanding algocracies?
That’s all by way of set-up. Now we turn to the meat of the matter: can we come up with good conceptual frameworks for understanding algocracies? Two things will help us to answer this question. First, getting a better sense of what an algocracy is. Second, taking a look at some of the existing conceptual frameworks for understanding algocracies.

I said at the very start that ‘algocracy’ is an unorthodox term for an increasingly familiar phenomenon: the use of big data, predictive analytics, machine learning, AI, robotics (etc.) in governance-related systems. The term was not coined by me, though I have certainly run with it over the past few years. It was coined by the sociologist A. Aneesh during his PhD research back in the early 2000s. That research culminated in a book in 2006 called Virtual Migration, in which he used the concept to understand changes in the global labour market. He has also used the term in a number of subsequent papers.

Aneesh’s main interest was in different human governance systems. A governance system can be defined, roughly, like this:

Governance system: Any system that structures, constrains, incentivises, nudges, manipulates or encourages different types of human behaviour.

It’s a very general, wishy-washy definition, but ‘governance’ is quite a general wishy-washy term so that seems appropriate. Aneesh drew a contrast between three main types of governance system in his research: markets, bureaucracies and algocracies. A market is a governance system in which prices structure, constrain, incentivise, nudge (etc) human behaviour. And a bureaucracy is a governance system in which rules and regulations structure, constrain, incentivise, nudge (etc.) human behaviour. Which means that an algocracy can be defined as:

Algocracy: A governance system in which computer coded algorithms structure, constrain, incentivise, nudge, manipulate or encourage different types of human behaviour. (Note: the concept is very similar to the ‘code is law’ idea promoted by Lawrence Lessig in legal theory but to explain the similarities and differences would take too long)

In his study of global labour, Aneesh was struck by how many workers in the developing world (particularly India, where his studies took place) were working for companies and organisations that were legally situated in other jurisdictions. This was thanks to the new technologies (computers + internet) that facilitated remote work. This gave rise to new algocratic governance systems within corporations, which sidestepped or complemented the traditional market or bureaucratic governance systems within such organisations.

That’s the origin of the term. I tend to use the term in a related but slightly different sense. I certainly look on algocracies as kinds of governance system — ones in which behaviour is shaped by algorithmically programmed architectures. But I also use the term by analogy with terms like ‘democracy’, ‘aristocracy’, ‘technocracy’. In each of those cases, the suffix ‘cracy’ is used to mean ‘rule by’ and the prefix identifies who does the ruling. So ‘democracy’ is ‘rule by the people’ (the demos), aristocracy is ‘rule by aristocrats’ and so on. Algocracy then can also be taken to mean ‘rule by algorithm’, with the emphasis being on rule. In other words, for me ‘algocracy’ captures the authority that is given to algorithmically coded architectures in contemporary life. Whenever you are denied a loan by a credit-scoring algorithm; whenever you are told which way to drive by a GPS routing-algorithm; or whenever your name is added to a no-fly list by a predictive algorithm, you are living within an algocratic system. It is my belief, and I think this is borne out in reality, that algocratic systems are becoming more pervasive and important in human life. I think this is especially true because algorithms are the common language in which computers, smart devices and robots communicate. So as these artifacts become more pervasive, so too will the phenomenon of algocracy.

So what kinds of conceptual frameworks can we bring to bear on this phenomenon? Some work has been done already on this score. There are emerging bodies of scholarship in law, sociology, geography, philosophy, and information systems theory (among many more) that address themselves to the rise of algocracy (though they tend not to use that term) and some scholars within those fields have developed organisational frameworks for understanding and researching algocracies. I’ll focus on legal contributions in this presentation since that’s what I am most familiar with, and since I think what has been presented in legal theory so far tends to be shared by other disciplines.

I’ll start by looking at two frameworks that have been developed in order to help us understand how algocratic systems work.

The first tries to think about the various stages involved in the construction and implementation of an algocratic system. Algocracies do things. They make decisions about human life; they set incentives; they structure possible forms of behaviour; and so on. How do they manage this? Much of the answer lies in how they use data. Zarsky (2013) suggests that there are three main stages in an algocratic system: (i) a data collection stage (where information about the world and relevant human beings is collected and fed into the system); (ii) a data analysis stage (where algorithms structure, process and organise that data into useful or salient chunks of information) and (iii) a data usage stage (where the algorithms make recommendations or decisions based on the information they have processed).

Citron and Pasquale (2014) develop a similar framework. They use different terminology but they talk about the same thing. They focus in particular on credit-scoring algocratic systems which they suggest have four main stages to them. This is illustrated in the diagram below:

Effectively, what they have done is to break Zarsky’s ‘usage’ stage into two: a dissemination stage (where the information processed and analysed by the algorithms gets communicated to a decision-maker) and a decision-making stage (where the decision-maker uses the information to do something concrete to an affected party, e.g. deny them a loan because of a bad credit score).

Another thing that people have tried to do is to figure out how humans relate to or get incorporated into algocratic systems. A common classificatory framework — which appears to have originated in the literature on automation — distinguishes between three kinds of system:

Human-in-the-loop System: These are algocratic systems in which an input from a human decision-maker is necessary in order for the system to work, e.g. to programme the algorithm or to determine what the effects of the algorithmic recommendation will be.

Human-on-the-loop Systems: These are algocratic systems which have a human overseer or reviewer. For example, an online mortgage application system might generate a verdict of “accept” or “reject” which can then be reviewed or overturned by a human decision-maker. The system can technically work without human input, but can be overridden by the human decision-maker.

Human-out-of-the-loop Systems: This is a fully algocratic system, one which has no human input or oversight. It can collect data, generate scores, and implement decisions without any human input.

This framework is useful because the relationship of humans to these systems is quite important when we turn to consider the normative and ethical implications of algocracy.
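The three loop types can be distinguished by just two questions: is human input necessary, and is there human oversight? A minimal sketch (the function `loop_type` and its inputs are my own illustrative invention, not part of the automation literature’s terminology):

```python
def loop_type(human_input_required, human_oversight):
    """Classify an algocratic system by the role humans play in it (toy sketch)."""
    if human_input_required:
        return 'human-in-the-loop'
    if human_oversight:
        return 'human-on-the-loop'
    return 'human-out-of-the-loop'

# e.g. the reviewable mortgage-application system described above:
print(loop_type(False, True))   # 'human-on-the-loop'
print(loop_type(False, False))  # 'human-out-of-the-loop'
```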

This brings us to the third type of conceptual framework I wanted to mention. These ones focus on identifying and taxonomising the various problems that arise from the emergence of algocratic systems. Zarsky, for instance, developed the following taxonomy, which focused on two main types of normative problem: fairness-related problems and efficiency-related problems. I constructed this diagram to visually represent Zarsky’s taxonomy.

More recently, Mittelstadt et al have proposed a six-part conceptual map to help understand the ethical challenges posed by algocratic decision-making systems. This can be found in their paper ‘The Ethics of Algorithms’.

While each of these conceptual frameworks has some use, I find myself dissatisfied by the work that has been done to date. First, I worry that the frameworks introduced to help us understand how algocratic systems work are both too simplistic and too disconnected. It is important to think about the different stages inside an algocratic system and about how humans relate to and get affected by those systems. But it is important to remember that the relations that humans have to these systems can vary, depending on the stage that we happen to be interested in. There is a degree of complexity to how these stages get constructed and this is something that is missed by the simple ‘in the loop/on the loop/out of the loop’ framework. Furthermore, while I’m generally much happier with the work done on taxonomising and categorising the ethical challenges of algocracy, I worry that this work also tends to be disconnected from the complexities of algocratic systems. This is something that a good conceptual framework would avoid.

So can we come up with one?

3. A Model for Building Conceptual Frameworks: List’s Logical Spaces
I think we can. And I think some of the work done by Christian List is instructive in this regard. So what I propose to do in the remainder of this talk is develop a conceptual framework for understanding algocracy that is modelled on a series of conceptual frameworks developed by List.

List, in case you don’t know him, is a philosopher at the London School of Economics. He is a major proponent of formalised and axiomatised approaches to philosophy. Most of his early work is on public choice theory, voting theory and decision theory. More recently, he has turned his attention to other philosophical debates (e.g. philosophy of mind and free will). He has also written a couple of papers in the past half decade on the logical spaces in which different political concepts such as ‘democracy’ and ‘freedom’ live.

List’s logical spaces try to identify all the concepts of freedom or democracy that are possible, given certain constraints. It is difficult to understand this methodology in the abstract, so let’s look at his logical spaces of freedom and democracy for guidance.

Freedom is a central concept in liberal political theory. Indeed, liberalism is, in essence, founded on the notion that political systems must respect individual freedom. But what does this freedom consist in? List argues that two major theories of freedom predominate in contemporary debates (cf. Skinner’s genealogy of freedom, which I detailed earlier on): freedom as non-interference and freedom as non-domination. The former holds that we are free if we are free from relevant external constraints; the latter holds that we are free if we are robustly free from arbitrary interference.

The difference is subtle to the uninitiated but essential to those who care about these things. I have written several posts about both theories in the past if you care to learn more (LINKs). List suggests that the theories vary along two dimensions: the modal and the moral. That is to say, they vary depending on (a) whether they think the freedom to act requires not just freedom in this actual world but freedom across a range of possible worlds; and (b) whether they only recognise as interferences with freedom those interferences that are not morally grounded (i.e. interferences that are ‘arbitrary’). Freedom as non-interference is, typically, non-modal and non-moral: it focuses on what happens in the actual world, but counts all relevant interferences in the actual world, regardless of their moral justification, as freedom-undermining. Contrast that with republican theories of freedom as non-domination. These theories are modal and moral: they depend on the absence of interference across multiple possible worlds, but only count interferences that are arbitrary (i.e. not morally grounded). (Technical aside: some republicans, like Pettit, have argued that freedom as non-domination can be de-moralised but List argues that this is an unstable position - I won’t get into the details here)

What’s interesting from List’s perspective is that even though most of the contemporary debate settles around these two concepts of freedom, there is a broader logical space of freedom that is being ignored. After all, there are two dimensions along which theories of freedom can vary which suggests, at a minimum, four logically possible theories of freedom. The two-by-two matrix below depicts this logical space:

The advantages of mapping out this logical space become immediately apparent. They allow List to discover and argue in favour of an ignored or overlooked theory of freedom: the one in the bottom right corner. And this is exactly what he does in a paper published last year in Ethics with Laura Valentini entitled ‘Freedom as Independence’.

How about democracy? List takes a similar approach. He argues that democracy is, at its root, a collective decision-making procedure. It is a way of taking individual attitudes toward propositions or claims (e.g. ‘I prefer candidate A to candidate B’ or ‘I prefer policy X to policy Y’) and aggregating them together to form some collective output. This is illustrated schematically in the diagram below.

One of List’s key arguments, developed in his paper ‘The Logical Space of Democracy’ is that the space of logically possible collective decision procedures — i.e. ways of going from the individual attitudes to collective outputs — is vast. Much larger than any human can really comprehend. To give you a sense of how vast it is, imagine a really simple decision problem in which two people have to vote on two options: A and B. There are four possible combinations of votes (as each voter has two options). And there are several possible ways to go from those combinations to a collective decision (2^4 = 16, to be precise). For example, you could adopt a constant A procedure, in which the collective attitude is always A, irrespective of the individual attitudes. Or you could have a constant B procedure, in which the collective attitude is always B, irrespective of the individual attitudes. We would typically exclude such possibilities because they seem undesirable or counterintuitive, but they do lie within the space of logically possible aggregation functions. Likewise, there are dictatorial decision procedures (always go with voter 1, or always go with voter 2) and inverse dictatorial decision procedures (always do the opposite of voter 1, or the opposite of voter 2).

You might find this slightly silly because, at the end of the day, there are still only two possible collective outputs (A or B). But it is important to realise that there are many logically possible ways to go from the individual attitudes to the collective one. This highlights some of the problems that arise when constructing collective decision procedures. And, remember, this is just a really simple example involving two voters and two options. The logical space gets unimaginably large if we go to decision problems involving, say, ten voters and two options (List has the calculation in his paper: it is 2^1024).
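These counts follow from simple arithmetic: with n voters and k options there are k^n possible vote profiles, and an aggregation procedure assigns one of the k options to each profile, giving k^(k^n) procedures. A quick check (the function name `num_procedures` is mine, not List’s):

```python
# Size of the space of logically possible aggregation procedures:
# k**n vote profiles, and k possible collective outputs per profile,
# so k**(k**n) distinct procedures in total.
def num_procedures(n_voters, n_options=2):
    return n_options ** (n_options ** n_voters)

print(num_procedures(2))                # 16: the two-voter, two-option case
print(num_procedures(10) == 2 ** 1024)  # True: List's ten-voter case
```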

A logical space with that many possibilities would not provide a useful conceptual framework. Fortunately, there is a way to narrow things down. List does this by adopting an axiomatic method. He specifies some conditions (axioms) that any democratic decision procedure ought to satisfy in advance, and then limits his search of the logical space of possible decision procedures to the procedures that satisfy these conditions. In the case of democratic decision procedures, he highlights three conditions that ought to be satisfied: (i) robustness to pluralism (i.e. the procedure should accept any possible combination of individual attitudes); (ii) basic majoritarianism (i.e. the collective decision should reflect the majority opinion); and (iii) collective rationality (i.e. the collective output should meet the basic criteria for rational decision making). He then highlights a problem with these three conditions. It turns out that it is impossible to satisfy all three of them at the same time (due to classic ‘voting paradoxes’). Consequently, the space of logically possible democratic decision procedures is smaller than we might first suppose. We are left with only those decision procedures that satisfy at least two of the mentioned conditions. Once you pare the space of possibilities down to this more manageable size you can start to think more seriously about its topographical highlights. That’s what the diagram below tries to illustrate.
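To see how imposing axioms pares the space down, here is a toy enumeration with three voters and two options. I should stress that `anonymous` and `majoritarian` below are my own simplified stand-ins, not List’s actual formal conditions (his robustness, majoritarianism and rationality axioms are defined over richer attitude structures):

```python
from itertools import product

voters, options = 3, ('A', 'B')
profiles = list(product(options, repeat=voters))           # 8 vote profiles

# each function from profiles to an option is one aggregation procedure
procedures = list(product(options, repeat=len(profiles)))  # 2**8 = 256

def anonymous(proc):
    """Output depends only on the vote tally, not on which voter voted how."""
    seen = {}
    for profile, out in zip(profiles, proc):
        tally = profile.count('A')
        if seen.setdefault(tally, out) != out:
            return False
    return True

def majoritarian(proc):
    """Output always matches the (here always strict) majority."""
    return all(out == ('A' if profile.count('A') >= 2 else 'B')
               for profile, out in zip(profiles, proc))

print(len(procedures))                           # 256 procedures in total
print(sum(anonymous(p) for p in procedures))     # 16 survive anonymity
print(sum(majoritarian(p) for p in procedures))  # 1 survives majority-matching
```

Even these crude conditions cut 256 possibilities down to 16, and then to 1, which is the shape of the axiomatic method: the axioms do the navigating through an otherwise unmanageable space.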

I don’t want to dwell on the intricacies of List’s logical spaces, I’m only referencing them because I think they provide a useful methodology for constructing conceptual frameworks. They balance the tradeoff between complexity and simplicity quite effectively and exhibit a number of other features listed earlier on. By considering the various dimensions along which particular phenomena can vary, List allows us to see conceptual possibilities that are often overlooked. Sometimes the number of conceptual possibilities identified can be overwhelming, but by applying certain axioms we can constrain our search of the logical space and make it more manageable.

4. Constructing A Logical Space of Algocracy
So can we apply the same approach to algocracy? I think we can. We can start by identifying the parameters (dimensions) along which various algocratic procedures vary.

At a first pass, three parameters seem to define the space of possible algocratic decision procedures. The first is the particular domain or type of decision-making. Legal and bureaucratic agencies make decisions across many different domains. Planning agencies make decisions about what should be built and where; revenue agencies sort, file and search through tax returns and other financial records; financial regulators make decisions concerning the prudential governance of financial institutions; energy regulators set prices in the energy industry and enforce standards amongst energy suppliers; the list goes on and on. In the formal model I outline below, the domain of decision-making is ignored. I focus instead on two other parameters defining the space of algocratic procedures. But this is not because the domain is unimportant. When figuring out the strengths or weaknesses of any particular algocratic decision-making procedure, the domain of decision-making should always be specified in advance.

The second parameter concerns the main components of the decision-making ‘loop’ that is utilised by these agencies. In section two, I mentioned Zarsky’s and Citron and Pasquale’s attempts to identify the different ‘stages’ in algocratic decision-procedures. One thing that strikes me about the stages identified by these authors is how closely they correspond to the stages identified by authors looking at automation and artificial intelligence. For instance, the collection, analysis and usage stages identified by Zarsky feel very similar to the sensing, processing and actuating stages identified by AI theorists and information systems engineers.

This makes sense. Humans in legal-bureaucratic agencies use their intelligence when making decisions. Standard models of intelligence divide this capacity into three or four distinct tasks. If algocratic technologies are intended to replace or complement that human intelligence, it would make sense for those technologies to fit into those distinct task stages.

My own preferred model for thinking about the stages in a decision-making procedure is to break it down into four distinct stages. As follows:

(a) Sensing: the system collects data from the external world.

(b) Processing: the system organises that data into useful chunks or patterns and combines it with action plans or goals.

(c) Acting: the system implements its action plans.

(d) Learning: the system uses some mechanism that allows it to learn from what it has done and adjust its earlier stages (this results in a ‘feedback loop’).

Although individual humans within bureaucratic agencies have the capacity to perform these four tasks themselves, the work of an entire agency can also be conceptualised in terms of these four tasks. For example, a revenue collection agency will take in personal information from the citizens in a particular state or country (sensing). This information will typically take the form of tax returns, but may also include other personal financial information. The agency will then sort that collected information into useful patterns, usually by singling out the returns that call for greater scrutiny or auditing (processing). Once they have done this they will actually carry out audits on particular individuals, and reach some conclusion about whether the individual owes more tax or deserves some penalty (acting). Once the entire process is complete, they will try to learn from their mistakes and triumphs and improve the decision-making process for the coming years (learning).
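The revenue-agency example can be sketched as a four-stage pipeline. All the names, data and thresholds below are invented purely for illustration:

```python
def sense():
    # collect tax returns (here: one hard-coded record)
    return [{'id': 1, 'declared': 40000, 'observed': 55000}]

def process(returns, threshold=10000):
    # single out returns where observed income far exceeds declared income
    return [r for r in returns if r['observed'] - r['declared'] > threshold]

def act(flagged):
    # carry out an audit on each flagged return
    return [{'id': r['id'], 'action': 'audit'} for r in flagged]

def learn(outcomes):
    # stand-in for the feedback loop, e.g. tuning the flagging threshold
    return len(outcomes)

audits = act(process(sense()))
print(audits)         # [{'id': 1, 'action': 'audit'}]
print(learn(audits))  # 1
```

The point of writing it this way is that each stage is a separate, swappable component, which is exactly what makes it possible to hand any one of them over to an algorithm.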

The important point in terms of mapping out the logical space of algocracy is that algorithmically coded architectures could be introduced to perform one or all of these four tasks. Thus, there are subtle and important qualitative differences between the different types of algocratic system, depending on how much of the decision-making process is taken over by the computer.

In fact, it is more complicated than that, and this is what brings us to the third parameter. This one concerns the precise relationship between humans and algorithms for each task in the decision-making loop. As I see it, there are four general relationship-types that could arise: (1) humans could perform the task entirely by themselves; (2) humans could share the task with an algorithm (e.g. humans and computers could perform different parts of the analysis of tax returns); (3) humans could supervise an algorithmic system (e.g. a computer could analyse all the tax returns and identify anomalies and then a human could approve or disapprove its analysis); and (4) the task could be fully automated, i.e. completely under the control of the algorithm.

This is where things get interesting. Using the last two parameters, we can construct a grid which we can use to classify algocratic decision-procedures. The grid looks something like this:

This grid tells us that when constructing or thinking about an algocratic system we should focus on the four different tasks in the typical intelligent decision-making loop and ask of each task: how is this task being distributed between the humans and algorithms? When we do so, we see the logical space of possible algocratic decision procedures.
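The grid itself is easy to enumerate. A minimal sketch (the relationship labels in the comment are my paraphrases of the four types listed above):

```python
from itertools import product

stages = ('sensing', 'processing', 'acting', 'learning')
relationships = (1, 2, 3, 4)  # 1 = human only, 2 = shared, 3 = supervised, 4 = automated

# every assignment of a relationship-type to each of the four stages is one
# point in the logical space of algocratic procedures (within a fixed domain)
space = list(product(relationships, repeat=len(stages)))
print(len(space))  # 256
```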

5. Advantages and Disadvantages of the Logical Space Model
That brings us to the critical question: does this conceptual framework have any of the virtues I mentioned earlier on?

I think it has a few. I think it captures the complexity of algocracy in a way that existing conceptual frameworks do not. It tells us that there is a large logical space of possible algocratic systems. Indeed, it allows us to put some numbers on it. Since there are four stages and four possible relationship-types between humans and computers at those four stages, it follows that there are 4^4 possible systems (i.e. 256) within any given decision-making domain. What’s more, I think you could make the logical space even more complex by adding a third dimension of variance. What would that dimension consist in? Well, one obvious suggestion would be to distinguish between different types of algorithmic assistance/replacement at each of the four stages. For instance, computer scientists sometimes distinguish between algorithmic processes that are (i) interpretable and (ii) non-interpretable (i.e. capable of being deconstructed and understood by humans or not). That could be an additional dimension of variance. It would mean that for each stage in the decision-making process there are eight possible configurations, not just four. That would give us a logical space consisting of 8^4 possibilities (i.e. 4,096).

But the interpretability/non-interpretability distinction is just one among many possible candidates for a third dimension of variance. Which one we pick will depend on what we are interested in (I’ll return to this point below).

Another virtue of the logical space model is that it gives us an easy tool for coding the different possible types of algocratic system. For the two-dimensional model, I suggest that this be done using square brackets and numbers. Within the square brackets there would be four separate number locations. Each location would represent one of the four decision-making tasks. From left-to-right this would read: [sensing; processing; acting; learning]. You then replace the names of those tasks with numbers ranging from 1 to 4. These numbers would represent the way in which the task is distributed between the humans and algorithms. The numbers would correspond to the numbers given previously when explaining the four possible relationships between humans and algorithms. So, for example:

[1, 1, 1, 1] = Would represent a non-algocratic decision procedure, i.e. one in which all the decision-making tasks are performed by humans.

[2, 2, 2, 2] = Would represent an algocratic decision procedure in which each task is shared between humans and algorithms.

[3, 3, 3, 3] = Would represent an algocratic decision procedure in which each task is performed entirely by algorithms, but these algorithms are supervised by humans with some residual possibility of intervention.

[4, 4, 4, 4] = Would represent a pure algocratic decision procedure in which each task is performed by an algorithm, with no human oversight or intervention.

If we created a three dimensional logical space, we could simply modify the coding system by adding a letter after each number to indicate the additional variance. For example, if we adopted the interpretability/non-interpretability dimension, we could add ‘i’ or ‘ni’ after each number to indicate whether the step in the process was interpretable (i) or not (ni). As follows:

[4i, 4ni, 4i, 4ni] = Would represent a pure algocratic procedure that is interpretable at the sensing and acting stages, but not at the processing and learning stages.

This coding mechanism could have some practical advantages. Three are worth mentioning. First, it could give the designer or creator of an algocratic system a quick tool for figuring out what kind of system they are creating and the potential challenges that might be raised by the construction of that system. Second, it could give a researcher something to use when investigating real-world algocratic systems and seeing whether they share further properties. For instance, you could start investigating all the [3, 3, 3, 3] systems across various domains of decision-making and see whether the human supervision is active or passive across those domains. Third, it might give us a simple tool for measuring how algocratic a system is or how algocratic it becomes over time. So we might be able to say that a [4ni, 4ni, 4ni, 4ni] system is more algocratic than a [4i, 4i, 4i, 4i] system, and we might be able to spot the drift towards more algocracy within a decision-making domain.
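To illustrate how the coding might be operationalised for that third purpose, here is a toy sketch. The scoring rule is my own assumption, not part of the framework: each stage contributes its relationship number (1 to 4), plus a penalty of 1 if the stage is non-interpretable, so more automated and more opaque systems score higher.

```python
# A code is a list of (relationship, interpretability) pairs, one per
# stage, in the order [sensing, processing, acting, learning].
# Relationship runs from 1 (human only) to 4 (fully automated);
# interpretability is "i" (interpretable) or "ni" (non-interpretable).

def algocracy_score(code):
    """Toy measure of how algocratic a system is (an assumption)."""
    return sum(rel + (1 if interp == "ni" else 0) for rel, interp in code)

human_only = [(1, "i")] * 4        # [1, 1, 1, 1]
supervised = [(3, "i")] * 4        # [3, 3, 3, 3]
pure_transparent = [(4, "i")] * 4  # [4i, 4i, 4i, 4i]
pure_opaque = [(4, "ni")] * 4      # [4ni, 4ni, 4ni, 4ni]

# The ordering matches the intuition in the text: a fully automated,
# non-interpretable system is "more algocratic" than a transparent one.
assert algocracy_score(human_only) < algocracy_score(supervised)
assert algocracy_score(pure_transparent) < algocracy_score(pure_opaque)
```

Any monotonic scoring rule would do here; the point is only that a fixed code makes comparisons and drift-tracking mechanical.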

But there are also clearly disadvantages with the logical space model. The most obvious is that the four stages and four relationships are not discrete in the way that the model presumes. To say that a task is ‘shared’ between a human and an algorithm is to say something imprecise and vague. There may be many different possible ways in which to share a task. Not all of them will be the same. This is also true for the description of the tasks. ‘Processing’, ‘collecting’ and ‘learning’ are all complicated real-world tasks. There are many different ways to process, collect and learn. That additional complexity is missed by the logical space model.

It’s hard to say whether this is a fatal objection or not. All conceptual models involve some abstraction and simplification of reality. And all conceptual models ignore some element of variation. List’s logical space of freedom, for instance, involves similarly large amounts of abstraction and simplification. To say that theories of freedom vary along modal and moral dimensions is to say something very vague and imprecise. Specific theories of freedom will vary in how modal they are (i.e. how many possible worlds they demand the absence of interference in) and in their understanding of what counts as a morally legitimate interference. As a result of this, List prefers to view his logical space of freedom as a ‘definitional schema’ - something that is fleshed out in more detail with specific conceptualisations of the four main categories of freedom. It is tempting to view the logical space of algocracy in a similar light.

Another obvious problem with the logical space model is that it is constructed with a particular set of normative challenges in mind. I was silent about this in my initial description of it, and indeed I didn’t fully appreciate it until afterwards, but it’s pretty clear looking back on it that my logical space is useful primarily for those with an interest in the procedural virtues of an algocratic system. As I have argued elsewhere, one of the main problems with the rise of algocracy is that it could undermine meaningful human participation in and comprehension of the systems that govern our lives. That’s probably why my logical space model puts such an emphasis on the way in which tasks are shared between humans and algorithms. I’m concerned that when there is less sharing, there is less participation and comprehension.

But this means that the model is relatively silent about some of the other normative concerns one could have about these technologies (e.g. bad data, biased data, negative consequences). It’s not that these concerns are completely shut out or shut down; it’s just that they aren’t going to be highlighted simply by identifying the location within the logical space that is occupied by any particular algocratic system. What could happen, however, is that empirical investigation of algocratic systems with similar codes could reveal additional shared normative advantages/disadvantages, so that the code becomes shorthand for those other concerns.

Again, it’s hard to say whether this is fatal or not. It might just mean that the logical space I constructed is not ‘the’ logical space of algocracy but rather ‘a’ logical space of algocracy. Other people, with other interests, could construct other logical spaces. That doesn’t mean this particular logical space is irrelevant or useless; it just means its relevance and utility are more constrained.

Anyway, I think I have said enough for now. I’ll leave things there and hand it over to you for questions.

Monday, November 21, 2016

In this episode I talk to Nicole Vincent. Nicole is an international philosopher extraordinaire. She has appointments at Georgia State University, TU Delft (Netherlands) and Macquarie University (Sydney). Nicole's work focuses on the philosophy of responsibility, cognitive enhancement and neuroethics. We talk about two main topics: (i) can neuroscience make us happier? and (ii) how should we think about radically changing ourselves through technology?

Thursday, November 17, 2016

How should we decide what gets made, when it gets made, and who should get it once it is made? This is one of the foundational questions of economics. Proponents of the free market insist that private individuals, interacting with one another via a marketplace, responding to a price mechanism, should determine the answers; proponents of central planning think that a suitably organised government bureaucracy should do the work; others prefer a mixed approach.

Friedrich Hayek’s knowledge argument is a famous contribution to this debate. It extolls the benefits of the free market over central planning. Although there are many explanations and commentaries on Hayek’s argument, I have yet to come across one that I really like — one that both does justice to the nuances of Hayek’s original claims while at the same time highlighting their flaws. Richard Bronk’s article ‘Hayek on the wisdom of prices: A reassessment’ is the closest thing I have read, but Bronk’s article lacks concision and clarity.

I want to make up for this. I want to extract the logical core of Hayek’s argument, revealing the key premises and assumptions that go into it, and then subject it to critical scrutiny. I am going to do this over the course of two posts. In this first post I’ll just go through the main steps in Hayek’s argument, briefly commenting on its flaws. In the second post I’ll look in more detail at the weaknesses in the argument, focusing in particular on Bronk’s own arguments about the flaws of the price mechanism.

1. From the Distribution Problem to the Knowledge Problem
Let’s call the ‘what gets made, who gets it’ (etc) problem the ‘distributional problem’:

Distributional problem: All societies need to figure out how best to distribute their scarce resources (material resources, labour, time etc.), i.e. they need to figure out what gets done, when and by whom.

It is important not to underestimate the difficulty of the distributional problem. Human society is both complicated and complex. It consists of many different, dynamically interrelated parts. Figuring out who wants what, who needs what, and how they relate to one another is a fiendishly difficult thing. There are thousands of distributional decisions that need to be made on a minute-to-minute basis. How many shoes should be made? How many shoelaces? How much food should be grown? What types of food? What skills should be taught? Who should teach them? And on and on.

Hayek’s key insight was to suggest that the answer to the distributional problem depends upon the answer to another problem:

Knowledge Problem: To figure out who should get what and when, we need to know certain things: we need to know what people want and need, what resources are available to meet those wants and needs, what the best (most efficient) means of deploying those resources is, how people react to our distributional decisions and so on.

It is also important not to underestimate the difficulty of the knowledge problem. Given the complex and complicated nature of human society, there are many discrete and constantly changing knowledge gaps that need to be addressed if we are to figure out who should get what and when.

The essence of Hayek’s knowledge argument is that central planners are not very good at solving the knowledge problem whereas free markets, despite some obvious flaws, are. Those two claims constitute the core of his ‘knowledge argument’. Let’s look at both in more detail.

2. The Case Against Central Planning
The first claim is that central planners fail to solve the knowledge problem. Why not? To answer that we need to understand what central planning is, and it is, in fact, a somewhat complex notion. Roughly, we are talking about a state-run bureaucracy that collects information and makes decisions about what should get made and how it should be distributed. There are many different ways for this to play out in practice. You could imagine a single, dictatorial bureaucrat sitting at the centre of an institution deciding what should get done and when in a largely intuitive manner. Or you could imagine something more complex and technocratic, like the cybernetic management system that was used by the Allende government in Chile in the 1970s. There are also ways in which central planners could create market-like structures that replicate some, but not all, features of the free market (itself a highly contested concept). The possible market-like structures that could be adopted featured heavily in the general ‘socialist calculation debate’ in economics (to which Hayek’s argument is a contribution).

So when Hayek says that a centrally planned economy will not solve the knowledge problem what kind of centrally planned economy is he talking about? The general model would be something along the lines of what existed in Soviet Russia: reasonably complex bureaucratic organisations where information is collected and processed by diverse (sometimes politically antagonistic) groups and fed through some decision-making system. The key point is that there is a kind of ‘bottleneck’ within the system. Instead of distributional decisions being made all the time and in parallel, distributional decisions are forced through a single bureaucratic decision-making node. This means that all the information relevant to the distributional decision needs to reach that node. Hayek argues that this is not going to happen.

The argument works like this:

(1) If a centrally planned economy is going to work (i.e. going to solve the distribution problem), central planners will need the knowledge relevant to making distributional decisions.

(2) Central planners cannot have the relevant knowledge.

(3) Therefore, a centrally planned economy is not going to work.

The second premise is key here. Hayek presents four arguments in support of that premise. The first is:

(4) Much of the knowledge required for distributional decision-making is tacit, i.e. cannot be easily translated into explicit representations that can then be communicated between relevant decision-makers.

I discussed the phenomenon of tacit knowledge in a previous post about automation and unemployment. The basic idea is that much of the know-how underlying the creation and supply of goods and services is tacit. It is based on practical, oftentimes subconscious, skills that individual workers and manufacturers have acquired over the course of their working lives. Think of the expert surgeon who has performed thousands of hours of complex surgery and intuits when something is going wrong. They act on these intuitions and they often, consequently, improve the quality of the service they provide. There is nothing necessarily mystical or unusual in this ability. The intuitions don’t come from nowhere; they come from practical experience. But they are, nonetheless, very difficult to express and communicate. It is hard to see how a central planner could gain access to this tacit knowledge unless they themselves replicated the experience levels of the individual suppliers of goods and services.

The second argument in support of premise (2) is:

(5) The knowledge required is too diverse to be amassed into (and appreciated by) one perspective.

Markets are complex and multi-faceted. The knowledge any one individual has of the market is necessarily partial and incomplete. Hayek argues, and this seems plausible, that no one individual or group is likely to be able to amass all those partial perspectives into a unified and complete perspective. Instead, what will happen is that central planners will think they have complete knowledge. They will become over-confident in their ability to understand and predict the behaviour of the people affected by their decisions.

The third argument in support of premise (2) is:

(6) Central planners cannot know subjective values and subjective values are part of the knowledge needed to solve the distributional problem.

Hayek defended the subjective theory of value. He held that the value of a good or service was determined by the interaction of the subjective preferences of the agents supplying and demanding that good or service. It was not determined by any intrinsic/objective property of the good or service. Scarce resources are best distributed when the actions of suppliers are responsive to the preferences of demanders. But since it is impossible to really know what is really going on in someone’s mind — i.e. to know what they truly prefer — it follows that it is impossible for a central planner to have access to all the knowledge they need. At best, they will get a partial understanding of subjective value by examining the external behaviour of individuals, but this external behaviour can be misleading.

The fourth and final argument in support of premise (2) is:

(7) Central planners cannot rival the knowledge discovery mechanisms of the free market, and knowledge discovery is also essential to solving the knowledge problem.

This is probably the most complicated aspect of the case against central planning. The idea is that the efficient distribution of goods and services does not just depend on current knowledge; it also depends on creation and innovation. Suppliers discover more efficient ways of doing things: they innovate in production processes, creating new machinery and new tools, and they innovate in supply chains, creating new ways to get goods and services to consumers. In other words, they create new forms of knowledge that then get fed into the resolution of the distributional problem. Central planners may be able to encourage some experimentation and innovation but they will never, according to Hayek, rival the creative potentialities of the free market. (I should say that this is something hotly contested by defenders of socialist planning like Oskar Lange, and there are some historical counterpoints highlighting the role of big government projects in innovation and experimentation.)

Anyway, this gives us the first part of Hayek’s argument (the case against central planning). The argument is diagrammed below.

3. The Case in Favour of Free Markets
The second part of Hayek’s argument is an argument in favour of free markets. To some extent, this argument is implicit in the critique of central planning: the knowledge gaps faced by central planners would not, it is claimed, arise on the free market. But you cannot get all the way to that conclusion from what has been said thus far. After all, at least some of the knowledge gaps that arise for central planners would seem to arise on the free market too. If knowledge is tacit, diverse and subjective, then surely it is just as difficult for it to be discovered by players on the market?

This is where Hayek makes his most famous contribution. He argues that the free market has one tool at its disposal that can help to fill in these knowledge gaps: the price mechanism. For him, the free market functions like an information communications system, with prices being the signals that communicate important information (knowledge) to the players on the market.

The argument works a little something like this:

(8) If free markets are going to solve the distribution problem, they will have to solve the knowledge problem.

(9) A key feature of a free market is the price mechanism: the supply and demand-related decisions of the players on the free market create and respond to prices.

(10) The price mechanism can solve the knowledge problem.

(11) Therefore, free markets can solve the distribution problem.

There is a lot that needs to be said about this argument. The first premise (8) simply applies the general principle underlying Hayek’s argument (i.e. that the distributional problem depends on the knowledge problem). The second premise (9) appeals to a key feature of the free market. As we will see below, it is not a unique feature of the free market (prices can exist on ‘unfree’ markets), but prices do function in a particular way on the free market. The third premise (10) follows this up by highlighting the particular way in which prices function, namely to solve the knowledge problem. ‘Solves’ is a bit strong, of course. We are not going to fill every relevant knowledge gap. The idea is, rather, that prices do a better job than a central planner ever could.

So premise (10) is then the key to the whole argument. What can be said in its favour? Three things stand out from Hayek’s discussion:

(12) Prices encapsulate the subjective, diverse and tacit knowledge of the players on the market.

The subjective knowledge about how much individual consumers and suppliers value a good or service is encapsulated in the market price. This price is the result of diverse, locally-situated actors coming together and interacting on a marketplace. In other words, it pulls together the diverse perspectives that are difficult to encapsulate in the spreadsheets and statistical data beloved by central planners. We can also argue that the price draws upon the tacit knowledge of the producers and suppliers working on the market. They have the know-how required to produce goods and services and they can communicate the value of that know-how through the price they charge.

(13) Prices respond to developments, e.g. changes in preference, new discoveries or innovations in production and so forth.

The individual suppliers and consumers change the prices they are willing to receive and spend in response to local, dynamically updated information. Furthermore, the players on the market are incentivised to do new things in the hope that it will result in higher profits or lower costs. If a new production process is discovered that supplies a good or service at a cheaper cost, this will be fed into the market price, thereby telling people that a new production process is worth taking onboard (and vice versa).

(14) Prices communicate relevant information to people, thereby enabling them to know what is and is not worth doing on the market.

One of the big problems facing central planners is that they have to know what needs to be produced and what needs to be supplied, and then communicate this knowledge to the people who make and supply things. This is not easy: you have to somehow draw all the knowledge together and give a quick and easy signal that conveys that knowledge. Prices address this problem with remarkable efficiency. Prices don’t tell you everything you need to know about human wants and needs and how best to meet them. But they do provide a nice, simple, and clear signal of what people want and what methods will best meet those wants. The signal (the price) compresses a lot of information into one place and is readily available to anyone who needs to know it. This helps solve the communication problem that would otherwise arise.
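To make the signalling idea a little more concrete, here is a toy price-adjustment simulation (my own illustration, not Hayek's, and the linear demand and supply schedules are pure assumptions): the price is nudged up whenever demand exceeds supply and down otherwise, so it converges toward the level at which the dispersed plans of buyers and sellers are reconciled, without any party needing to know the others' preferences.

```python
def demand(price):
    # Buyers want less as the price rises (illustrative linear schedule).
    return max(0.0, 100.0 - 2.0 * price)

def supply(price):
    # Sellers offer more as the price rises (illustrative linear schedule).
    return max(0.0, 3.0 * price - 20.0)

def find_market_price(price=1.0, step=0.01, iterations=10_000):
    """Adjust the price in proportion to excess demand each round."""
    for _ in range(iterations):
        excess = demand(price) - supply(price)
        price += step * excess
    return price

p = find_market_price()
# Equilibrium is where 100 - 2p = 3p - 20, i.e. p = 24.
print(round(p, 2))  # prints 24.0
```

No agent in this loop knows the full demand or supply schedule; the adjusting price alone carries the information needed to coordinate them, which is the compression Hayek has in mind.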

This second portion of the knowledge argument is diagrammed below.

4. Problems and Next Steps
Hayek’s argument is not above criticism. This is to state the obvious. But three criticisms are worth mentioning here by way of conclusion. The first is that market prices will only work their communicative magic if they are undistorted by government interference. For Hayek, prices need to freely respond to changes in local behaviour and to new discoveries if they are going to collate and communicate relevant knowledge. If the government intervenes by setting price floors or price ceilings, this cannot happen. Similarly, if the government imposes additional costs where none should arise, you get further distortions in the knowledge being communicated.

Second, even though Hayek thinks that prices contain a lot of the information needed to solve the knowledge problem, he does not think that they contain all relevant information. This makes his view quite different from modern-day proponents of the efficient market hypothesis (who do think that market prices contain all relevant information). Hayek thinks that the players on the market are constantly trying to achieve some informational advantage over their peers: they are trying to discover new production processes or spot knowledge gaps that others have missed (opportunities for arbitrage). This is both healthy and necessary. It means that markets can encourage innovation and that prices can constantly adapt and update in response to new information. If market prices already contain all the relevant information, it would be difficult to make sense of much market behaviour.

Third, Hayek’s argument overlooks the various ways in which markets can themselves distort prices, either by failing to collate some relevant information or by being hijacked by dominant narratives. Bronk argues that this is more common than we might like to think, particularly in certain markets. This is possibly the most interesting critique of Hayek’s argument and I will look at it in more detail in a future post.