Tag: Digital Inequalities

One of the most pressing issues we confront when analysing the digital economy is a pronounced tendency towards oligopoly which makes a lie of an earlier generation’s utopian embrace of the Internet as a sphere of free competition and a driver of disintermediation. There are important lessons we can learn from platform studies about the reasons for this, concerning the architecture of platforms and the logic of their growth. But it’s important we don’t lose sight of how these dynamics are reliant upon existing legal and economic processes which predate the ‘digital revolution’. As Jonathan Taplin points out in Move Fast and Break Things, their competitive advantage was reliant upon a specific regulatory environment that was far from inevitable. From pg 79:

The economist Dean Baker has estimated that Amazon’s tax-free status amounted to a $20 billion tax savings to Bezos’s business. Baker notes, “In a state like New York, where combined state and local sales taxes average over 8.0 percent, Amazon could charge a price that was 1.0 percent below its brick and mortar competition, and still have an additional profit of 7 percent on everything it sold. That is a huge deal in an industry where profits are often just 2–3 percent of revenue.” Bezos, eager to preserve this subsidy, went to work in Washington, DC, and got Republican congressman Christopher Cox and Democratic senator Ron Wyden to author the Internet Tax Freedom Act. The bill passed and was signed by President Bill Clinton on October 21, 1998. Although not barring states from imposing sales taxes on ecommerce, it does prevent any government body from imposing Internet-specific taxes.
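Baker’s arithmetic can be restated in a few lines. This is only a sketch of the numbers in the quote, not a model of Amazon’s actual finances:

```python
# Baker's arithmetic, as quoted above: with combined state and local sales
# taxes averaging over 8% in New York, a tax-exempt online retailer can
# undercut brick-and-mortar rivals by 1% and still keep ~7 points of margin.
sales_tax_advantage = 0.08   # tax the online retailer need not charge
price_cut = 0.01             # discount versus in-store prices
extra_margin = sales_tax_advantage - price_cut   # retained as profit: 0.07

typical_margin = 0.025  # retail profits are "often just 2-3 percent of revenue"
print(f"extra margin: {extra_margin:.0%}, vs typical margin: {typical_margin:.1%}")
```

The point of the comparison is that the retained 7 points dwarf the 2–3 percent margins typical of the industry.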

This is only one example. An adequate understanding of the digital economy requires that we identify the regulatory environments within which each category of tech firm operates and how this has contributed to their thriving or struggling. When we combine this institutional analysis with platform dynamics, we can begin to account for the level of market concentration which Taplin summarises on pg 119-120:

In antitrust law, an HHI score—according to the Herfindahl-Hirschman Index, a commonly accepted measure of market concentration—is calculated by squaring the market share of each firm competing in a given market and then adding the resulting numbers. The antitrust agencies generally consider markets in which the HHI is between 1,500 and 2,500 to be moderately concentrated; markets in which the HHI is in excess of 2,500 are highly concentrated. The HHI in the Internet search market is 7,402. Off the charts.
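The calculation Taplin describes translates directly into code. The market shares below are hypothetical, chosen only to illustrate a search-like market dominated by one firm; the real figure he cites is 7,402:

```python
def hhi(shares_pct):
    """Herfindahl-Hirschman Index: sum of squared market shares (in percent)."""
    return sum(s ** 2 for s in shares_pct)

def concentration(score):
    # Thresholds used by the US antitrust agencies, as summarised in the quote.
    if score > 2500:
        return "highly concentrated"
    if score >= 1500:
        return "moderately concentrated"
    return "unconcentrated"

# Hypothetical shares for a search-like market (not the actual figures):
search_like = [86, 6, 4, 2, 1, 1]
score = hhi(search_like)
print(score, concentration(score))
```

One dominant firm drives the index: a single 86 percent share alone contributes 7,396 points, already far past the 2,500 threshold.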

He goes on to argue on pg 121-122 that this situation helps generate a cash glut with serious systemic consequences:

The problem is that the enormous productivity of these companies, coupled with their oligopolistic pricing, generates a huge and growing surplus of cash that goes beyond the capacity of the economy to absorb through the normal channels of consumption and investment. This is why Apple has $150 billion in cash on its balance sheet and Google has $75 billion. These enterprises cannot find sufficient opportunities to reinvest their cash because there is already overcapacity in many areas and because they are so productive that they are not creating new jobs and finding new consumers who might buy their products. As former treasury secretary Lawrence Summers has put it, “Lack of demand creates lack of supply.” Instead of making investments that could create new jobs, firms are now using their cash to buy back stock, which only increases economic inequality.

In other words: the inequality which digital capitalism generates is only contingently a function of technology.

In the last few weeks, I’ve found myself using the term ‘playbook’ in a number of contexts. It’s typically defined as “a book containing a sports team’s strategies and plays, especially in American football” but I’m not quite sure where I picked up the phrase from as someone who hasn’t had much interest in sport for a long time.

It’s been on my mind since reading Merchants of Doubt, an incisive historical exploration of a dangerous corporate tendency towards the deliberate cultivation of doubt in relation to pressing issues such as nuclear winter, acid rain, DDT and climate change. As I suggested in a post a couple of weeks ago, we can talk meaningfully of a ‘playbook for merchandising doubt’. In fact something akin to this was once explicitly published, as the authors of Merchants of Doubt summarise on pg 144-145:

Bad Science: A Resource Book was a how-to handbook for fact fighters. It contained over two hundred pages of snappy quotes and reprinted editorials, articles, and op-ed pieces that challenged the authority and integrity of science, building to a crescendo in the attack on the EPA’s work on secondhand smoke. It also included a list of experts with scientific credentials available to comment on any issue about which a think tank or corporation needed a negative sound bite. Bad Science was a virtual self-help book for regulated industries, and it began with a set of emphatic sound-bite-sized “MESSAGES”:

1. Too often science is manipulated to fulfill a political agenda.

2. Government agencies … betray the public trust by violating principles of good science in a desire to achieve a political goal.

3. No agency is more guilty of adjusting science to support preconceived public policy prescriptions than the Environmental Protection Agency.

4. Public policy decisions that are based on bad science impose enormous economic costs on all aspects of society.

6. Proposals that seek to improve indoor air quality by singling out tobacco smoke only enable bad science to become a poor excuse for enacting new laws and jeopardizing individual liberties.

Has anyone encountered comparable documents to this? The scale and organisation of doubt merchandising surely means they have been produced. But perhaps there’s a broader category to be explored here: the explicit articulation of surreptitious tactics.

It highlights how coordination presupposes communication, suggesting that even the most duplicitous strategies of the powerful will tend to leave a paper trail. Where we see what appears to be organisation, even if the actors involved deny this, do we have reason to believe there may somewhere exist a ‘playbook’ or something akin to it? I would tentatively define this as the formal articulation of a tactical repertoire that can be drawn upon in informal contests, even if the definition of these elements may be obscured behind a thick veneer of technocratic distance. By ‘informal contests’ I mean those where rules are not defined or a contest actually declared. The existence of a playbook reveals how advantages in organisational capacity might translate to a practical advantage in competition.

I’d be intrigued to know if these ruminations resonate with anyone, particularly those who might be able to furnish further examples.

In Naomi Klein’s new book No Is Not Enough, there’s a lucid overview of the intersection between political and environmental crisis. The role of drought in fomenting the conditions for the Syrian civil war was something which Marc Hudson first explained to me last year. From pg 182-183:

The irony is particularly acute because many of the conflicts driving migration today have already been exacerbated by climate change. For instance, before civil war broke out in Syria, the country faced its deepest drought on record—roughly 1.5 million people were internally displaced as a result. A great many displaced farmers moved to the border city of Daraa, which happens to be where the Syrian uprising broke out in 2011. Drought was not the only factor in bringing tensions to a head, but many analysts, including former secretary of state John Kerry, are convinced it was a key contributor.

In fact, if we chart the locations of the most intense conflict spots in the world right now—from the bloodiest battlefields in Afghanistan and Pakistan, to Libya, Yemen, Somalia, and Iraq—what becomes clear is that these also happen to be some of the hottest and driest places on earth. The Israeli architect Eyal Weizman has mapped the targets of Western drone strikes and found an “astounding coincidence.” The strikes are intensely concentrated in regions with an average of just 200 millimeters (7.8 inches) of rainfall per year—so little that even slight climate disruption can push them into drought.

In other words, we are bombing the driest places on the planet, which also happen to be the most destabilized. A frank explanation for this was provided in a US military report published by the Center for Naval Analyses a decade ago: “The Middle East has always been associated with two natural resources, oil (because of its abundance) and water (because of its scarcity).” When it comes to oil, water, and war in the Middle East, certain patterns have become clear over time. First, Western fighter jets follow that abundance of oil in the region, setting off spirals of violence and destabilization. Next come the Western drones, closely tracking water scarcity as drought and conflict mix together. And just as bombs follow oil, and drones follow drought—so, now, boats follow both. Boats filled with refugees fleeing homes ravaged by war and drought in the driest parts of the planet.

Surely these intersections should be at the forefront of how we imagine social processes? I realise there are many reasons why this isn’t the case, but the one I’ve been pondering is the sustained hold of the nature/society distinction. If we see nature and society as distinct domains, we’re liable to be blind to the environmental factors at work in social catastrophe. Only an idiot would deny the relationship in principle, but the effects are projected into the future, as an expected horizon in which the natural will impact upon the social. In doing so, their present entanglement, and all the consequences flowing from it, comes to be lost in the analysis of events which are interpreted as narrowly political.

One of the most interesting issues raised by the rise of data science in party politics is how to untangle corporate rhetoric from social reality. I have much time for the argument that we risk taking the claims of a company like Cambridge Analytica too seriously, accepting at face value what are simply marketing exercises. But the parallel risk is that we fail to take them seriously enough, dismissing important changes in how elections are fought as marketing hype propounded by digital charlatans.

Perhaps we need to focus more on the data scientists themselves. As much as there is something of the Bond villain about Alexander Nix, CEO of Cambridge Analytica, it’s important that we don’t become preoccupied with corporate leaders. Who are the rank-and-file data scientists working on campaigns? What motivates them? How do they conceive of the work they do? There were interesting hints about this in the recent book Shattered, looking at Hillary Clinton’s failed election campaign. Much as was the case with Jeb Bush’s near-entirely stalled campaign, there had been much investment in data analytics, with buy-in right from the top of the campaign. From pg 228-229:

These young data warriors, most of whom had grown up in politics during the Obama era, behaved as though the Democratic Party had come up with an inviolable formula for winning presidential elections. It started with the “blue wall”—eighteen states, plus the District of Columbia, that had voted for the Democratic presidential nominee in every election since 1992. They accounted for 242 of the 270 electoral votes needed to win the presidency. From there, you expanded the playing field of battleground states to provide as many “paths” as possible to get the remaining 28 electoral votes. Adding to their perceived advantage, Democrats believed they’d demonstrated in Obama’s two elections that they were much more sophisticated in bringing data to bear to get their voters to the polls. For all the talk of models and algorithms, the basic thrust of campaign analytics was pretty straightforward when it came to figuring out how to move voters to the polls. The data team would collect as much information as possible about potential voters, including age, race, ethnicity, voting history, and magazine subscriptions, among other things. Each person was given a score, ranging from zero to one hundred, in each of three categories: probability of voting, probability of voting for Hillary, and probability, if they were undecided, that they could be persuaded to vote for her. These scores determined which voters got contacted by the campaign and in which manner—a television spot, an ad on their favorite website, a knock on their door, or a piece of direct mail. “It’s a grayscale,” said a campaign aide familiar with the operation. “You start with the people who are the best targets and go down until you run out of resources.”
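The “grayscale” targeting logic described in the passage can be sketched as follows. The field names and the composite ranking rule are my assumptions for illustration; the book reports the three scores but does not specify how they were combined:

```python
from dataclasses import dataclass

@dataclass
class Voter:
    name: str
    p_vote: int      # probability of voting (0-100)
    p_support: int   # probability of voting for the candidate (0-100)
    p_persuade: int  # probability an undecided voter can be persuaded (0-100)

def prioritise(voters, contacts_budget):
    """Rank voters by a composite score and contact the best targets
    until resources run out -- the 'grayscale' logic in the passage."""
    # Hypothetical composite: weight turnout likelihood by the better of
    # support or persuadability.
    ranked = sorted(
        voters,
        key=lambda v: v.p_vote * max(v.p_support, v.p_persuade),
        reverse=True,
    )
    return ranked[:contacts_budget]

voters = [Voter("A", 90, 80, 10), Voter("B", 50, 95, 0), Voter("C", 95, 20, 60)]
for v in prioritise(voters, 2):  # contact the two best targets
    print(v.name)
```

The interesting sociological question is precisely what goes into the scoring function here, since that choice, invisible to the voters themselves, determines who gets a knock on the door and who gets nothing.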

Understanding these ‘data warriors’ and the data practices they engage in is crucial to understanding how data science is changing party politics. Perhaps it’s even more important than understanding high profile consultancies and the presentations of their corporate leaders.

That’s the question I’ve been asking myself when reading through two books by Nick Couldry in which he develops a materialist phenomenological approach to understanding social reality. The first is The Mediated Construction of Social Reality (with Andreas Hepp) and the second is Media, Society, World. It’s in the latter book that he considers the representational power of media. From loc 683:

Media institutions, indeed all media producers, make representations: they re-present worlds (possible, imaginary, desirable, actual). Media make truth claims, explicit or implicit: the gaps and repetitions in media representations, if systematic enough, can distort people’s sense of what there is to see in the social and political domains.

There is a political economy underpinning this, in terms of the capacity to make such representations and the gains accruing from this capacity. The common reference points which accumulate as a consequence serve a broader economic purpose. From loc 701:

However, if basic consumer demand – for fashion, music, sport – is to be sustained at all, it requires ‘the media’ to provide common reference points towards which we turn to see what’s going on, what’s cool.

The interests and influence in play here have been crucial to the unfolding of late modernity. Media has been a site through which power has consolidated. What we are seeing with ‘post-truth’ is a deconsolidation of this apparatus, taking place at a number of different levels. From loc 886:

Representations matter. Representations are a material site for the exercise of, and struggle over, power. Put most simply, our sense of ‘what there is’ is always the result of social and political struggle, always a site where power has been at work. But fully grasping this in relation to media is difficult: because the role of media institutions is to tell us ‘what there is’ – or at least what there is that is ‘new’ – media’s work involves covering over its daily entanglement in that site of power. Media aim to focus populations’ attention in a particular direction, on common sites of social and political knowledge. Media institutions’ embedding as the central focus of modern societies is the result of a history of institutional struggle that is becoming more, not less, intense in the digital media era. It is essential to deconstruct the apparently natural media ‘order’ of contemporary societies.

One of the arguments which pervades Uberworked and Underpaid, by Trebor Scholz, concerns the materiality of digital labour. As someone whose back and neck start to ache if I spend too much time at a computer, I’ve always found the tendency to assume there is something mysteriously immaterial about using computers to be rather absurd. But there’s more to Scholz’s argument than this generic tendency to fail to recognise the embodied character of digital engagement. From loc 4103:

It’s worth remembering that whether a worker toils in an Amazon warehouse or works for crowdSPRING, her body will get tired and hungry. She’ll have to take care of car payments, medical bills for her children, and student debts, not to mention saving for retirement. Digital work makes the body of the worker invisible but no less real or expendable.

It strikes me that what we are talking about here is the epistemic fallacy: taking what we know to exhaust what is. The mediation involved in digital labour impedes or entirely prevents knowledge of the material circumstances of the worker. The disaggregation and workflows facilitated by data infrastructures similarly obscure knowledge of the many workers whose efforts combine, in enormously complex ways, to produce discernible outcomes. The political economy and socio-technical infrastructure of digital labour is certainly complex, but it’s nonetheless useful to recognise the underlying epistemological issue at work here.

From The Revenge of the Monsters of Educational Technology, by Audrey Watters, loc 1187:

Many of us in education technology talk about this being a moment of great abundance—information abundance—thanks to digital technologies. But I think we are actually/also at a moment of great austerity. And when we talk about the future of education, we should question if we are serving a world of abundance or if we are serving a world of austerity. I believe that automation and algorithms, these utterly fundamental features of much of ed-tech, do serve austerity. And it isn’t simply that “robot tutors” (or robot keynote speakers) are coming to take our jobs; it’s that they could limit the possibilities for, the necessities of care and curiosity.

Understanding this relationship between austerity and abundance strikes me as a crucial question of political theory, one which we evade if we reduce the former to the latter or vice versa, seeing abundance as negating austerity (as Tyler Cowen does, for instance) or austerity as negating abundance (by robbing it of its social significance as a cultural change).

In his Uberworked and Underpaid, Trebor Scholz draws out an important parallel between the platform capitalism of YouTube and the near universally praised Wikipedia:

Unsurprisingly, YouTube hires countless consultants to better understand how to trigger the participation of the crowd. They wonder how they can get unpaid producers to create value. But equally, on the not-for-profit site, Wikipedia is asking how they can draw in more female editors, for instance.

Both involve an orientation to their users which sees them as objects of management, even if we might see the ends to which they are being managed in very different terms. This gives the lie to what Nick Couldry describes as the ‘myth of us’: the imaginary of platform capitalism which sees it as facilitating the free expression of a natural sociability which older socio-technical systems had constrained.

There’s an interesting passage in Uberworked and Underpaid, by Trebor Scholz, in which he discusses the contrasting experience of Amazon Mechanical Turk by users and workers. From loc 719:

While AMT is profiting robustly, it has – following the observations of several workers – not made significant updates to its user interfaces since its inception, and the operational staff appears to be overwhelmed and burned out. Turkers have written and shared various browser scripts to help themselves solve specific problems. While this is a wonderful example of mutual aid among AMT workers, it is also yet another instance of how the invisible labor of Turkers remains uncompensated. While people are powering the system, MTurk is meant to feel like a machine to its end-users: humans are seamlessly embedded in the algorithm. AMT’s clients are quick to forget that it is human beings and not algorithms that are toiling for them – people with very real human needs and desires.

It’s easy to slip into characterising platforms in terms of our familiar experiences of them as end-users. This is an important reminder that their user-friendly character is a contingent expression of the interests the corporation has in maximising user engagement, rather than anything intrinsic to the technology of the platform itself.

This is important for analytical reasons, but it’s also a crucial prop to the ideology of platform capitalism, sustaining an idea of platforms as user-friendly spaces which mediate interactions determined by external factors, as opposed to deeply rule-governed systems whose rules are determined by commercial imperatives. From loc 735:

Mechanical Turk starts to look even less positive when considering that in the case of labor conflicts, Bezos’s company remains strictly hands-off, insisting that AMT is merely providing a technical system. Why would they have anything to do with the labor conflicts occurring on the platform? This would be like Apple owning the factories in Shenzhen where its iPhones are assembled, but then rejecting any responsibility for the brutal work regimes and suicides of the workers in these factories because Foxconn controls daily operations.

Co-sponsored by the Pacific ICTD Collaborative, the School of Communications (University of Hawaii at Manoa), and the Institute for Information Policy (Penn State University)

*Abstracts due: February 10, 2017*

*CALL FOR PAPERS*

A growing number of ordinary objects are being redesigned to include digital sensors, computing power, and communication capabilities – and new objects, and processes, are becoming part of the Internet. This emerging Internet of Things (IoT) ecosystem – networks of physical objects embedded with the ability to sense, and sometimes act upon, their environment, as well as related communication, applications, and data analysis – enables data to be collected from billions of everyday objects. The emerging datasphere made possible by these developments offers immense potential to serve the public good by fostering government transparency, energy conservation, participatory governance, and substantial advances in medical research and care. On the other hand, a growing body of research addresses emerging privacy and civil liberties concerns related to big data, including unjust discrimination and unequal access to data and the tools needed to make use of it.

For example, big data analytics may reveal patterns that were previously not detectable. Data about a variety of daily tasks that seem trivial is increasingly being federated and used to reveal associations or behaviors, and these analyses and the decisions made based on them pose potential harms to individuals or groups. Many transactions that seemed innocuous can now be used to discriminate – one’s movement throughout the day, items purchased at the store, television programs watched, “friends” added or looked at on social networks, or individuals communicated with or who were in close proximity to the subject at various times, can all be used to make judgements that affect an individual and his or her life chances. With the advent of artificial intelligence and machine learning, we are increasingly moving to a world where many decisions around us are shaped by these calculations rather than traditional human judgement. For example, sensitive personal information or behaviors (e.g., political or health-related) may be used to discriminate when individuals seek housing, immigration eligibility, medical care, education, bank loans or other financial services, insurance, or employment. At the same time, individuals, groups, or regions may also be disadvantaged due to a lack of access to data (or related skills and tools) to make use of big data in ways that benefit their lives and communities.

This preconference session seeks to advance understanding of digital inequalities and discrimination related to big data and big data analytics. *Papers between 5,000-8,000 words and position papers between 1,000-2,000 words are welcomed.*

*TOPICS OF INTEREST*

We welcome scholarly and applied research on, but not limited to, the following:

• Big data brokers and sale of personal data (is privacy a commodity or a right?)

• International norms and standards for big data.

• Policy/legal analysis related to big data and the preconference theme (e.g., standards of liability for injury and defective work products (algorithms/burden of proof), the challenge of Notice and Consent, liability for bad or false or slanted or insufficient data collection, government regimes for supervision of big data policies).

Papers may include empirical research as well as policy analyses, new methodological approaches, or position papers addressing the preconference theme. Submissions by graduate students working in this area are welcomed.

*The costs of the workshop are heavily subsidized by the participating Institutes, to keep fees for participants at a nominal level.*

*IMPORTANT DATES*

*Abstracts due*: February 10, 2017

*Notifications to submitters*: February 27, 2017

*Full papers due*: May 12, 2017

*SUBMISSION GUIDELINES*

Abstracts of up to 500 words and a short bio of the author(s) should be emailed to pictdc@hawaii.edu by February 10, 2017. Please include “Digital Inequalities ICA 2017” in the subject line.

Full papers accepted for presentation at the preconference will, with the consent of the authors, be submitted to the Journal of Information Policy (http://www.psupress.org/Journals/jnls_JIP.html/) for consideration for a Special Issue curated by guest editors from the field. The papers will be blind peer-reviewed, to assure their academic value to both authors (for academic credit) and readers.

In a paper I’m writing for the first volume of the next Centre for Social Ontology project, I’m offering an analysis of what I call the evisceration of the human. I understand this as an intellectual project which seeks to get beyond self-understanding, hollowing out the phenomenological froth which characterises the interpretative human and getting to the underlying behavioural reality beneath it. It’s a project which, as Mark Andrejevic puts it in Infoglut, seeks to “sidestep self-understanding and self-representation to get at these recalcitrant minds directly” (p. 86). Interiority is reduced to empirical proxies, proliferating ad hoc hypotheses which explain away the apparent reality of the first-person perspective and reduce it to measurable and testable behavioural factors.

This operational abstraction constitutes a kind of ‘hollowing out’ of the human, seeking to reduce the category to its underlying behavioural reality rather than trying to cope with it in its bewildering wholeness. The instruments change, the precise formulation of the ad hocery changes but the underlying direction of explanatory travel remains the same. That at least is my hunch, though there’s a huge amount of work in the history of ideas which I’d have to undertake to justify it properly.

My focus however is on the present enthusiasm for eviscerating the human we see associated with digital technology and digital data. I really like this formulation from Audrey Watters, on loc 1245 of her The Monsters of Educational Technology:

We are now creating data at an unprecedented scale, with unprecedented velocity and increasing complexity. The temptation is to believe that if we can just collect all the data from our students – all their clicks – run it through an algorithm, do a little pattern-matching, and we’ll solve everything, we’ll unlock the secrets of the human brain, we’ll unlock the potential of each child.

The allure of our new instruments plays a crucial role in this iteration of the evisceration project. We believe that if only we were to collect enough data, the inner truth would be revealed. The secrets are there, waiting to be revealed, if only we can mine down into the human with enough precision and accuracy.

In John Thompson’s Merchants of Culture, there’s an interesting remark about the structural position of first time authors which I think has wider purchase. From pg 200:

Ironically, in a world preoccupied by numbers, the author with no track is in some ways in a strong position, considerably stronger than the author who has published one or two books with modest success and muted acclaim, simply because there are no hard data to constrain the imagination, no disappointing sales figures to dampen hopes and temper expectations. The absence of sales figures sets the imagination free. The first-time author is the true tabula rasa of trade publishing, because his or her creation is the book for which it is still possible to imagine anything and everything.

A world where metrics are ubiquitous is a world where imagination has died. When everyone has a track record, the space to imagine someone’s future as radically different from their past collapses.

To talk of ‘Pikettyville’ is then to conjure up an image of an urban system that has become hardwired to adopting, channelling and inviting excesses of social and economic capital in search of a space in which the rich not only find safe haven but are also privileged by the kind of property and income tax regimes and wider economic climate that allows them to thrive on their capital investments, while the wider city experiences some of the most challenging economic conditions since the early 20th century (Atkinson et al., 2016b).

In his remarkably prescient Listen Liberal, Thomas Frank describes the rapid capture of the Democratic Party by the professional class which took place during those decades when economic transition left them ascendant within the country as a whole. This was originally a predominance of financiers within the party but, with a transition marked by the defection of finance to Romney in the 2012 election, it has more recently been a matter of Silicon Valley.

As a striking example of this, on loc 2742 he describes the innovation mania sweeping a city like Boston:

Back in Boston, meanwhile, there is meaning and exciting purpose wherever you look. When I visited, in the spring of 2015, I found a city in the grip of a collective mania, an enthusiasm for innovation that I can only compare to a religious revival, to the kind of crowd-passion that would periodically sweep through New England back in the days when the purpose of Harvard was to produce clergymen, not startups. The frenzy manifests itself in countless ways. The last mayor of Boston was mourned on his passing as a man who “believed in innovation”; who “brought innovation to Boston.” The state’s Innovation Institute issues annual reports on the “Massachusetts Innovation Economy”; as innovation economies go, they brag, this one is “the largest in the U.S. when measured as a percent of employment.” And of course there are publications that cover this thrumming beehive of novelty: “BostInno,” a startup website dedicated to boosting startups, and “Beta Boston,” which is a project of the more established but still super-enthusiastic Boston Globe.

Meanwhile those outside these ‘innovation hubs’ struggle across the state. The self-confident creative class march ever onwards, supported by municipal and state governments for whom subsidising innovation is axiomatic, while inequality soars in a state ranked amongst the most unequal in the United States on common measures. It’s in this schism that we can see what Harris Gruman describes as a “liberalism of the rich” (loc 2928).

If we see this ‘innovation liberalism’ in terms of its class politics, the growing revolving door between Silicon Valley and government becomes much more than a matter of curiosity. As he describes on loc 2918-2934:

By that time, the place once filled by finance in the Democratic imagination had begun giving way to Silicon Valley, a different “creative-class” industry with billions to give in campaign contributions. Changes in the administration’s personnel paralleled the money story: at the beginning of the Obama years, the government’s revolving doors had all connected to Wall Street; within a few years, the people spinning them were either coming from or heading toward the West Coast. In 2014, David Plouffe, the architect of Obama’s inspiring first presidential campaign, began to work his political magic for Uber. Jay Carney, the president’s former press secretary, hired on at Amazon the following year. Larry Summers, for his part, became an adviser for an outfit called OpenGov. Back in Washington, meanwhile, the president established a special federal unit that used Silicon Valley techniques and personnel to revolutionize the government’s web presence; starstruck tech journalists call it “Obama’s stealth startup.”

The whole tenth chapter of Listen, Liberal explores this issue and I can’t recommend it highly enough. I’m increasingly convinced that we can’t understand the failings of the contemporary Democratic party without an adequate account of the rise of digital elites within it, as the latest turn in a long-standing process of capture by professionals. On loc 3184 he describes how talk of ‘innovation’ serves to prop up this accelerating inequality:

Technological innovation is not the reason all this is happening, just as the atomic bomb was not the cause of World War II: it is the latest weapon in an age-old war. Technological innovation is not what is hammering down working people’s share of what the country earns; technological innovation is the excuse for this development. Inno is a fable that persuades us to accept economic arrangements we would otherwise regard as unpleasant or intolerable—that convinces us that the very particular configuration of economic power we inhabit is in fact a neutral matter of science, of nature, of the way God wants things to be. Every time we describe the economy as an “ecosystem” we accept this point of view. Every time we write off the situation of workers as a matter of unalterable “reality” we resign ourselves to it.

Not claiming to be a victim, accommodating the downside of loose regulations out of a loyalty to free enterprise—this was a tacit form of heroism, hidden to incurious liberals. Sometimes you had to endure bad news, Janice felt, for a higher good, such as jobs in oil. I was discovering three distinct expressions of this endurance self in different people around Lake Charles—the Team Loyalist, the Worshipper, and the Cowboy, as I came to see them. Each kind of person expresses the value of endurance and expresses a capacity for it. Each attaches an aspect of self to this heroism. The Team Loyalist accomplishes a team goal, supporting the Republican Party. The Worshipper sacrifices a strong wish. The Cowboy affirms a fearless self. Janice was a Team Loyalist.

A really disturbing extract from Arlie Hochschild’s new book, Strangers In Their Own Land. On loc 1445 she shares the profile of the “least resistant personality” offered by a consultancy firm in 1984, hired to advise on locating waste-to-energy plants in areas likely to provoke little resistance from the local community:

There should be a catchy phrase for this phenomenon. It’s important to understand the phenomenon in its own terms, but contrasting emphases on each pole tend to divert scholarly debates into tedious dichotomies that obscure the underlying reality. From loc 3411 of The Data Revolution by Rob Kitchin:

Often seemingly opposing outcomes are bound together so that people can be both liberated and coerced simultaneously – they gain personal benefit at the same time as they become enmeshed in a system that seeks to gain from their participation. In Althusser’s (1971) terms, such an arrangement works through interpellation, ensnaring people in its logic through persuasion and incentives. For example, supermarket loyalty cards provide customers with savings at the same time as they work to produce store loyalty and provide a rich seam of data that are used to try and sell more goods to those customers, thus increasing profits. Similarly, the price of being more secure from terrorist attacks is invasive surveillance of all members of society; citizens gain safety at the price of privacy.

A few months ago, I was surprised to see an advert for a Christian dating website on the tube. I just discovered, reading Arlie Hochschild’s The Outsourced Self, quite how widespread this is. From pg 38:

Evidence suggests these platforms do not create the impulse in question, but they must surely increase the extent to which it is acted upon, by normalising assortativity and making it much easier to achieve in practice.

Tom Brock and I are currently working on a paper in which we analyse the discourse of ‘intelligence’ in terms of the individualisation of structural advantage: a whole range of factors are wrapped up in the descriptor of someone as ‘intelligent’, which explains a complex outcome in terms of a somewhat mysterious and inevitably overloaded personal characteristic.

Reading Matthew Desmond’s superb Evicted, I was struck by the possibility of reversing the analysis. On loc 3108, he describes the extremely difficult circumstances one of his research participants faces:

Before she was evicted, Larraine had $164 left over after paying the rent. She could have put some of that away, shunning cable and Walmart. If Larraine somehow managed to save $50 a month, nearly one-third of her after-rent income, by the end of the year she would have $600 to show for it—enough to cover a single month’s rent. And that would have come at considerable sacrifice, since she would sometimes have had to forgo things like hot water and clothes. Larraine could have at least saved what she spent on cable. But to an older woman who lived in a trailer park isolated from the rest of the city, who had no car, who didn’t know how to use the Internet, who only sometimes had a phone, who no longer worked, and who sometimes was seized with fibromyalgia attacks and cluster migraines—cable was a valued friend.

But as he puts it, “People like Larraine lived with so many compounded limitations that it was difficult to imagine the amount of good behavior or self-control that would allow them to lift themselves out of poverty.” Much as the fortune of someone like Trump is explained, not least of all by themselves, in terms of their intrinsic talent, we see people like Larraine condemned for failing to exercise an imputed latent power that would be near magical in its presumed capacity to resolve the difficulties of her situation.

Traditional Tea Party supporters wanted to cut both the practice of cutting in line and government rewards for doing so. Followers of Donald Trump, on the other hand, wanted to keep government benefits and remove shame from the act of receiving them – but restrict those benefits, implicitly, to native-born Americans, preferably white.