Phenomenal World

MASS PRIVACY

The inadequacy of individual informed consent

This week, an Australian college student noticed how data from Strava, a fitness-tracking app, can be used to discover the locations of military bases. Many outlets covered the news and its implications, including Wired and the Guardian. In the New York Times, Zeynep Tufekci’s editorial was characteristically insightful:

“Data privacy is not like a consumer good, where you click ‘I accept’ and all is well. Data privacy is more like air quality or safe drinking water, a public good that cannot be effectively regulated by trusting in the wisdom of millions of individual choices. A more collective response is needed.”

Samson Esayas considers the collective nature of data privacy from a legal perspective:

"This article applies lessons from the concept of ‘emergent properties’ in systems thinking to data privacy law. This concept, rooted in the Aristotelian dictum ‘the whole is more than the sum of its parts’, where the ‘whole’ represents the ‘emergent property’, allows systems engineers to look beyond the properties of individual components of a system and understand the system as a single complex... Informed by the discussion about emergent property, the article calls for a holistic approach with enhanced responsibility for certain actors based on the totality of the processing activities and data aggregation practices."

A Twitter note on Strava from Sean Brooks: “So who at Strava was supposed to foresee this? Whose job was it to prevent this? Answer is almost certainly no one…I’ve always hated the ‘data is the new oil’ metaphor, but here it seems disturbingly accurate. And ironically, organizations with something to hide (military, IC, corporate R&D) have the resource curse. They want to benefit from the extraction, but they also have the most to lose.” Link.

We mentioned Glen Weyl last week as a noteworthy economist engaging with ethical issues (see Beatrice Cherrier’s Twitter thread). A speculative paper he co-wrote on "data as labor" imagines a world in which companies paid users for their data. (We find the framing "data as labor" slightly misleading—Weyl's larger point seems to be about data as assets—the product of labor.) Link. ht Chris Kanich for bringing the two threads together on Twitter.

This Economist article also covers Weyl's paper: “Still, the paper contains essential insights which should frame discussion of data’s role in the economy. One concerns the imbalance of power in the market for data. That stems partly from concentration among big internet firms. But it is also because, though data may be extremely valuable in aggregate, an individual’s personal data typically are not.” Link.

A potentially exciting aspect of the GDPR is the right to data portability: "A free portability of personal data from one controller to another can be a strong tool for data subjects in order to foster competition of digital services and interoperability of platforms and in order to enhance controllership of individuals on their own data." Link.

DISCONTINUOUS ADVANCE

A flurry of articles in December and January assesses the state of artificial intelligence

From Erik Brynjolfsson et al., optimism about productivity growth:

“Economic value lags technological advances.

“To be clear, we are optimistic about the ultimate productivity growth fueled by AI and complementary technologies. The real issue is that it takes time to implement changes in processes, skills and organizational structure to fully harness AI’s potential as a general-purpose technology (GPT). Previous GPTs include the steam engine, electricity, the internal combustion engine and computers.

“In other words, as important as specific applications of AI may be, the broader economic effects of AI, machine learning and associated new technologies stem from their characteristics as GPTs: They are pervasive, improved over time and able to spawn complementary innovations.”

PERVERSE CONSEQUENCES

Does banning the box increase hiring discrimination?

“Our results support the concern that BTB [Ban the Box] policies encourage racial discrimination: the black-white gap in callbacks grew dramatically at companies that removed the box after the policy went into effect. Before BTB, white applicants to employers with the box received 7% more callbacks than similar black applicants, but BTB increased this gap to 43%. We believe that the best interpretation of these results is that employers are relying on exaggerated impressions of real-world racial differences in felony conviction rates.”

Newly published by AMANDA AGAN and SONJA STARR, in line with their previous work on the same topic, available here.

These results bolster longstanding concerns about perverse consequences arising from ban the box legislation. (Similar studies include this one from 2006, and this one from 2016.) A 2008 paper provides a theoretical accompaniment to these worries, arguing that a privacy tradeoff is required to ensure race is not being used as a proxy for criminal history: “By increasing the availability of information about individuals, we can reduce decisionmakers’ reliance on information about groups.… reducing privacy protections will reduce the prevalence of statistical discrimination.” Link.

In a three-part series from 2016, Noah Zatz at On Labor took on the perverse consequences argument and its policy implications, levelling three broad criticisms: “it places blame in the wrong place, it relies upon the wrong definition of racial equality, and it ignores cumulative effects.” Link.

A 2017 study of ban the box that focused on the public sector—where anti-discrimination enforcement is more robust—found an increase in the probability of hiring for individuals with convictions and “no evidence of statistical discrimination against young low-skilled minority males.” Link.

California’s Fair Chance Act went into effect January 1, 2018, joining a growing list of fair hiring regulations in many other states and counties by extending ban the box reforms to the private sector. The law provides that employers can only conduct criminal background checks after a conditional offer of employment has been made. More on the bill can be found here.

Two posts on the California case, again by Zatz at On Labor, discuss several rich policy design questions raised by the “bright line” standards included in this legislation, and how they may interact with the prima facie standard of disparate impact discrimination: “Advocates fear, however, that bright lines would validate the exclusion of people on the wrong side of the line, despite individualized circumstances that vindicate them. But of course, the opposite could also be the case.” Link.

Tangentially related, Ben Casselman reports in the New York Times that a tightening labor market may be encouraging some employers to hire beyond the box—without legislative guidance. Link.

THE WAGE EFFECT

Higher minimum wages and the EITC may reduce recidivism

“Using administrative prison release records from nearly six million offenders released between 2000 and 2014, we use a difference-in-differences strategy to identify the effect of over two hundred state and federal minimum wage increases, as well as 21 state EITC programs, on recidivism. We find that the average minimum wage increase of 8% reduces the probability that men and women return to prison within 1 year by 2%. This implies that on average the wage effect, drawing at least some ex-offenders into the legal labor market, dominates any reduced employment in this population due to the minimum wage.”

Jennifer Doleac responds, “The results in this new paper…definitely surprised me—my prior was that raising the min wage would increase recidivism.” She explains: “Those coming out of prison are very likely to be on the margin of employment (last hired, first fired). Given some disemployment effects, marginal workers are the ones who are going to be hurt. Amanda and Mike find that the positive effects of pulling some (higher-skilled?) offenders into the legal labor market outweigh those negative effects.” Link to Doleac’s Twitter thread.

One of the co-authors, Makowsky, adds, “The EITC, dollar for household dollar, generates larger effects [than minimum wage increases], but is hampered by its contingency on dependent children. This is one more reason to remove the contingency, extending it to everyone.” Link to Makowsky’s Twitter thread.

Another piece sourced from Makowsky’s thread: “The Impact of Living-Wage Ordinances on Urban Crime.” “Using data on annual crime rates for large cities in the United States, we find that living-wage ordinances are associated with notable reductions in property-related crime and no discernable impact on nonproperty crimes.” Link.

Noah Smith rounds up recent studies on increasing the minimum wage, many of which come to contradictory conclusions. "At this point, anyone following the research debate will be tempted to throw up their hands. What can we learn from a bunch of contradictory studies, each with its own potential weaknesses and flaws?" Link.

Main finding: Living standards may be growing faster than GDP growth.
Nominating economist: Diane Coyle, University of Manchester
Specialization: Economic statistics and the digital economy
Why?: “This paper tries to formalize the intuition that there is a growing gap between the standard measure of GDP, capturing economic activity, and true economic welfare and to draw out some of the implications.”

Main finding: The average worker does not value an Uber-like ability to set their own schedule.
Nominating economist: Emily Oster, Brown University
Specialization: Health economics and research methodology
Why?: “This paper looks at a question increasingly important in the current labor market: How do people value flexible work arrangements? The authors have an incredibly neat approach, using actual worker hiring to generate causal estimates of how workers value various employment setups.”

INCOME SHARE AGREEMENTS

Purdue, BFF, the national conversation

“Long discussed in college policy and financing circles, income share agreements, or ISAs, are poised to become more mainstream.” That's from a September Wall Street Journal article. …

HOW TO HANDLE BAD CONTENT

Two articles illustrate the state of thought on moderating user-generated content

Ben Thompson of Stratechery rounds up recent news on content moderation on Twitter, Facebook, and YouTube and makes a recommendation:

“Taking political sides always sounds good to those who presume the platforms will adopt positions consistent with their own views; it turns out, though, that while most of us may agree that child exploitation is wrong, a great many other questions are unsettled.

“That is why I think the line is clearer than it might otherwise appear: these platform companies should actively seek out and remove content that is widely considered objectionable, and they should take a strict hands-off policy to everything that isn’t (while — and I’m looking at you, Twitter — making it much easier to avoid unwanted abuse from people you don’t want to hear from). Moreover, this approach should be accompanied by far more transparency than currently exists: YouTube, Facebook, and Twitter should make explicitly clear what sort of content they are actively policing, and what they are not; I know this is complicated, and policies will change, but that is fine — those changes can be transparent too.”

“… If we want to really make progress towards solving these issues we need to recognize there’s not one single type of bad behavior that the internet has empowered, but rather a few dimensions of them.”

Michael comments: The discussion of content moderation, and digital curation more broadly, conspicuously ignores the possibility of algorithmic methods for analyzing and disseminating (ethically or evidentiarily) valid information. Thompson and Social Capital default to traditional and cumbersome forms of outright censorship, rather than methods to “push” better content.

We'll be sharing more thoughts on this research area in future letters.

THE FUTURE OF UNDERGRADUATE EDUCATION

A new report argues that quality, not access, is the pivotal challenge for colleges and universities

From the American Academy of Arts and Sciences, a 112-page report with "practical and actionable recommendations to improve the undergraduate experience":

"Progress toward universal education has expanded most recently to colleges and universities. Today, almost 90 percent of high school graduates can expect to enroll in an undergraduate institution at some point during young adulthood and they are joined by millions of adults seeking to improve their lives. What was once a challenge of quantity in American undergraduate education, of enrolling as many students as possible, is now a challenge of quality—of making sure that all students receive the rigorous education they need to succeed, that they are able to complete the studies they begin, and that they can do this affordably, without mortgaging the very future they seek to improve."

Link to the full report. Co-authors include Gail Mellow, Sherry Lansing, Mitch Daniels, and Shirley Tilghman. ht Will, who highlights a few of the report's recommendations that stand out:

From page 40: "Both public and private colleges and universities as well as state policy-makers [should] work collaboratively to align learning programs and expectations across institutions and sectors, including implementing a transferable general education core, defined transfer pathway maps within popular disciplines, and transfer-focused advising systems that help students anticipate what it will take for them to transfer without losing momentum in their chosen field."

From page 65: "Many students, whether coming straight out of high school or adults returning later to college, face multiple social and personal challenges that can range from homelessness and food insecurity to childcare, psychological challenges, and even imprisonment. The best solutions can often emerge from building cooperation between a college and relevant social support agencies."

From page 72: "Experiment with and carefully assess alternatives for students to manage the financing of their college education. For example, income-share agreements allow college students to borrow from colleges or investors, which then receive a percentage of the student’s after-graduation income."

On a related note, see this 2016 paper from the Miller Center at the University of Virginia: "Although interest in the ISA as a concept has ebbed and flowed since Milton Friedman first proposed it in the 1950s, today it is experiencing a renaissance of sorts as new private sector partners and institutions look to make the ISA a feasible option for students. ISAs offer a novel way to inject private capital into higher education systems while striking a balance between consumer preferences and state needs for economic skill sets. The different ways ISAs can be structured make them highly suitable as potential solutions for many states’ education system financing problems." Link.

Meanwhile, Congress is working on the reauthorization of the Higher Education Act: "Much of the proposal that House Republicans released last week is controversial and likely won’t make it into the final law, but the plan provides an indication of Congressional Republicans’ priorities for the nation’s higher education system. Those priorities include limiting the federal government’s role in regulating colleges, capping graduate student borrowing, making it easier for schools to limit undergraduate borrowing — and overhauling the student loan repayment system. Many of those moves have the potential to create a larger role for private industry." Link.

ARTIFICIAL AGENCY AND EXPLANATION

The gray box of XAI

A recent longform piece in the New York Times examines the problem of explaining artificial intelligence. The stakes are high because of the European Union’s controversial and unclear “right-to-explanation” law, which takes effect in May 2018.

“Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.”

Full article by CLIFF KUANG here. This page provides a short overview of DARPA's XAI (Explainable Artificial Intelligence) program.

An interdisciplinary group addresses the problem:

"Contrary to popular wisdom of AI systems as indecipherable black boxes, we find that this level of explanation should often be technically feasible but may sometimes be practically onerous—there are certain aspects of explanation that may be simple for humans to provide but challenging for AI systems, and vice versa. As an interdisciplinary team of legal scholars, computer scientists, and cognitive scientists, we recommend that for the present, AI systems can and should be held to a similar standard of explanation as humans currently are; in the future we may wish to hold an AI to a different standard."

Full article by FINALE DOSHI-VELEZ et al. here. ht Margarita. For the layperson, the most interesting part of the article may be its general overview of societal norms around explanation and explanation in the law.

Michael comments: Human cognitive systems have generated similar questions in vastly different contexts. The problem of chick-sexing (see Part 3) gave rise to a mini-literature within epistemology.

From Michael S. Moore’s book Law and Society: Rethinking the Relationship: “A full explanation in terms of reasons for action requires two premises: the major premise, specifying the agent’s desires (goals, objectives, moral beliefs, purposes, aims, wants, etc.), and the minor premise, specifying the agent’s factual beliefs about the situation he is in and his ability to achieve, through some particular action, the object of his desires.” Link. ht Margarita

A Medium post with an illustrated summary of some XAI techniques. Link.

PREDICTIVE JUSTICE

How to build justice into algorithmic actuarial tools

Key notions of fairness contradict each other—something of an Arrow’s Theorem for criminal justice applications of machine learning.

"Recent discussion in the public sphere about algorithmic classification has involved tension between competing notions of what it means for a probabilistic classification to be fair to different groups. We formalize three fairness conditions that lie at the heart of these debates, and we prove that except in highly constrained special cases, there is no method that can satisfy these three conditions simultaneously. Moreover, even satisfying all three conditions approximately requires that the data lie in an approximate version of one of the constrained special cases identified by our theorem. These results suggest some of the ways in which key notions of fairness are incompatible with each other, and hence provide a framework for thinking about the trade-offs between them."

Full paper from JON KLEINBERG, SENDHIL MULLAINATHAN, and MANISH RAGHAVAN here. ht research fellow Sara, who recently presented on bias in humans, courts, and machine learning algorithms, and who was the source for all the papers in this section.
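The tension is easy to see in a toy example. The sketch below is ours, not from the paper, and all numbers are invented: a score that is perfectly calibrated within each of two groups can still produce maximally unequal error rates once the groups' base rates differ and a decision threshold is applied.

```python
# Toy illustration of the Kleinberg-Mullainathan-Raghavan tension:
# within-group calibration and cross-group error-rate balance can
# pull apart when base rates differ. All numbers are made up.

def rates(scores_and_labels, threshold=0.5):
    """Return (false positive rate, false negative rate) at a threshold."""
    fp = sum(1 for s, y in scores_and_labels if s >= threshold and y == 0)
    fn = sum(1 for s, y in scores_and_labels if s < threshold and y == 1)
    neg = sum(1 for _, y in scores_and_labels if y == 0)
    pos = sum(1 for _, y in scores_and_labels if y == 1)
    return fp / neg, fn / pos

# Group A: base rate 0.3. The score assigns everyone 0.3; since 30% of
# people scored 0.3 are in fact positive, the score is calibrated.
group_a = [(0.3, 1)] * 3 + [(0.3, 0)] * 7

# Group B: base rate 0.6; everyone scored 0.6 -> also calibrated.
group_b = [(0.6, 1)] * 6 + [(0.6, 0)] * 4

fpr_a, fnr_a = rates(group_a)  # (0.0, 1.0): nobody in A is flagged
fpr_b, fnr_b = rates(group_b)  # (1.0, 0.0): everyone in B is flagged
```

The theorem says this conflict is not an artifact of the crude example: outside of degenerate cases (perfect prediction or equal base rates), no score can be calibrated in both groups while also equalizing false positive and false negative rates across them.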

In a Twitter thread, ARVIND NARAYANAN describes the issue in more casual terms.

"Today in Fairness in Machine Learning class: a comparison of 21 (!) definitions of bias and fairness [...] In CS we're used to the idea that to make progress on a research problem as a community, we should first all agree on a definition. So 21 definitions feels like a sign of failure. Perhaps most of them are trivial variants? Surely there's one that's 'better' than the rest? The answer is no! Each defn (stat. parity, FPR balance, contextual fairness in RL...) captures something about our fairness intuitions."

Jay comments: Kleinberg et al. describe their result as choosing between conceptions of fairness. It’s not obvious, though, that this is the correct description. The criteria (calibration and balance) discussed aren’t really conceptions of fairness; rather, they’re (putative) tests of fairness. Particular questions about these tests aside, we might have a broader worry: if fairness is not an extensional property that depends upon, and only upon, the eventual judgments rendered by a predictive process, exclusive of the procedures that led to those judgments, then no extensional test will capture fairness, even if this notion is entirely unambiguous and determinate. It’s worth considering Nozick’s objection to “pattern theories” of justice for comparison, and (procedural) due process requirements in US law.