Even as he sees great economic gains from new technology, he worries that machine learning may become so effective that it changes the work environment so dramatically that income disparity explodes even further. Where This Time is Different emphasized the disparity in knowledge between consumers/workers and corporations as a key problem, Khosla highlights why the longstanding Luddite fear of machines replacing humans may finally be coming true with this round of technology.

As Khosla notes, "In past economic history, each technology revolution—while replacing some jobs—has created more new types of job opportunities and productivity improvements, but this time could be different." Instead of augmenting human capabilities, this technology may be so superior in intelligence and knowledge to humans that it will relegate them not to higher levels of work but to lower levels where they "command lower pay."

While such technology may generate new wants and needs that create new kinds of jobs, Khosla questions defenders of the prospect of infinite job creation like Steve Rattner and Marc Andreessen on the grounds that more training for people may not necessarily create a place for them in servicing those needs-- or at least not in a position of greater skill and compensation. He goes so far as to quote Karl Marx on the idea that extrapolating the past is often a fallacy: “when the train of history hits a curve, the intellectuals fall off.”

He notes that he was himself an advocate of some degree of income disparity where incentives for education would give everyone an opportunity for social mobility. But if machine learning takes all the "best jobs" requiring the most skill, "an avenue of personal growth through education that previously has always been open for labor advancement may be closed":

It seems likely that the top 10 to 20-percent of any profession, be they computer programmers, civil engineers, musicians, athletes or artists, will continue to do well. What happens to the bottom 20-percent or even 80-percent, if that is the delineation? Will the bottom 80-percent be able to compete effectively against computer systems that are superior to human intelligence?

The outcome could be (as recent growth has delivered in the US) a situation where overall wealth increases but the income of the median family sees none of it.

Being a free market capitalist at heart, Khosla shies away from the obvious solution of greater economic redistribution of that wealth being generated by technology, but he offers little as an alternative to his bleak assessment of exploding inequality. But his essay does reflect a growing admission even among elite sectors that while new technology may be generating great wealth, older nostrums of a rising tide lifting all boats are threadbare and increasingly irrelevant.

Content Issues

Big data is shifting power over our idea of the social not just from the theorists and qualitative researchers to the data scientists, but also from academic social scientists to research labs controlled by corporations.

Wired’s Chris Anderson argued a few years ago that the social sciences are being replaced by big data and algorithmic techniques, where associations emerging from data have replaced the model of theoretical hypotheses tested deliberately by social scientists. Williamson cites such positions as "exaggerated obituaries for the social sciences,” but what is clear is that big data is challenging how, and by whom, the institutional practices associated with social science “knowledge production” are controlled.

New players, often associated with corporate research labs like Facebook’s Data Science Team, are using internally generated corporate data to promote their understanding of how people behave. Studying social media is seen as an especially rich source of behavioral data, where analyzing the use of Twitter and blogs to document everyday activities will ideally highlight massive population trends and social patterns.

However, the dark side of this shift is that such data analysis tends to focus on graphical visualization, creating in the words of researcher Lisa Gitelman a “database aesthetics” that amplifies the rhetorical function of data over more qualitative and deeper mathematical analysis as well.

As importantly, power over that knowledge has shifted increasingly to corporate R&D labs at technology companies and away from the academic researcher at universities. In fact, philosophy researcher Gary Hall sees this as a shift in the “epistemic environment” where we are abandoning the Romantic view of single authorship in favor of a corporate understanding of knowledge production.

Williamson does not emphasize this point, but this shift of power over knowledge production is also a shift from the (admittedly odd and sometimes egocentric) interests of academics to the more coldly monetary self-interest of corporate research labs in promoting data results that favor those corporations’ interests. And since much of the data is proprietary, companies can bury results they don’t like — much as pharmaceutical companies are known to bury research results that don’t favor their prescription drug products.

Williamson outlines a social science landscape where our public discussions of how society operates will increasingly be based on results cherry-picked from corporate data sources, with other researchers often unable to challenge those results without access to the original proprietary sources.

Content Issues

Ashlin Lee at Australia's Lifehacker site argues that issues like data retention and government surveillance are just the tip of the iceberg of worries the public should have over Big Data-- and nicely creates a linked bibliography of resources to further explore the problems he highlights:

In reality, people are leaving behind a whole "emerging ecosystem of digital traces, fragments and identifiers that are created as a part of digitally-mediated social interactions," what sociologists Mike Savage and Roger Burrows describe in The Coming Crisis of Empirical Sociology as a growing array of digital traces flowing from “transactional data”, born from the routine transactions and interactions of a modern society. Savage and Burrows also note that sociology's whole methodology of surveys and interviews is likely to be challenged as authoritatively representing the social once big data resources are available.

While many promise great social advances from big data, others are highlighting how the private control of these databases is birthing whole new issues of inequality:

Mark Andrejevic in Big Data, Big Questions | The Big Data Divide highlights "the asymmetric relationship between those who collect, store, and mine large quantities of data, and those whom data collection targets." This increases power imbalances, especially with most of the public unable to anticipate how such data will be later used to target and sort people for private corporate goals.

Frank Pasquale in his Black Box Society emphasizes how much social control we are delegating to data driven systems of algorithms and the negative social impacts of that delegation.

Our ignorance of these algorithms leaves us vulnerable to a whole range of discrimination and surveillance that we are usually not aware of and cannot fully anticipate.

Mark Burdon and Paul Harper lay out how workplace discrimination law is being upended by big data options that "challenge the very basis of our anti-discrimination and privacy laws," since "it is often impossible to connect discrimination to the inequalities that flow from data analytics... Establishing a link between a protected attribute and a big data discriminatory practice is likely to be evidentially insurmountable."

José van Dijck in Datafication, dataism and dataveillance highlights that we are entering a dangerous new world of allowing such discrimination by algorithms and machines often acting without human oversight.

As people collect more and more data on themselves and share it, it raises the question of how people are handling that data and what protections they need.

Kate Crawford in When Fitbit Is the Expert Witness details how self-tracking may increasingly play a role in litigation and people may find their personal devices "used against you in court...wearables data could just as easily be used by insurers to deny disability claims, or by prosecutors seeking a rich source of self-incriminating evidence."

Parmy Olson in Wearable Tech Is Plugging Into Health Insurance says such wearables will "play a bigger role in how individual-and-group health insurance costs are decided." The result could be a two-tier system where those with the best health tracking devices get access to lower premiums, but with the "risk that data could leak, and be used by marketers peddling diabetes medication or as extra fodder for insurers seeking to deny coverage."

Laying out all these issues, Lee worries that public focus on the most immediate fears around data, such as the metadata retention issue, could distract the public and policymakers from dealing with these longer-term social problems raised by all these writers and thinkers.

Content Issues

Harvard's Ben Edelman responded to the leak of the FTC Google staff report with an in-depth memo comparing available materials (particularly the staff memorandum's primary source quotations from internal Google emails) with the company's public statements on the same subjects. The result, as Edelman details, is illuminating:

Google's public statements typically emphasize a lofty focus on others' interests, such as giving users the most relevant results and paying publishers as much as possible. Yet internal Google documents reveal managers who are primarily focused on advancing the company's own interests, including through concealed tactics that contradict the company's public commitments.

The whole memo is worth reading but it's worth highlighting a few key points and evidence cited by Edelman.

Google had publicly claimed that its restrictions on third-party software created to facilitate moving data from Google's AdWords platform to competing platforms were for the benefit of advertisers themselves. However:

In internal email, Google director of product management Richard Holden affirmed that many advertisers “don't bother running campaigns on [Microsoft] or Yahoo because [of] the additional overhead needed to manage these other networks.” Holden indicated that removing AdWords API restrictions would pave the way to more advertisers using more ad platforms, which he called a “significant boost to … competitors” (id.). He further confirmed that the change would bring cost savings to advertisers... In a 2006 document not attributed to a specific author, the FTC quotes Google planning to “fight commoditization of search networks by enforcing AdWords API T&Cs” (footnote 546, citing GOOGKAMA-0000015528), indicating that AdWords API restrictions allowed Google to avoid competing on the merits.

When Google began favoring its own services in search results over competing "verticals," Google had publicly stated that this was for the benefit of users, to give them more relevant results. Yet internally, the change was seen as for Google's benefit:

Far from assessing what would most benefit users, Google staff examine the “threat” (footnote 102, citing GOOG-ITA-04-0004120-46) and “challenge” of “aggregators” which would cause “loss of query volumes” to competing sites and which also offer a “better advertiser proposition” through “cheaper, lower-risk” pricing (FTC staff report p. 20 and footnote 102)... Moreover, the staff report documents Google's willingness to worsen search results in order to advance the company's strategic interests. Google's John Hanke (then Vice President of Product Management for Geo) explained that “we want to win [in local] and we are willing to take some hits [i.e. trigger incorrectly sometimes]” (footnote 121)... Preferred placement of Google's specialized search services was deemed important to avoid “ced[ing] recent share gains to competitors” (footnote 121) or indeed essential: “most of us on geo [Google Local] think we won't win unless we can inject a lot more of local directly into google results” (footnote 121)

The FTC memorandum quotes Google co-founder Sergey Brin: “Our general philosophy with renewals has been to reduce TAC across the board” (footnote 517)... The FTC's investigation revealed the reason why Google was able to impose these payment reductions and fee increases: Google does not face effective competition for small to midsized publishers.

As Edelman argues, "Google's broadest claims of lofty motivations and Internet-wide benefits were always suspect, and Google's public statements fall further into question when compared with frank internal discussions."

Which makes the FTC Commissioners' burying of the staff report all the more a scandal.

Content Issues

A series of articles in the media across the political spectrum spotlighted the rising political power of Silicon Valley— and Google in particular — in the wake of revelations of the Federal Trade Commission ignoring most of the recommendations and analysis of why antitrust action was needed against Google.

Aside from the specifics of the FTC staff report being buried, the reports noted how top technology positions throughout the Obama administration are now staffed by former Google and other Silicon Valley personnel, including as we noted a couple of weeks ago, the particular role of Google’s executive chairman Eric Schmidt working directly on designing Obama’s election and reelection digital turnout machine.

The articles note not just Google’s economic power and political prominence but the way it has sought to shut down other legal investigations, from running “a ferocious campaign against European data protection laws” to using the arcana of Internet law to shut down an investigation of the company by the Mississippi Attorney General.

Probably the most interesting analysis comes from Danny Crichton at TechCrunch, who puts the Google story in the larger context of the rise of Silicon Valley as the central political and economic force in the global economy. Wall Street is no longer the place top business talent wants to work as Valley firms scoop up major names trekking to the West Coast, and as for Washington, D.C., “The revolving door between Goldman Sachs and the White House has now been supplanted by a much more technologically-sophisticated revolving door with Silicon Valley.”

Crichton worries that with this accumulation of financial and political power, incumbents in the technology field will use that power to squash any challengers. Instead of an ecosystem of lots of different companies offering different services on an open Internet, we are seeing the rise of “closed gardens and API access agreements, designed to keep you within the limited experience of our devices and software.”

Without greater political and economic accountability, including a real antitrust watchdog, we are likely to see Silicon Valley growth come at the expense of the rest of the economy and endanger its own dynamic innovation.

Monsanto is usually thought of (and sometimes reviled) for its sale of genetically modified seeds, but the company is increasingly becoming the key big data technology firm providing real-time data to farmers as they plant their fields. Since it purchased technology firm Climate Corp. in 2013 for $930 million, Monsanto has begun providing real-time data through a cell phone app to farmers cultivating 60 million out of the 161 million acres of U.S. farmland— meaning more than a third of U.S. farmland is under the guidance of Monsanto’s climate and cultivation data. Basic data is freely provided and farmers pay a premium for more specialized data and help.

There are 30 million agricultural fields in America and Monsanto has mapped them all with soil and climate data to a 10-meter by 10-meter resolution. It provides real-time temperature, weather, soil moisture, and other metrics to guide farmers on what to expect, even telling them the best days to work their fields and, with the premium version, how much water and fertilizer to use.
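A rough back-of-envelope sketch (the unit conversions are standard; the grid-cell framing is an assumption for illustration, not Monsanto's actual data model) gives a sense of the scale implied by mapping farmland at 10-meter resolution:

```python
# Back-of-envelope: how many 10m x 10m grid cells cover U.S. farmland?
# Acreage figures come from the article; the cell framing is illustrative.

ACRE_IN_M2 = 4046.86            # one acre in square meters
CELL_IN_M2 = 10 * 10            # the 10-meter resolution cited above

total_acres = 161_000_000       # total U.S. farmland acres
covered_acres = 60_000_000      # acres covered by Monsanto's app

total_cells = total_acres * ACRE_IN_M2 / CELL_IN_M2
covered_cells = covered_acres * ACRE_IN_M2 / CELL_IN_M2

print(f"{total_cells:.2e} cells to map all U.S. farmland")    # ~6.5 billion
print(f"{covered_cells:.2e} cells under the app's coverage")  # ~2.4 billion
print(f"{covered_acres / total_acres:.0%} of farmland covered")
```

The last line also confirms the article's "more than a third" figure: 60 of 161 million acres is about 37 percent.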

The company uses satellite data to show farmers trouble spots in their fields and, with auto-steering technologies in place, helps farmers drive equipment in straight rows. Monsanto is, in the words of Robb Fraley, now Monsanto’s chief technology officer, "modeling microclimatic conditions, so you can become predictive on not only which field, but which part of a field should someone be looking at.” Another product, provided by its acquisition 640 Labs, grabs geo-tagged data from tractors, combines and other equipment and allows farmers to store it for real-time and future analysis.

Of course, Monsanto also gains as its reach expands, since every new farmer using its Climate Corp. software means new information about its customers for Monsanto— detailing what products they use, what they are farming and how much money they are making. This puts Monsanto in the position to control more real-time data about farming in the nation than anybody else by far.

All of which is frightening, given Monsanto’s track record, but farmer organizations have already recognized the danger of losing control of such vital data to a single company and have organized to negotiate a set of principles on data sharing that could be a model for many other sectors. Led by the American Farm Bureau, Monsanto, the American Soybean Association, Beck’s Hybrids, Dow AgroSciences LLC, DuPont Pioneer, John Deere, the National Association of Wheat Growers, the National Corn Growers Association, the National Farmers Union, Raven Industries, and the USA Rice Federation all agreed last November to a set of principles to help protect farmer control of their own data as they negotiate with large agribusiness data companies like Monsanto.

The principles of the agreement, which could be a model for other groups and legislation, include:

Ownership: Farmers will retain ownership of all information generated in their farming operations.

Control: Any access to that data requires affirmative and explicit consent by the farmer.

Notice: Farmers must be notified if data is collected and how it will be used, with notice provided in a readily accessible format.

Transparency: Farmers must have clear understanding of which third parties are using the data and choices to limit that sharing. No contract may be changed without the farmer’s explicit agreement.

Portability: Farmers should be able to retrieve their data for storage and use in competing systems.

The effort will include farmer education initiatives, including developing easy-to-use transparency evaluation tools to clearly compare and contrast specific issues within data contracts and see how they align with these principles.

“The principles released today provide a measure of needed certainty to farmers regarding the protection of their data,” said American Farm Bureau President Bob Stallman at the time. “The privacy and security principles that underpin these emerging technologies, whether related to how data is gathered, protected and shared, must be transparent and secure.”

Potentially adding pressure to Monsanto to abide by the principles are emerging firms like Kansas-based Farmobile, which is developing a parallel data system that it highlights as “farmer-owned” which is designed to help farmers generate revenue from their own data by picking what information they are willing to disseminate to potential buyers from pesticide producers to commodity traders.

As Ad Age noted in an article describing the firm, "That notion of control and revenue streams for those creating the data may not have found a place in the world of consumer data yet, but it is becoming a reality for farmers."

For that reason, both the Monsanto-Farm Bureau set of privacy principles and the emerging models of farmers controlling monetization of their data are models that privacy advocates and those promoting economic justice around data use should be paying close attention to. While other groups may not have the organized power of the Farm Bureau to negotiate such deals for consumers, they lay out one model for regulators to require of all data collectors in the economy.

That the market will likely not protect privacy in the absence of regulation for most consumers is highlighted by the fact that Monsanto is rapidly moving its data-driven technology out to developing nations with little discussion of similar data rights for those consumers. Developing world farmers are prime customers for Monsanto’s cell phone app, since cell phones are often the primary technology they own.

Monsanto already is providing data services to 3 million smallholder farmers in India, who receive text message updates in a simplified version of the company’s Climate Basic app. Monsanto is explicit that it intends to use the data for marketing purposes to tailor sales of its genetically modified seeds to specific African fields— with little mention of those farmers controlling that data.

Still, the U.S. model of data control by American farmers negotiated with Monsanto should become a core touchstone for discussions of where big data policy should move in the future.

Content Issues

When the Wall Street Journal accidentally got its hands on an original staff report from the Federal Trade Commission antitrust investigation of Google, many in the media focused on where the FTC staff report on Google differed with the conclusions of the FTC Commissioners on whether Google’s favoring of its own properties in its search engine unfairly hurt rival “vertical” content providers.

Now, politically accountable officials can and should disagree with often overeager staff on the conclusions of their work, but it is problematic that the shockingly short original decision of the FTC ignored so much of the evidence supporting the staffers' argument for action against Google.

But the real scandal out of the revelations of the staff report is that the FTC Commissioners didn’t even address the staff arguments that Google was not just undermining competition in search engines, but that it so dominated search advertising that no rival could viably challenge it.

Originally I, like many others, thought the FTC Commissioners' decision was weak because they had ordered too narrow an investigation (just looking at search dominance versus competing verticals and ignoring the advertising side of the business model). But no, the staff report has extensive analysis of the advertising side and of how Google's exclusive contracts, higher monetization rates and other actions make it impossible for competitors to challenge it.

The FTC Commissioners Barely Acknowledged Google Was in the Advertising Business

Reading the original FTC decisions, you’d barely know Google made all its money from advertising customers, not from users of its search engine. And aside from condemning Google's practice of not allowing third-party software to be used with both its own AdWords site and rival sites, there was essentially no discussion by the Commissioners of anticompetitive actions by Google in the advertising side of its operations.

But then you read the staff report and a gusher of analysis of the search advertising marketplace emerges. Questions that antitrust scholars like myself had asked fruitlessly for lack of data were addressed by FTC research staff, yet the data was neither made public nor even discussed by the Commissioners.

In fact, this kind of research for the public is the least of what the FTC should be doing, even if it doesn’t take direct action, yet instead the Commissioners buried the report and its data.

That is the real outrage.

The original decision by the FTC focused almost solely on whether Google’s practices gave it a monopoly in search without saying how they impacted its search advertising dominance. In fact, it’s quite possible that its actions in search could benefit users of its search engine while systematically giving it such dominance that advertisers would have no choice but to advertise on Google. That would diminish competition and hurt advertisers — who are consumers of Google’s advertising product — and constitute clear anticompetitive behavior, as the FTC staff report argued, yet the Commissioners did not even address that key issue.

FTC Staff Found Google Had Locked Rivals Out of Syndicated Publishing

One area where the staff report provided extensive data previously unavailable was the extent to which Google locked down publishers across the Internet to use only its search and advertising products to the exclusion of its rivals. As the report detailed:

Google's exclusive AFS agreements effectively prohibit the use of non-Google search and search advertising within the sites and pages designated in the agreement. Some exclusive agreements cover all properties held by a publisher globally; other agreements provide for a property-by-property (or market-by-market) assignment.(p. 54)

Tellingly, even where agreements were not officially exclusive, the details “favoring” Google essentially excluded those rivals in any case. One example cited was eBay, “Google's largest search and search advertising syndication partner,” which accounted for over 27 percent of all syndicated U.S. queries answered by Google in 2011. eBay itself characterized its contract with Google “as equivalent to exclusivity.”(p. 58) Details of that contract included:

requirements that eBay show as many Google AdSense ads on each page as third-party advertisements

that no third-party advertisements appear above the Google AdSense advertisements

that Google AdSense advertisements cannot be interspersed with third-party advertisements, and

that Google AdSense advertisements cannot be less prominently displayed than third-party advertisements.

Overall, the FTC staff found that Google had exclusive or restrictive agreements with 12 of the top 20 companies (60 percent of that group), which in turn accounted for 94 percent of total search query volume.(p. 104)

When a company with an overwhelming dominance of search and search advertising is locking rivals out of so much of the marketplace, it seems that the FTC Commissioners should at least discuss why this isn’t a problem.

The Barriers to Entry for Google’s Rivals

The FTC staff report also detailed how unlikely it is that rival companies will challenge Google in search advertising, since they can’t get the scale or data to create a comparable product. Google already has 71% of all U.S. search, according to the staff report (p. 68), with only one significant competitor, the Bing/Yahoo! search alliance. But when they surveyed advertisers themselves, they found that essentially none were choosing Bing/Yahoo! over Google, while many were almost exclusively using Google. In a telling quote, the staff report noted:

A smaller publisher reported that, essentially, the only websites exclusively using Bing's search syndication service today are those that have been kicked out of Google's syndication network for violating its terms of service. While we know from other interviews that this comment is an exaggeration, it does capture the general tone of the comments we received about the relative quality of Microsoft's search and search advertising syndication product.(p. 56)

The problem for challengers is not only that they get far fewer queries than Google but that when one of their users clicks on an ad, advertisers make less money on each click. Amazon is the second-largest advertiser after eBay and has no similar exclusionary contract with Google. However, Amazon reported to the FTC that Bing's and Yahoo!'s advertisements monetize at about 46 percent of the rate of Google's advertisements. Because of this “large monetization gap,” Amazon told the FTC that it only used Bing and Yahoo! for a very small percentage of its total search syndication needs.(p. 60)
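The 46 percent monetization rate comes from the staff report; the query and click-rate figures below are hypothetical, chosen only to illustrate why a publisher routing syndicated queries to the lower-monetizing platform leaves money on the table:

```python
# Illustration of the monetization gap (only the 46% figure is from the
# FTC staff report; all other numbers are hypothetical).

google_rev_per_click = 1.00   # normalize Google's revenue per ad click to $1
rival_rev_per_click = 0.46    # Bing/Yahoo! monetize at ~46% of Google's rate

queries = 1_000_000           # hypothetical syndicated search queries
click_rate = 0.05             # hypothetical ad click-through rate

google_revenue = queries * click_rate * google_rev_per_click
rival_revenue = queries * click_rate * rival_rev_per_click

print(f"Routing to Google:      ${google_revenue:,.0f}")
print(f"Routing to Bing/Yahoo!: ${rival_revenue:,.0f}")
print(f"Foregone revenue per million queries: ${google_revenue - rival_revenue:,.0f}")
```

At these illustrative rates a publisher gives up more than half its syndication revenue by choosing the rival network, which is the economic logic behind Amazon's decision described above.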

More users also leads to an increased number of advertisers... as the number of advertisers that place ads - and the number of consumers who click on those ads - increases, the ad-serving algorithms improve their ability to predict what advertisements stimulate consumer "clicks.” This in turn increases monetization, which leads to the cyclical effect of greater participation by both advertisers and publishers. This effect, which has been termed the "virtuous cycle," represents a significant barrier for any potential entrant.(p. 76)

The FTC noted that this was not their analysis but came from Google itself: “Google documents are replete with references to the ‘virtuous cycle’ among users, advertisers, and publishers; and testimony from Google executives confirms the continuing viability of the ‘cycle.’” (p. 16)

Just the sheer scale and cost of buying servers, plus continual research costs, means that any rival must support those fixed costs with a far lower monetization rate and fewer users, creating almost no path to financial viability for a Google competitor.
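The "virtuous cycle" the staff report describes is a positive feedback loop, and a toy simulation (all parameters hypothetical, not drawn from the report) shows how even a modest initial share lead compounds when monetization advantage pulls publishers and users toward the larger platform:

```python
# Toy model of the "virtuous cycle" barrier to entry (all parameters
# hypothetical): more users -> better ad targeting -> higher monetization
# -> more publisher syndication -> more users.

def step(leader_share, rival_share, pull=0.05):
    """One round: the platform with more users monetizes better, which
    pulls a fraction of the rival's share toward it."""
    advantage = leader_share - rival_share
    shift = pull * advantage * rival_share
    return leader_share + shift, rival_share - shift

leader, rival = 0.55, 0.45    # a modest initial lead
for _ in range(20):
    leader, rival = step(leader, rival)

print(f"After 20 rounds: leader {leader:.0%}, rival {rival:.0%}")
```

The gap widens every round even though nothing about the rival's product changed, which is why the staff report treats the cycle as a structural barrier for any potential entrant rather than a verdict on product quality.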

The Legal Basis for Challenging Google’s Dominance of Search Advertising

The FTC staff addressed one other key issue that is most clearly in their area of competence, namely the legal basis for taking on Google’s dominance of search advertising. One question raised by opponents of such action is whether the proper market analysis should be of Google’s role in the overall advertising marketplace, within online advertising as a whole, or just within search advertising as a discrete market.

The FTC staff argued strongly that the proper analysis was of Google’s role within search advertising, based on the testimony of advertisers themselves of how different the role of search advertising was compared to other forms of advertising.

First, “advertisers believe that search advertising provides unprecedented precision in identifying potential customers, measurability, and the highest return on investment.”(p. 10). Functionally, “search ads help satisfy demand” while “brand advertising helps to create demand.” This latter distinction was lifted from none other than Google’s own Chief Economist, Hal Varian. Notably, search advertising is priced quite differently from other forms of advertising (i.e., advertisers pay only when a user clicks on a link), while advertisers pay for other online ads whenever a user sees them.(p. 70)

The FTC staff report quoted from Google’s own internal documents and testimony that there was “no viable substitute for search advertising. Both AdWords vice-president of product management Nick Fox and chief economist Hal Varian have previously stated that search advertising spend does not come at the expense of other advertising dollars.”(p. 72)

The FTC also noted that in other cases, the FTC and Department of Justice had themselves argued that search advertising was a distinct product market, including when Google benefitted from the distinction in arguing that its purchase of the display advertising company DoubleClick had no antitrust implications since it was in a distinct market from Google’s search advertising business.

The FTC Commissioners Should Not Bury Facts They Don’t Like But Instead Provide Counter Arguments

As noted above, FTC Commissioners have no obligation to agree with their staff, but they do have a political obligation to the public not to bury facts they disagree with. The public has the right to know the results of investigations conducted with public dollars and the right to have officials such as the FTC Commissioners respond publicly to the facts uncovered by such staff reports.

If they need to cover up a company name here or there to protect proprietary secrets, so be it (although that excuse is used far too much), but public officials like the FTC should be obligated to respond to the broad substance of staff reports. They can disagree but they should provide alternative facts or analyses to win their arguments, not bury facts and analyses that are too inconvenient and too challenging to respond to or refute.

Instead of apologizing to industry, as they did, for supposedly violating corporate confidentiality, they should be apologizing to the public for breaching our trust that they won’t bury such inconvenient data.


Siva Vaidhyanathan has a must-read analysis in the Hedgehog Review of how to think of “privacy” in the modern era of big data. He starts by invoking an older trope of the Panopticon, Michel Foucault’s model of how the rise of visible, threatening surveillance by government and corporate actors changed the nature of modern patterns of life. Prison towers tracked prisoners throughout a prison yard, foremen tracked workers with stopwatches, and governments induced obedience with cops on the beat.

Privacy in that older mode of existence was about hiding from that surveillance and protecting the freedom to act without retaliatory discipline. One feature of that older, panoptic form of surveillance was that it didn’t matter whether the employer or government was actually watching you; people either obeyed for fear of being caught or hid from surveillance in the margins of freedom that the surveillance left uncovered.

The Stasi police state is the prototypical Panopticon of obedience induced by fear of surveillance. However, in the modern, democratic world of data-driven, overlapping spheres of family, work, finance, and public life, we are now less fleeing specific, centralized surveillance than, in Vaidhyanathan’s words, trying to “manage our various reputations within and among various contexts.”

Visible surveillance gives way to endless data collection, but he argues that this rise of Big Data is driven less by specific technological opportunities than by economic imperatives that have increased the “incentives to target, trace, and sift” for profit opportunities.

Instead of the scary Panopticon, Vaidhyanathan argues for the emergence of an almost invisible “Cryptopticon,” where instead of being frightened by surveillance, we are induced to share as much of ourselves, our data, and our relationships as possible: “the Cryptopticon is not supposed to be intrusive or obvious. Its scale, its ubiquity, even its very existence, are supposed to go unnoticed.” We may know we are being tracked, but we cease to care.

Where the Panopticon depended on forcing people to act in ever more uniform ways to induce obedience along particular lines demanded by those in economic and political power, the goal of the Cryptopticon is to encourage people to voluntarily "sort themselves into “niches” that will enable more effective profiling and behavioral prediction."

The key here is that the old Panopticon depended on scaring people into uniform action, while the Cryptopticon has the data to exploit the myriad differences among people for greater power and profit. Individuals are given every incentive to share the maximum amount of personal data, and the market is designed to reward ever more sharing of it, even as society as a whole may be losing out — a point I argued recently in This Time is Different: How Big Data Has Left the Middle Class Behind.

With the market lacking any incentive to protect personal data, Vaidhyanathan argues that government regulation has to step in to “mandate a default ‘opt-in’ status that would require firms and governments to convince us we should be watched and tracked because there would be some clear reward.”

What Vaidhyanathan highlights is that “freedom” in classical thought was so defined by valorizing the ability to evade the Panopticon that public debate is ill-equipped to discuss freedom in terms of resisting Big Data’s exploitation of our own desire to assert our individuality.


A “big data revolution” is afoot in the social sciences. The increasing volume, variety, and velocity of data are irresistible raw material for inquiry. For its most optimistic exponents, the “datistic turn” renews social science by focusing inquiry on objective, verifiable, and measurable facts. Explicit models of behavior premised on (quasi-)experimental evidence may render once-soft fields as hard as biology, chemistry, or physics. On this account, economics has led the way, and the rest—ranging from philosophy to anthropology—must follow.

The datistic turn should revive interest in a neglected meta-field: the philosophy of social science. Lively debates raged in the mid-20th century between some forerunners of today’s big data devotees (the behaviorists) and interpretive social scientists committed to more narrative, normative, and holistic inquiry. The behaviorists’ tendency to treat mental processes as a “black box” is uncannily echoed in many current researchers’ uncritical acceptance of extant corporate data sets (and the limits imposed on their use) as objective records.

Given firms’ triple layers of real secrecy, legal secrecy, and obfuscation, journals should be wary of such research until it is truly reproducible. Moreover, given the importance of key firms themselves to understanding our society, their internal decision-making should be archived for eventual release (even if that release is decades in the future). Social scientists might consider going beyond analysis of extant data, and joining coalitions of activists, to assure a more expansive, comprehensible, and balanced set of “raw materials” for analysis, synthesis, and critique. In short, rather than solely watching society, social science must now commit to assuring the representativeness and relevance of what is watched. The only alternative to “future-forming” research is to let the most powerful pull the strings in comfortable obscurity, while scholars’ agendas are dictated by the information that, by happenstance or design, is readily available.

The same cautions should govern legal scholarship on the platform economy. Digital labor remains highly controversial. For example, Uber has very creatively orchestrated a series of studies and alliances purporting to demonstrate the value and importance of its services. However, in order to truly understand its social costs, as Brishen Rogers shows, we would need to have access to far more information, which is now proprietary and hidden. For example, who approved its fake ride requests to undermine its competitor, Lyft? What types of returns are investors being promised? How much of the firm’s success is due to real, productive innovation, and how much simply reflects regulatory arbitrage (akin to Amazon’s famous tax advantages over brick-and-mortar retailers)?

Similarly, the extraordinary controversy over the only partially available FTC staff report on the agency’s antitrust investigation of Google shows how even innovation policy itself can remain “in the dark” when it is politically convenient for it to remain so. I called for release of the report in 2013, only to be met with stony silence by the agency. Now, every other page of the report has been inadvertently released, and even this partial disclosure contains several damning allegations and pieces of evidence. Until the full report is released (along with some indication of the scope and nature of the controversy between the enforcement and economics divisions over the case), competition policy in the US remains opaque. Given what we have now, it’s hard to resist the conclusion that brute political calculations overrode the agency’s expert judgment.

When state and trade secrecy impose severe limits on the availability and use of sources, we must be very cautious about drawing conclusions too quickly about the nature of the digital economy. Leading firms have an agenda, which researchers can unwittingly advance when they focus inquiry on data which (executives have decided) are innocuous enough to be disclosed. A diverse coalition of watchdog groups, archivists, open data activists, and public interest attorneys is now working to assure a more balanced and representative set of “raw materials” for analysis. The critical and emancipatory potentials of social science and legal scholarship depend on the success of such efforts.


What if innovation is driving economic stagnation and inequality? That’s the question Charles Leadbeater analyzes in “The Whirlpool Economy” over at the UK Innovation Foundation’s Long+Short site. He makes key points about the current relationship between innovation and the economy, but partly misses what may make the new technology of big data a particularly toxic driver of current economic inequality. That stagnation haunts the U.S. and especially Europe is a common observation, but as Leadbeater notes, it’s “a very strange one, for it comes at a time when our lives are in the midst of incessant change, much of it brought on by what claim to be radical innovations.”

In past periods of stagnation, he argues, “the economy stagnated because there was little underlying dynamism, few new ideas and limited opportunities for entrepreneurship.” He nods in the direction of basic Keynesian analyses of the problem, such as Larry Summers’s diagnosis of a demand deficit driven by lower wages and austerity public policy. But Leadbeater argues that current innovation is itself a key driver of stagnation, since so much new innovation “is aimed at eliminating jobs and lowering costs”:

The economy is creating jobs but many of them are low-productivity, low-pay service jobs. The result is that many young people find themselves doing work for which they are over-qualified: a quarter of all ‘entry level’ jobs in London are filled by someone with a degree, quite possibly one they have paid for themselves with debts they may never pay off.

He argues that this problem of the automation of jobs and deskilling of middle class households needs to be addressed by policy that raises wages and kickstarts the virtuous cycle of higher incomes, higher demand and higher production. And we need less “disruptive” innovation and more innovation that "generates new jobs and augments existing ones; while addressing the spiralling costs of things like energy, health and social care that matter to median-income households."

Leadbeater’s argument is on point as far as it goes, but what he doesn’t fully address is why automation now is so different from past cycles of boom and bust. Analysts have been worrying about mass unemployment and the impoverishment of the working classes at least since the Luddites of the early 19th century saw new textile technology endangering skilled textile jobs. The rise of mass production and the assembly line was seen as replacing skilled craft workers with semi-skilled automatons working at the behest of the production-line machinery. Yes, robotics threatens to add to the cycle of displacement, but jobs not even imagined before were created in past cycles and will likely be created in the future; heck, IBM just announced that it intends to train 10,000 engineers in analyzing Twitter data as part of its business services division, a kind of job that didn’t even exist in the past.

But the kind of “big data” jobs IBM is developing as part of this cycle of job destruction and creation may highlight what IS different about this technological cycle and why new innovation is not being channeled into new income for middle-income families. In past rounds of technological job destruction, after the initial pain of unemployment and skills redeployment, workers would organize to demand a share of the new wealth created by the new machines, and consumers would benefit from lower prices. However, with new “disruptive” technology today designed to help corporate America profile workers and consumers to better increase corporate profits, the “wealth” being created is by its nature more of a zero-sum game. The industrial age created at least some degree of shared wealth: Henry Ford could argue that paying workers higher wages would in turn create demand for his cars. But subprime mortgage companies profiling consumers to sell them bad loans depend on the immiseration of working families as their profit source.

What we have seen over the last fifty years is that every recession disrupts and rearranges the economy, and when growth does return, less and less of the income generated goes to middle-class families. The following chart by Levy Institute economist Pavlina R. Tcherneva highlights how, where most of the increased wealth during economic recoveries immediately after World War II went to the bottom 90 percent of the population, each successive recovery has seen more and more going to the wealthiest 10 percent, to the point where the current recovery has seen the lower 90 percent actually losing income (an unprecedented event) even as the income of the wealthiest Americans has soared.

There are no doubt a number of factors contributing to this dynamic, but as we argued in our initial report at Data Justice, Taking on Big Data as an Economic Justice Issue, big data technology means that corporations know so much about every person that, in every hiring decision, every sale to a consumer, and every loan to a family, they can increasingly extract the maximum profit from each of those transactions. This big data dynamic seems like a key story in the current rise in economic inequality.

More and more companies scan social media and administer personality tests before hiring anyone. Not only does this hurt many individuals, it allows companies to use algorithms to systematically weed out people who might agitate on behalf of all employees for higher wages. With big data, the best way to defeat a drive to organize a union in a company’s workplace is to never hire people willing to stand up to their boss in the first place. At the same time, data analysis allows companies to decentralize their operations into far-flung locations around the globe and within the United States, and even to spin off most workers to be on their own as “independent contractors.” As a New York Times writer described:

Just as Uber is doing for taxis, new technologies have the potential to chop up a broad array of traditional jobs into discrete tasks that can be assigned to people just when they’re needed, with wages set by a dynamic measurement of supply and demand, and every worker’s performance constantly tracked, reviewed and subject to the sometimes harsh light of customer satisfaction.

The result is a data-driven pressure to push down wages with workers so fragmented that they have less and less ability to act collectively to demand higher pay.

On the consumer side, companies like Google and Facebook collect ever-increasing reams of personal data. Companies can then place ads or target consumers with offers based not just on what those consumers may be interested in, but on profiles and algorithms that estimate the maximum price each consumer is likely to pay. Offering different prices to different people for the same product or service — what economists call price discrimination — allows companies to maximize their profit on each transaction. Researchers Rosa-Branca Esteves and Joana Resende found that average prices under the traditional regime of mass advertising were lower than with targeted online advertising. Similarly, researcher Benjamin Reed Shiller found that when sellers know consumers’ willingness to pay, economic models show companies can use price discrimination to increase profits and raise prices overall, with many consumers paying twice as much as others for the same product.
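The basic arithmetic of price discrimination can be seen in a toy model. This is an illustrative sketch only, not drawn from the Esteves/Resende or Shiller papers: five hypothetical consumers with different maximum willingness to pay for the same product, compared under a single posted price versus perfectly personalized pricing.

```python
# Toy example: five consumers, each with a maximum price they
# would pay for the same product (hypothetical numbers).
willingness_to_pay = [20, 40, 60, 80, 100]

def profit_uniform(price, wtp):
    """Revenue at one posted price: only consumers whose
    willingness to pay meets the price actually buy."""
    return price * sum(1 for w in wtp if w >= price)

# Best single price the seller can post for everyone.
best_uniform = max(profit_uniform(p, willingness_to_pay)
                   for p in willingness_to_pay)

# Perfect price discrimination: with a data profile of each
# consumer, the seller charges each exactly their maximum.
discriminated = sum(willingness_to_pay)

print(best_uniform)   # 180 (a posted price of 60 sells to three buyers)
print(discriminated)  # 300
```

In this sketch the seller's profit rises from 180 to 300 once it knows each buyer's limit, and the two highest-value consumers pay 80 and 100 instead of the uniform 60, illustrating how profiling can raise prices for many consumers even while bringing lower-value buyers into the market.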

Subprime mortgages were the extreme example of this: predatory companies used algorithms to identify the most likely victims and offered them worse deals than people with the exact same credit ratings who simply knew enough to refuse the bad terms. Similarly, payday lenders and other exploitative financial companies use big data profiling to extract the most profit possible from economically struggling families.

What is different in this round of technology, then, is that it is not so much changing our physical processes of production, although that is happening as well, as changing the informational relationship between companies and the population.

Big data converts increasing information inequality into economic inequality. Taking on that big data power is therefore a key step in taking on the broader economic stagnation and inequality that has left the middle class behind in the current recovery.