In a special invited lecture titled “The First Amendment in the Second Gilded Age,” published in the Buffalo Law Review in 2018, Yale Law School professor Jack M. Balkin draws parallels between the first Gilded Age of a century ago and our “new Gilded Age.” He notes that the first Gilded Age “produced huge fortunes, political corruption and vast inequalities of wealth, so much so that people became concerned that they would endanger American democracy” (digitalcommons.law.buffalo.edu/cgi/viewcontent.cgi?article=4716&context=buffalolawreview).

Balkin goes on to suggest that the Second Gilded Age “begins more or less, with the beginning of the digital revolution in the 1980s, but it really takes off in the early years of the commercial Internet in the 1990s, and it continues to the present day. It is characterized by the rise of social media and the development and implementation of algorithms, artificial intelligence, and robotics. For this reason I call our present era the Algorithmic Society.”

Not everyone benefits from this Algorithmic Society. As Massimo Ragnedda and Bruce Mutsvairo, editors of Digital Inclusion: An International Comparative Analysis (Lexington Books, 2018), note, “More and more services, resources, opportunities, knowledge and social relations are migrating into the digital realm” (p. vii). One consequence of so much going digital is that, across the globe, people “at the margins of the digital—often those with lower incomes, immigrants, less educated people, those living in rural areas, seniors or individuals with disabilities—are missing out on the potential benefits.” These are the people librarians care about. And what about the future of serendipity, the ability to stumble on information you never knew you needed or wanted?

UNREGULATED BIG DATA + AI = BIG TROUBLE

Generating data, by itself, isn’t a problem. However, with the rise of AI, we are seeing systems that can quickly link user activity or characteristics to individuals and use those links for purposes that may incriminate, discriminate against, or otherwise harm innocent people who are unaware that any such connections exist. As Privacy International has noted: “AI-driven consumer products and autonomous systems are frequently equipped with sensors that generate and collect vast amounts of data without the knowledge or consent of those in its proximity. AI methods are being used to identify people who wish to remain anonymous; to infer and generate sensitive information about people from non-sensitive data; to profile people based upon population-scale data; and to make consequential decisions using this data, some of which profoundly affect people’s lives” (privacyinternational.org/learning-topics/artificial-intelligence).

A 2019 Brookings Institution report, “How to Address New Privacy Issues Raised by Artificial Intelligence and Machine Learning” by Mark MacCarthy, explains that AI’s machine learning capability is making previously unimaginable connections possible, as it “increases the capacity to make these inferences. The patterns found by machine learning analysis of your online behavior disclose your political beliefs, religious affiliation, race, ethnicity, health conditions, gender and sexual orientation, even if you have never revealed this information to anyone online” (brookings.edu/blog/techtank/2019/04/01/how-to-address-new-privacy-issues-raised-by-artificial-intelligence-and-machine-learning).
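MacCarthy’s point about inference can be illustrated with a toy sketch. All of the data below is invented, and the naive Bayes-style counting here is far cruder than production systems, but it shows the mechanism: a model trained on users who did disclose a sensitive attribute can guess that attribute for a user who never disclosed it, purely from non-sensitive behavioral features.

```python
from collections import defaultdict

# Invented training data: each user's visited site categories
# (non-sensitive) paired with a sensitive attribute disclosed elsewhere.
training = [
    ({"fitness", "parenting", "recipes"}, "group_a"),
    ({"parenting", "recipes", "gardening"}, "group_a"),
    ({"gaming", "tech", "fitness"}, "group_b"),
    ({"gaming", "tech", "crypto"}, "group_b"),
]

def train(examples):
    """Count how often each feature co-occurs with each label."""
    counts = defaultdict(lambda: defaultdict(int))
    labels = defaultdict(int)
    for features, label in examples:
        labels[label] += 1
        for f in features:
            counts[label][f] += 1
    return counts, labels

def infer(counts, labels, features):
    """Score each label by prior times smoothed feature likelihoods."""
    best, best_score = None, float("-inf")
    total = sum(labels.values())
    for label, n in labels.items():
        score = n / total
        for f in features:
            score *= (counts[label][f] + 1) / (n + 2)  # Laplace smoothing
        if score > best_score:
            best, best_score = label, score
    return best

counts, labels = train(training)
# A new user never states the attribute, yet their browsing implies it.
print(infer(counts, labels, {"recipes", "parenting"}))  # → group_a
```

The unsettling part is that nothing in the new user’s input is sensitive on its own; the sensitivity emerges entirely from statistical correlation with other people’s data.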

THE UNIQUE NATURE OF AI

The World Intellectual Property Organization (WIPO) published a Technology Trends 2019 report on AI (wipo.int/edocs/pubdocs/en/wipo_pub_1055.pdf). The 185-page analysis found that “nearly 340,000 patent families and more than 1.6 million scientific papers related to AI were published from 1960 until early 2018. … The AI application fields most commonly mentioned in patent literature include telecommunications, transportation, and life and medical sciences, but almost all fields show a growth in patenting activity in recent years.”

A posting on Lavery Lawyers’ Lexology blog notes, “The value of these technologies relies mostly on the ability to protect the intellectual property related to these technologies, which may lie, in some cases, in the innovative approach of such technology, in the work performed by the AI system itself and in the data required to train the system” (lexology.com/library/detail.aspx?g=d2be2a38-7b1d-4d39-b894-a48410fbaf5c).

At the same time, the WIPO report continues, AI can be as prone to error as human decision-making:

AI poses unique challenges, as it informs and sometimes replaces human decision-making. AI guides decisions on loan eligibility, insurance coverage, medical procedures, and other significant issues. On the one hand, removing or reducing the elements of human decision-making, such as bias and errors, leads to more objective decisions. However, algorithms are not inherently more fair than human decision-makers. AI often depends on training data to arrive at a decision model. Such training can lead to biased results if the training data itself incorporates bias or prejudice, whether intentional or unintentional. Algorithms also depend on choices made by developers, for example, about which features should be included in decision-making. The inclusion or exclusion of certain features may lead to discriminatory results.

Intellectual property laws, patents, and other forms of ownership and secrecy increasingly underlie much of our web-based technologies and products. Huge collections of data are being amassed each day on virtually everyone around the world. At the same time, these databases and services are constantly at risk of being attacked or corrupted for profit or political reasons. These potential dangers are becoming common realities for companies, governments, and even libraries.

BIG DATA IS HERE

Beyond this, we have Google and other companies offering “free” enterprise software as the price of data collection. Examples include G Suite for Education and G Suite for Nonprofits. In a time of fiscal stringency, having access to key tools such as those offered by Google represents a major institutional savings. Having a mail service, calendaring, Hangouts, cloud storage, website hosting, and productivity suites for creating documents, presentation slides, forms, spreadsheets, and other options is a major convenience.

However, it comes at a cost. The “price” is that Google retains the right to capture information about our searching behavior and use of these systems (admittedly useful for product development), giving it a private window on the searching and other habits of millions, if not billions, of people. Through sign-ins, web searching, and other tracked behaviors, Google has amassed huge stores of data on individual behavior that feed the marketing and sales products it sells to other companies and uses itself. Tom Gilson, writing in Against the Grain on April 15, 2015, asked what we have been sold when we look at Google deals and privacy (against-the-grain.com/2015/04/atg-originals-google-deals-privacy-what-have-we-been-sold-part-1-of-2-parts).

23andMe and other personal genomics companies offer DNA testing to determine your genetic background. However, this data is also available, with a court warrant, to police agencies. Thus, the estimated 10 million clients in 76 countries, along with their close relatives now and in the future, can be identified from the genetic material. NBC reporter Maggie Fox observed that fewer than 10% of 23andMe users even read the contracts they sign when agreeing to the testing (nbcnews.com/health/health-news/what-you-re-giving-away-those-home-dna-tests-n824776). Genetic genealogy is now a standard method of crime scene analysis and has led to arrests in what had been cold cases; DNA captured from used napkins, discarded gum, and other surfaces has been used to identify or match potential perpetrators to a crime.