From his perch in Silicon Valley, cub economist Marc Andreessen offers a brilliant new argument in favor of income inequality:

You see, it’s ok to give raises to the wealthy, because the wealthy don’t produce the “things” that “lower-income consumers” need to buy. But you shouldn’t increase the wages of lower-income workers involved in the production of “things,” since they’re going to spend most of their money on those “things.” In other words: Pay the poor less, and they’ll feel richer. Sweet!

Machines that think think like machines. That fact may disappoint those who look forward, with dread or longing, to a robot uprising. For most of us, it is reassuring. Our thinking machines aren’t about to leap beyond us intellectually, much less turn us into their servants or pets. They’re going to continue to do the bidding of their human programmers.

Much of the power of artificial intelligence stems from its very mindlessness. Immune to the vagaries and biases that attend conscious thought, computers can perform their lightning-quick calculations without distraction or fatigue, doubt or emotion. The coldness of their thinking complements the heat of our own.

Where things get sticky is when we start looking to computers to perform not as our aids but as our replacements. That’s what’s happening now, and quickly. Thanks to advances in artificial-intelligence routines, today’s thinking machines can sense their surroundings, learn from experience, and make decisions autonomously, often at a speed and with a precision that are beyond our own ability to comprehend, much less match. When allowed to act on their own in a complex world, whether embodied as robots or simply outputting algorithmically derived judgments, mindless machines carry enormous risks along with their enormous powers. Unable to question their own actions or appreciate the consequences of their programming — unable to understand the context in which they operate — they can wreak havoc, either as a result of flaws in their programming or through the deliberate aims of their programmers.

We got a preview of the dangers of autonomous software on the morning of August 1, 2012, when Wall Street’s biggest trading outfit, Knight Capital, switched on a new, automated program for buying and selling shares. The software had a bug hidden in its code, and it immediately flooded exchanges with irrational orders. Forty-five minutes passed before Knight’s programmers were able to diagnose and fix the problem. Forty-five minutes isn’t long in human time, but it’s an eternity in computer time. Oblivious to its errors, the software made more than four million deals, racking up $7 billion in errant trades and nearly bankrupting the company. Yes, we know how to make machines think. What we don’t know is how to make them thoughtful.

All that was lost in the Knight fiasco was money. As software takes command of more economic, social, military, and personal processes, the costs of glitches, breakdowns, and unforeseen effects will only grow. Compounding the dangers is the invisibility of software code. As individuals and as a society, we increasingly depend on artificial-intelligence algorithms that we don’t understand. Their workings, and the motivations and intentions that shape their workings, are hidden from us. That creates an imbalance of power, and it leaves us open to clandestine surveillance and manipulation. Last year we got some hints about the ways that social networks conduct secret psychological tests on their members through the manipulation of information feeds. As computers become more adept at monitoring us and shaping what we see and do, the potential for abuse grows.

During the nineteenth century, society faced what the late historian James Beniger described as a “crisis of control.” The technologies for processing matter had outstripped the technologies for processing information, and people’s ability to monitor and regulate industrial and related processes had in turn broken down. The control crisis, which manifested itself in everything from train crashes to supply-and-demand imbalances to interruptions in the delivery of government services, was eventually resolved through the invention of systems for automated data processing, such as the punch-card tabulator that Herman Hollerith built for the U.S. Census Bureau. Information technology caught up with industrial technology, enabling people to bring back into focus a world that had gone blurry.

Today, we face another control crisis, though it’s the mirror image of the earlier one. What we’re now struggling to bring under control is the very thing that helped us reassert control at the start of the twentieth century: information technology. Our ability to gather and process data, to manipulate information in all its forms, has outstripped our ability to monitor and regulate data processing in a way that suits our societal and personal interests. Resolving this new control crisis will be one of the great challenges in the years ahead. The first step in meeting the challenge is to recognize that the risks of artificial intelligence don’t lie in some dystopian future. They are here now.

The UK edition of The Glass Cage comes out tomorrow, sporting a different cover and subtitle.

I’ve been gratified by the early reviews in the British press. Here are some choice bits:

Bill Thompson in BBC Focus magazine: “My copy of this excellent book is so thoroughly scribbled on that I’d simply never be able to get rid of it. I’ve circled lots of stuff I agree — or disagree — with, and added exclamation marks to insights that I want to explore more deeply. … The Glass Cage is infused with a humanist perspective that puts people and their needs at the centre of the argument around automation and the alienation created by many modern systems. … So put down your phone, take off your Google Glass and read this.”

Ian Critchley in The Sunday Times: “[Carr] recognizes that machines have freed us from the burden of many mundane tasks. His argument, though, is that the balance has tipped too far. Automation has taken over some of the activities that challenged us and strengthened our connection to the environment. … His book is a valuable corrective to the belief that technology will cure all ills, and a passionate plea to keep machines the servants of humans, not the other way around.”

Richard Waters in The Financial Times: “Nicholas Carr is not a technophobe. But in The Glass Cage he brings a much-needed humanistic perspective to the wider issues of automation. In an age of technological marvels, it is easy to forget the human. … How to achieve a more balanced view of progress when all of today’s incentives are geared towards an ever-faster cycle of invention and deployment of new technologies? There is no room for an answer in this wide-ranging book. As ever, though, Carr’s skill is in setting the debate running, not finding answers.”

John Preston in The Telegraph: “What exactly has automation done for us? Has it freed people from drudgery and made them happier? Or has it, as Nicholas Carr wonders in this elegantly persuasive book, had the opposite effect, transforming us into passive zombies, helplessly reliant on machines to tell us what to do? … [Carr is] no Luddite who thinks that we would all be better off living in holes in the ground and making our own woad. Instead, in his thoughtful, non-strident way, he’s simply pointing out that the cost of automation may be far higher than we have realised.”

Giles Whittell in The Times: “An important book that a lot of people won’t want to take seriously, but should. … [Carr] has a deep and valuable fear of techno-emasculation. It’s a fear based on evidence but also intuition.”

Carole Cadwalladr in The Observer: “Provocative … Who is it serving, this new technology, asks Carr. Us? Or the companies that make billions from it? Billions that have shown no evidence of trickling down. The question shouldn’t be ‘who cares?’ he says at one point. It should be: how far from the world do we want to retreat?”

Jasmine Gardner in the Evening Standard: “Carr argues, very convincingly, that automation is eroding our memory while simultaneously creating a complacency within us that will diminish our ability to gain new skills.”

The Bookseller: “An eye-opening exposé of how automation is altering our ability to solve problems, forge memories and acquire skills.”

“Who cares about science? This is music. We’re talking about how you feel.” So said Neil Young in introducing his high-resolution Pono player. Good luck, Neil, but I fear you’re working too far downstream. In the end it’s more about the recording than the playback. This is from Tom Whitwell’s article “Why Do All Records Sound the Same?”:

What makes working with Pro Tools really different from tape is that editing is absurdly easy. Most bands record to a click track, so the tempo is locked. If a guitarist plays a riff fifty times, it’s a trivial job to pick the best one and loop it for the duration of the verse.

“Musicians are inherently lazy,” says John [Leckie]. “If there’s an easier way of doing something than actually playing, they’ll do that.” A band might jam together for a bit, then spend hours or days choosing the best bits and pasting a track together. All music is adopting the methods of dance music, of arranging repetitive loops on a grid. With the structure of the song mapped out in coloured boxes on screen, there’s a huge temptation to fill in the gaps, add bits and generally clutter up the sound.

This is also why you no longer hear mistakes on records. Al Kooper’s shambolic Hammond organ playing on “Like A Rolling Stone” could never happen today because a diligent producer would discreetly shunt his chords back into step. Then there’s tuning. Until electronic guitar tuners appeared around 1980, the band would tune by ear to the studio piano. Everyone was slightly off, but everyone was listening to the pitch of everyone else’s instrument, so they were off together: musically, they were in tune.

I am obsessed by the ugliness of the self-driving concept car that Mercedes showed off at the Consumer Electronics Show in Las Vegas last week.

Its design seems to have been inspired by the head of the monster from the movie Alien. The car’s official name is the F015 Luxury in Motion. But I have nicknamed it The Silver Turd.

The ugliness is more than skin deep. The tiny, mirrored windows reflect the carmaker’s vision of the car as a sybaritic isolation chamber, a stately pleasure-dome that shields its occupants from the outside world. “The single most important luxury goods of the 21st century are private space and time,” said Mercedes CEO Dieter Zetsche in unveiling The Silver Turd. “Autonomously driving cars by Mercedes-Benz shall offer exactly that — with the F015 Luxury in Motion, this revolutionary concept of mobility becomes tangible for the first time.” In its post-revolutionary form, mobility becomes indistinguishable from stasis.

The womblike interior borrows the “glass cockpit” design of the modern commercial airliner. The cabin is wrapped in computer monitors, and it’s these screens, not the windows, that provide the primary “interface” for the occupants.

“A central idea of the concept,” explains a reporter from Motor Authority, “is a continuous exchange of information between vehicles, passengers and the outside world. For interaction within the vehicle, the passengers rely on six display screens located around the cabin. They also interact with the vehicle through gestures, eye-tracking or by touching the high-resolution screens. Outside, the F015 uses laser projection, radio signals and LED displays to communicate with its surroundings.” Responsibility for perceiving and acting in the world is transferred to the computers, freeing the passengers to enjoy a fully simulated experience.

I’m reminded of the Thom Gunn poem “On the Move,” these two lines in particular:

Much that is natural, to the will must yield.

Men manufacture both machine and soul.

“When the car reaches its destination,” the reporter tells us, “the seats then rotate towards the door for an easy exit for passengers.”

In “HAL, Mother, and Father,” an essay in the Paris Review, Jason Resnikoff remembers how his father, a computer scientist, reacted to the visions of the future presented in science fiction movies of the Sixties and Seventies. First came 2001:

My father was so buried in computers that when he saw 2001 he very much liked HAL, the spaceship Discovery’s villainous central computer. To this day, he enjoys quoting the part of the movie where HAL tries to explain away his own mistake — the supposed fault in the AE35 unit — by saying, “This kind of thing has cropped up before, and it has always been due to human error,” an excuse that more or less sums up my father’s considerably erudite understanding of computers. According to my father’s interpretation of the film, HAL wanted to become something more than he was. Becoming, always and ever becoming, is in my father’s eyes a worthy, nay, a noble way to go through life, always trying finally to be yourself, that most elusive of goals. The mission to Jupiter was a mission to take the next step in evolution, and HAL wanted to be the one to evolve. My father made this sound like a very reasonable desire, one that makes HAL the hero of the movie.

Put on the film now and you see the physical metaphor of evolution as Kubrick and Clarke imagined it: a perfectly symmetrical monolith, its facets immaculately smooth, the most ordered object imaginable. And there I see how my father was in the thick of it. He thought his work with computers was in a small way helping to liberate humanity, to allow people to think beyond what had until then been the limits of cognition. When those right angles appeared in the shape of a monolith, my father saw freedom, but I doubt he saw what else they stood for: that they were the same right angles of urban renewal displacing working-class neighborhoods and erecting in their ruins other kinds of monoliths, housing projects like prisons, expressways that gutted street life. Or the monolith of an office building somewhere in Thailand, where as a part of Operation Igloo White all the might of the United States military was mobilized in a truly insane attempt to automate “strategic” bombing in Vietnam via a dense network of computers, but only managed to drop bombs on random people. I very much doubt he realized how his work, the very systems of command and control he was helping to develop, would in the hands of the greedy and inhuman come to destroy the world he thought was on the verge of being born.

All revolutions exaggerate, and the digital revolution is no different. We are still in the middle of the great transformation, but it is not too early to begin to expose the exaggerations, and to sort out the continuities from the discontinuities. The burden of proof falls on the revolutionaries, and their success in the marketplace is not sufficient proof. Presumptions of obsolescence, which are often nothing more than the marketing techniques of corporate behemoths, need to be scrupulously examined. By now we are familiar enough with the magnitude of the changes in all the spheres of our existence to move beyond the futuristic rhapsodies that characterize much of the literature on the subject. We can no longer roll over and celebrate and shop. Every phone in every pocket contains a “picture of ourselves,” and we must ascertain what that picture is and whether we should wish to resist it.