I’ve written a few posts now about the social and ethical implications of algorithmic governance (algocracy). Today, I want to take a slightly more general perspective on the same topic. To be precise, I want to do two things. First, I want to discuss the process of algorithm-construction and the two translation problems that are inherent to this process. Second, I want to consider the philosophical importance of this process.

Anyway, one thing that has bothered me about these past discussions is their relative lack of nuance when it comes to the different forms that algocratic systems could take. If we paint with too broad a brush, we may end up ignoring both the advantages and disadvantages of such systems. Cognisant of this danger, I have been trying to come up with a better way to taxonomise and categorise the different possible forms of algocracy.

Abby Martin explores the public distrust of the US financial system, discussing the controversies surrounding ‘High-Frequency Trading’, a practice that involves advanced computer algorithms that give buyers and sellers on Wall Street an advantage over the general public.

In ten years, how will the machines that run your daily existence respond when confronted with life-or-death decisions? Matthieu Cherubini at the Royal College of Art offers prototypes of Humanist, Protector, and Profit-Based moral parameters for self-driving cars:

Many car manufacturers are projecting that by 2025 most cars will operate on driverless systems. How can such systems be designed to accommodate the complexity of ethical and moral reasoning? Just like choosing the color of a car, ethics can become a commodified feature in autonomous vehicles that one can buy, change, and repurchase, depending on personal taste.
Three distinct algorithms have been created - each adhering to a specific ethical principle/behaviour set-up - and embedded into driverless virtual cars that are operating in a simulated environment, where they will be confronted with ethical dilemmas.
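The idea of ethics as a swappable, commodified module can be sketched in code. The following is a hypothetical illustration, not Cherubini's actual implementation: all the outcome names, risk numbers, and weightings are invented for the example. Each "ethic" is just a different scoring function over the same set of possible manoeuvres.

```python
# Hypothetical sketch: ethics as a swappable module in a simulated
# driverless car, loosely modelled on the Humanist / Protector /
# Profit-Based behaviour set-ups described above. All names and
# weightings here are invented for illustration.

from dataclasses import dataclass


@dataclass
class Outcome:
    """One possible manoeuvre and its projected consequences."""
    name: str
    passenger_risk: float    # 0.0 (safe) .. 1.0 (fatal)
    pedestrian_risk: float
    property_damage: float   # repair cost, arbitrary units


def humanist(o: Outcome) -> float:
    # Minimise total harm to people, whoever they are.
    return o.passenger_risk + o.pedestrian_risk


def protector(o: Outcome) -> float:
    # Weight harm to the car's own passengers more heavily.
    return 3 * o.passenger_risk + o.pedestrian_risk


def profit_based(o: Outcome) -> float:
    # Minimise expected cost; human harm enters only as a liability.
    return o.property_damage + 10 * (o.passenger_risk + o.pedestrian_risk)


def choose(outcomes, ethic):
    """Pick the manoeuvre with the lowest score under the chosen ethic."""
    return min(outcomes, key=ethic)


dilemma = [
    Outcome("swerve into barrier", passenger_risk=0.5,
            pedestrian_risk=0.0, property_damage=8.0),
    Outcome("brake in lane", passenger_risk=0.1,
            pedestrian_risk=0.6, property_damage=1.0),
]

for ethic in (humanist, protector, profit_based):
    print(ethic.__name__, "->", choose(dilemma, ethic).name)
```

With these (made-up) numbers the Humanist car swerves into the barrier to spare the pedestrian, while the Protector and Profit-Based cars brake in lane; changing a single weighting changes who gets hurt, which is exactly what makes "ethics as a feature" so unsettling.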

In brief, modern technology has made it possible for pretty much all of our movements, particularly those we make “online”, to be monitored, tracked, processed, and leveraged. We can do some of this leveraging ourselves, by tracking our behavior to improve our diets, increase our productivity and so forth. But, of course, governments and corporations can also take advantage of these data-tracking and processing technologies.

Data-mining [could create] a system of algorithmic regulation, one in which our decisions are “nudged” in particular directions by powerful data-processing algorithms. This is worrisome because the rational basis of these algorithms will not be transparent:

Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own.

Can all digitally-created music really just be thought of as humans manipulating algorithms? If so, why not get to the heart of things? A burgeoning, extremely nerdy subculture called algorave revolves around generating, altering, and combining electronic sound loops via on-the-spot coding, using languages such as SuperCollider, with the coding projected on a large screen. Could this be the worst new form of music, or the most honest? Wikipedia writes:

An algorave is an event where people dance to music generated from algorithms, often using live coding techniques. Algoraves can include a range of styles, including a complex form of minimal techno, and the genre has been described as a meeting point of hacker philosophy, geek culture, and clubbing.
The first self-proclaimed "algorave" was held as a warmup concert for the SuperCollider Symposium 2012. The first North American algorave took place in Hamilton, Ontario during the artcrawl of 9 August 2013.

A preview of how our society eventually crumbles – subtle sabotage by algorithms in everyday machines? The FontFeed on a mind-boggling discovery:

Last Wednesday German computer scientist David Kriesel made a bizarre discovery. After scanning a construction plan on a Xerox Workcentre and printing it, he noticed the plan suddenly contained incorrect numbers. The Xerox Workcentre somehow changed the numbers whilst scanning.

On his website Kriesel analyses what causes the problem in Xerox Workcentre 7535 and 7556 machines – a compression algorithm randomly replaces patches of pixel data in an almost unnoticeable way.

Apparently Xerox machines use JBIG2, an algorithm that creates a dictionary of image patches it considers similar. As long as the error generated by these patches is not too high, the machine reuses them instead of using the original image data.
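The failure mode is easy to reproduce in miniature. The sketch below is not Xerox's implementation or real JBIG2, just the pattern-matching idea it describes: keep a dictionary of previously seen binary patches, and reuse a stored patch whenever a new one is "close enough". With a lax error threshold, two different digits can collapse into one.

```python
# Minimal sketch of the pattern-matching idea behind JBIG2-style
# compression (not the actual codec): reuse a dictionary patch whenever
# it differs from the new patch by at most `threshold` pixels.

def hamming(a, b):
    """Number of differing pixels between two equal-sized binary patches."""
    return sum(x != y for x, y in zip(a, b))


def compress(patches, threshold):
    """Replace each patch by the first dictionary entry within `threshold`
    differing pixels; otherwise store it as a new entry. Returns the
    reconstructed patch list, as a decoder would see it."""
    dictionary, output = [], []
    for patch in patches:
        match = next((d for d in dictionary
                      if hamming(d, patch) <= threshold), None)
        if match is None:
            dictionary.append(patch)
            match = patch
        output.append(match)
    return output


# Two 3x3 glyphs that differ by a single pixel -- think "6" versus "8".
six   = (0, 1, 0,  1, 0, 0,  0, 1, 0)
eight = (0, 1, 0,  1, 1, 0,  0, 1, 0)

# With a lax threshold, the "8" is silently replaced by the stored "6":
lossy = compress([six, eight], threshold=1)
print(lossy[1] == six)  # True: the digit has been changed
```

This is why the corruption is "almost unnoticeable": the substituted patch looks plausible in isolation, so nothing flags the page as damaged, and only a reader who knows the correct numbers will spot the error.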

Why is this issue so crucial? First of all, these are widespread machines, commonly used in service centres and copy shops, and Xerox seemed to be unaware of the issue until David Kriesel notified them last Wednesday.

If you were expecting some kind of warning when computers finally get smarter than us, then think again.

There will be no soothing HAL 9000-type voice informing us that our human services are now surplus to requirements.

In reality, our electronic overlords are already taking control, and they are doing it in a far more subtle way than science fiction would have us believe.

Their weapon of choice – the algorithm.

Behind every smart web service is some even smarter web code. From web retailers calculating which books and films we might be interested in, to Facebook’s friend-finding and image-tagging services, to the search engines that guide us around the net.

It is these invisible computations that increasingly control how we interact with our electronic world.

Thirty-six years ago a new toy, a logic game, was invented by Ernő Rubik. All that time spent trying to get 26 cubes in the correct position seemed like a waste, until now. Discovery News reports:

An international team of researchers using computer time lent to them by Google has found every way the popular Rubik’s Cube puzzle can be solved, and showed it can always be solved in 20 moves or less.

The study is just the latest attempt by Rubik’s enthusiasts to figure out the secrets of the cube, which has proven to be altogether far more complicated than its jaunty colors might suggest.

At the crux of the quest has been a bid to determine the lowest number of moves required to get the cube from any given muddled configuration to the color-aligned solution.

“Every solver of the Cube uses an algorithm, which is a sequence of steps for solving the Cube,” said the team of mathematicians, who include Morley Davidson of Ohio’s Kent State University, Google engineer John Dethridge, German math teacher Herbert Kociemba and Tomas Rokicki, a California programmer.
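The question the researchers answered — the maximum number of moves any position can require — can be illustrated on a toy puzzle. The sketch below is not their method; it is a plain breadth-first search over a puzzle small enough to enumerate (four tiles in a row, where a move swaps two adjacent tiles). The "God's number" of a puzzle is simply the distance from the solved state to the farthest reachable state. The real project faced the same search over the Cube's roughly 4.3 × 10^19 positions, which is why it needed group-theoretic symmetry reductions and Google's donated computer time.

```python
# Illustrative sketch: compute "God's number" for a toy puzzle by
# breadth-first search from the solved state. States are permutations
# of four tiles; a move swaps two adjacent tiles.

from collections import deque


def moves(state):
    """All states reachable in one move (swap of adjacent tiles)."""
    for i in range(len(state) - 1):
        s = list(state)
        s[i], s[i + 1] = s[i + 1], s[i]
        yield tuple(s)


def gods_number(solved):
    """Maximum distance from `solved` to any reachable state."""
    dist = {solved: 0}
    queue = deque([solved])
    while queue:
        state = queue.popleft()
        for nxt in moves(state):
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return max(dist.values())


print(gods_number((0, 1, 2, 3)))  # 6: the worst case is the full reversal
```

For this toy puzzle the answer is 6 (the reversed arrangement needs six adjacent swaps). The Cube's answer, 20, took decades of lower-bound positions and upper-bound searches to pin down.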