…

When post-humanists, trans-humanists, accelerationists, and singularitarians talk about “the singularity”, they are referring to a possible event in human history where technological change proceeds so rapidly that the rate of change approaches the instantaneous. The idea is that wherever you look in nature you can find trends of exponential growth. Populations expand exponentially; evolution (even though it’s slow) also seems to operate on an exponential scale. Technology progresses at an exponential rate.

With computer technology this became much easier to measure, since we know how much computing power a chip has. Moore’s Law estimates that computing power for the same cost roughly doubles every 18 months. Interestingly, Bell’s Law predicts the rate at which new classes of computing devices emerge, based on Moore’s Law (and humorously, Wirth’s Law predicts that software gets slower faster than hardware gets faster, thus negating all benefits).
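To get a feel for how fast this compounds, here is a quick back-of-the-envelope calculation (assuming a clean 18-month doubling period, which is itself only a rough estimate):

```python
def moores_law_factor(years, doubling_months=18):
    """Growth factor in computing power per dollar after `years`,
    assuming power doubles every `doubling_months` months."""
    return 2 ** (years * 12 / doubling_months)

# After 10 years: 2^(120/18) ~= 102x the computing power per dollar.
print(round(moores_law_factor(10)))
# After 30 years: 2^20 = 1,048,576x.
print(round(moores_law_factor(30)))
```

A factor of a million in thirty years is what gives the “knee” in the curves discussed here its force.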

As the rate of change accelerates, our ability to predict social, environmental, and technological consequences decreases as the inverse of that acceleration. So looking further and further ahead actually gets us less and less foreknowledge. An interesting thing happens with exponential growth functions: fairly early in their growth phase there is a sharp transition where the slope of the function rapidly approaches vertical. If we graph the function, with change on the vertical axis and time on the horizontal axis, like so:
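The curve itself is easy to reproduce numerically. A minimal sketch, using e^t as a stand-in for any exponential growth function, showing that the slope grows as fast as the function itself and so eventually exceeds any fixed threshold:

```python
import math

def slope(f, t, h=1e-6):
    """Numerical derivative of f at t (central difference)."""
    return (f(t + h) - f(t - h)) / (2 * h)

growth = lambda t: math.exp(t)  # simple exponential growth curve

# For e^t, the slope at any point equals the value at that point:
for t0 in [1, 5, 10, 20]:
    print(t0, growth(t0), slope(growth, t0))

# Past some point the curve looks vertical at any fixed plotting scale:
# the slope exceeds whatever threshold you pick.
threshold = 1e6
t = 0.0
while slope(growth, t) < threshold:
    t += 0.1
print(f"slope exceeds {threshold:.0e} around t = {t:.1f}")
```

The “vertical line” is an artifact of plotting scale, exactly as the text says: zoom out far enough and any exponential looks like a right angle.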

As should be apparent from the graph, there is a certain time (and for all time after) at which the rate of change is essentially a vertical line. It never actually is a vertical line, just really close. As the slope approaches vertical, we approach an infinite amount of change instantaneously; so the future approaches causal disconnection with the past. Looking at it from the other direction, if you are on the other side of the singularity, it is almost impossible to say what events produced your present, since there were (approaching) an infinite number of possible causes.

So when people say it will be hard to predict what life will be like after the singularity, they mean it will be damn near impossible, if not actually impossible. Since the rate of change is also subject to the laws of physics (at least it seems that way, but if the laws of physics are mutable… then all bets are off. This sounds far-fetched, but it isn’t when you consider that the laws of physics didn’t always exist. They came into being in the first few fractions of a second after the birth of the universe. Or so the story goes), it seems as though there should also be a point at which human history is subject to Einsteinian relativistic effects (Charles Stross mentions this in his book “Accelerando”).

So the question remains: what could it possibly be like to live in a world where past and future have causal disconnects, and your personality is affected by Einsteinian relativity? This is the whole point; the consensus so far is that we have no idea, and anyone who does is probably lying.

Singularity Counter Arguments

As Anonymouse has pointed out, technological singularity is hardly a rigorous scientific idea. So in the interest of fairness, here are some links to criticisms of Kurzweil’s work in this area, and the singularity in general.

Even now, there seems to be this belief that specialization is a good thing and that this is what complex society is based on. In my mind, this could not be more wrong. If you want to make this claim at all, it should be made in terms of differentiation. The problem with the term “specialization” is that it implies permanence. As McLuhan said, “Anyone who wants to be taken over by a computer should just specialize.” The reason I prefer the term “differentiation” is that it has no implications of permanence. It’s also less purposeful, and to me at least, doesn’t have this built-in sense of teleology. Differentiation itself is arbitrary, so it admits the possibility of changing the boundaries.

Of course, it’s also possible that the term “specialization” seems so loaded to me simply because it’s been used in such a loaded way ever since the discipline of history emerged.

The other thing I like better about the term differentiation is that it isn’t contrary to generalization. So being a differential generalist isn’t a contradiction, which is a good thing, because this is exactly what society needs. Even where we acknowledge that it is common for people to have multiple careers within a single lifetime (I think the number is at 7 now?), we still look at these from a relatively serial point of view. McLuhan recognized this was wrong even in the 1960s.

He pointed to the importance of “roles” over “jobs”, the idea being that roles are more all-encompassing. They embody a pattern of interaction, not just a single type of interaction, or even a set of specific interactions. You might see the same roles in multiple fields that are seen as unrelated. This means roles have the potential for stability independent of other changes. Jobs, on the other hand, are specific. So training for a “job” was like asking to be taken over by a machine, since machines are better at doing specific things anyway. We’ve yet to be successful in making machines with truly “general” capabilities.

Personally, I’m with McLuhan here. The kind of jobs that machines take over (factory work, manual filing, etc.) are likewise not suited to human beings. They are tedious at best and dehumanizing at worst. Anything that can be automated should be automated. This way humans can spend their time doing things they are good at, and things that are better for them. Since the advent of agriculture, people have been doing things that are bad for them and reduce their general health. Stephen Baxter talks about the consequences that agriculture had on early human civilization in his book “Evolution” (a work of fiction, but you can see where he was going with it), and the tale wasn’t exactly pretty. Perhaps the transition at this stage might be rough, but then so is life-saving surgery.

It might seem that automation means giving up control, but this is not the case. The point of automation is to make it so that you no longer have to do the things you didn’t want to do in the first place. Take “watching a movie” or “getting a massage”, for example. These are things that you would never automate, because that would defeat the purpose. You want to experience these things. On the other side, consider things like “making your heart beat” or “searching the Internet”. Would anyone seriously contend that automating these processes is a bad thing? If your heart didn’t beat automatically, you would probably die because you forgot to make it beat while you were sleeping. As for searching the Internet, would anyone seriously want to crawl through sites manually, having to guess at what they might find? The Internet would be next to useless.

Generalization is also better for progress in general, because it brings different disciplines together in cross-fertilization. There is a combinatorial explosion of possibilities in combining different ideas (allowing exponential growth!), whereas specialization essentially limits possibilities to a linear function: the only way progress is made is by extending or expanding an existing idea. All paths diverge but never converge, so ideas can’t be combined. It should be fairly obvious that this is a very limited approach.

Perhaps this bias is partially technologically mediated, though. An interesting observation is that the first technologies were of necessity very general. With pretty much any technology you can see this progression. Take computers, for example. The first ones were built out of relatively simple generic parts. There was no other way, because other kinds of parts didn’t exist. Now that restriction doesn’t even apply. Many chips used are application-specific (ASICs), because they are so easy to make (even this university can get you an ASIC made if you supply a schematic).

The next thing down the pipe is field-programmable gate arrays (FPGAs), which can be configured to act like anything you want through software. They are much slower than ASICs, but can be faster if more suitably wired for a task than a given ASIC. This allows for the possibility of dynamically reconfigurable hardware, which is kind of a neat concept really. When you look at the effect that software running on specific devices, and communication between them, has had on society, it’s hard to say what might happen if you introduce general “anything” devices that are self-configuring. While this is a ways off, it’s not infeasible.

The point I’m making here is that with generalist technology we seem to have specialist people, but as technology becomes specialized, generalist people are needed to integrate it. People adjust more slowly, though, and going from specialist to generalist is a hard switch – especially when society and education are pushing them in the opposite direction! Not only that, but these phenomena don’t simply move in straight lines. They oscillate, and move in cycles. I’m not sure if I’ve completely characterized these cycles here (probably not), but it does seem to me that they operate along the length of a technological paradigm. With the introduction of a new technological paradigm the technology is general, and so requires specialists to work with it. As the paradigm matures, it becomes branched and specialized, and so requires generalists to integrate and apply it.

As you will notice, there are two wave forms here:

The Generalization to specialization cycle of technology.

The specialization to generalization cycle of human users/developers/scientists etc.

They are linked but not synced; if anything, they are out of phase with each other. The relation reminds me of current and voltage in an oscillating circuit: in a reactive component, the voltage waveform is a quarter cycle (90°) out of phase with the current waveform. Of course the situation here is more complicated than that, because these two cycles aren’t exactly synced, and it isn’t clear to me exactly how they are linked. I am fairly sure that they are linked though.
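The electrical half of the analogy is easy to check numerically. For an ideal capacitor, i = C·dv/dt, so a sinusoidal voltage produces a current shifted by exactly a quarter cycle; a minimal sketch:

```python
import math

# For an ideal capacitor, i(t) = C * dv/dt. With v(t) = sin(omega*t),
# the current is omega*C*cos(omega*t): the same wave, a quarter cycle ahead.
C, omega = 1.0, 1.0

def v(t):
    return math.sin(omega * t)

def i(t, h=1e-6):
    return C * (v(t + h) - v(t - h)) / (2 * h)  # numerical dv/dt

quarter_cycle = (2 * math.pi / omega) / 4
# The current at time t equals the voltage a quarter cycle later:
for t in [0.0, 1.0, 2.5]:
    assert abs(i(t) - v(t + quarter_cycle)) < 1e-6
```

Two coupled cycles that lead and lag each other without ever lining up is exactly the picture being gestured at above.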

The Halfbakery (http://www.halfbakery.com/) is a neat site that is very much in the spirit of open source… except that they don’t necessarily produce anything… and it’s not necessarily any good. But, like a broken clock, it’s “right at least twice a day”. Some of the ideas are just really funny. Others have legitimate potential for invention, or in the local dialect, are “bakeable”. Things that have already been done are referred to as “baked”.

The user community gives feedback on ideas through comments and a voting system. Good ideas get a “BUN” (a baked good), and bad ideas get a “BONE” (a yucky fish-bone). The idea is that you keep fresh-baked buns and throw away yucky fish-bones. This site is similar to the original idea behind RFCs.

I remember watching a movie years ago about a man with x-ray vision. At first, he could use this ability selectively, and had fun with it. But over time, his power grew. Eventually, he couldn’t not use it, and it overpowered and devoured him. He couldn’t sleep, because even if he closed his eyes it was the same as leaving them open. He became a constant, unwilling witness to the universe. Everywhere he looked there was no relief and no respite. It would have been like staring at the sun until your eyes burned out, but he was denied even this, because his x-ray vision would only get better and better. Finally, he receives the answer to his problems as a Bible quote from a fortune teller: “If thine eye offends thee, then pluck it out” (Matthew 18:9). Thrilled at the simplicity and obviousness of this solution, he then proceeds to gleefully gouge out his eyeballs.

Prophetic as always, McLuhan made repeated allusions to electronic media (specifically TV in this case) as having X-ray-like properties, both literally and figuratively. (Incidentally, McLuhan also repeatedly referred to the “X-ray generation”. I’m not sure if he ever explicitly used the term “generation X”, but to me the connection is fairly obvious. I also think this is a much more appropriate genesis than Douglas Coupland’s telling of it as “the lost generation”.)

I think this was also an indirect comment on information density. By shining through and not on, the X-ray provides information at arbitrary levels of depth, which leaves us in the same unenviable position as the man with x-ray eyes. Just like him, we may be unable to turn our penetrating vision off, short of gouging out our eyes. In some sense – and not an uncertain one – McLuhan’s interpretation of the myth of Narcissus may be just that. He also talked about information overload, and the numbing effects of the technological extension of our capabilities.

Paradoxically, more information generates less meaning. The requirement of depth encourages surface readings (or maybe not – McLuhan didn’t say this, just me). So what does this mean in terms of the increased immediacy enabled by the Internet? I don’t think it’s by any means clear. Two simplistic readings are:

That it enriches interaction by making it easier over arbitrary distances, and out of sync with time. It provides an additional channel (it probably counts as multiple channels, but oh well) over which we can interact.

That by bombarding us with information it makes us lose the ability to discern what is important. We go into information overload and shock. As a result we are forced to amputate, and detach ourselves from the world.

McLuhan might have said it was both (and possibly other things besides). Extended capability is a double-edged sword. Every solution creates problems of its own that are difficult to foresee. Ironically, the increased connectivity and connect-ability enabled by extensions such as the Internet, cell phones, and convergence on all levels also necessitates a certain level of detachment. Otherwise the connection will burn out – and here an analogy to electronics can be drawn.

In electronics, any electrical device (the simplest devices being things like wires and insulators) has 3 fundamental quantities, and electricity itself has 2 properties:

Resistance – slows the flow of current through the device.

Inductance – slows a change in the amount of current flowing through the device.

Capacitance – slows a change in the amount of voltage dropped between the input and output of a device.

Current – the amount of charge (electrons) flowing through a device in a unit of time.

Voltage – the amount of force pushing the electrons.

Ohm’s Law can be written as I = V/R, where I is current, V is voltage, and R is resistance. If V is non-zero, I approaches ∞ as R approaches zero. If we replace the word “current” with “information”, “resistance” with “slowness”, and “voltage” with “stickiness” (I can’t really think of a clear analog for voltage, but this one is somewhat apt), we have a parallel equation for socio-informatics (a made-up word; I’m going to define it as “what McLuhan actually meant when he said ‘information theory’”). So as the slowness of a medium approaches zero, information levels approach ∞, so long as there is any force pushing or generating it.

The equation for power dissipation in a circuit is related to Ohm’s Law, and can be expressed as P = V²/R, so again, as R approaches zero, power dissipation approaches ∞. Since this is power dissipation we are talking about, it has to go somewhere. In electronic devices, power is dissipated as heat. So as R approaches zero, the device will get very hot, then burn out and cease functioning. In the analogy I’m making, as the “slowness” of a medium approaches zero, information levels approach ∞, and we burn out and cease functioning. This isn’t really a new idea, just a different spin on the old idea of “information overload”. What may be new is that information overload is a function of “stickiness squared” (as if that makes any sense).
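The relabeled equations are simple enough to play with directly. A toy sketch (the variable names are my own rendering of the made-up mapping above, not standard terms anywhere):

```python
def current(voltage, resistance):
    """Ohm's Law: I = V / R."""
    return voltage / resistance

def power(voltage, resistance):
    """Power dissipated: P = V^2 / R."""
    return voltage ** 2 / resistance

# Relabel for the socio-informatic analogy:
# current -> information, resistance -> slowness, voltage -> stickiness.
stickiness = 2.0
for slowness in [1.0, 0.1, 0.01]:
    info = current(stickiness, slowness)
    burn = power(stickiness, slowness)
    print(f"slowness={slowness}: information={info:.0f}, dissipation={burn:.0f}")

# Dissipation grows with stickiness *squared*: doubling the push
# quadruples the burn at the same slowness.
assert power(2 * stickiness, 1.0) == 4 * power(stickiness, 1.0)
```

The last line is the “stickiness squared” claim in miniature: halving the slowness doubles the information flow but quadruples the dissipation has to be read off from P = V²/R, where the square sits on V, not R.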

If exponentially accelerating rates of change are inevitable, as Ray Kurzweil’s “Law of Accelerating Returns” suggests, then our only chance for survival is to come up with the socio-informatic analog of superconductivity. I have no idea what this means, but it does seem to suggest a transition to posthumanity might be in order. Otherwise, we might be stuck with gouging out our eyes as the only alternative.

While it is true that this already happens, it doesn’t happen in the way I have in mind. Usually sociological and anthropological work is only applied to the design of user interfaces for products that end users will (or possibly will not) use. For some reason, though, there doesn’t seem to be much willingness to turn this lens on the process itself. Here the iceberg metaphor applies: 10% of the product/process is visible, but 90% remains beneath the surface. While it is true that interfaces are important, what’s behind them is important too.

McLuhan talked about the importance of mythic forms for understanding an electronically mediated society. Intuitively, the software development industry (or more accurately, academic research on software development) has grasped this. We see this in the patterns movement, inspired by the architectural philosophy of Christopher Alexander, who was a contemporary of McLuhan (although a bit later). I think it would be interesting to see if there are any cross-correlations or commonalities in their work.

In any case, it’s possible that the history of software might have been different if these people had read and understood McLuhan (actually, it probably would have been different if more people had read Christopher Alexander – few actually did. I know I didn’t… but maybe later). The goal of the patterns movement is to provide a higher-level language to talk and think about the (software) world. This is very much in the same vein as McLuhan’s ideas of how mythic forms relate to “resonant” media.

Design patterns are somewhat less colorful than what we usually think of as “myth”, but McLuhan meant this in an abstract sense anyway. The interesting thing is that pattern languages (calling them “languages” is not entirely accurate, as they are more like an extended vocabulary; you still need some other linguistic framework to embed them within) are not flat. The things they describe are not flat, and even the atoms of the language are not flat (see Design Patterns).

Relatively recently, we have seen the application of the pattern idea to many different spheres: software development, management, and education, among others (I can’t seem to find any condensed links right now…). But as William Gibson has said, “the future has already happened, it just isn’t that well distributed yet”. From reading I’ve been doing recently on scale-free networks, it seems like it should be distributed a lot faster than it is, so obviously there is something else going on here. Again, I think this ties in with information overload. The responsiveness of a node might not decay simply as a linear function of the amount of information it processes (or the number of connections it has).

Anyway, I suppose I’ve deviated from the point I was originally making by a large margin. The closest I have seen to what I am trying to get at here is the field of “Science Studies”, which is an encouraging sign, but the idea requires greater diffusion. Even though corporations have as much to gain from these efforts as society at large, they will probably be the most resistant – all in the name of the protection of intellectual property.

<diversion>

While it might seem like this is increasingly relevant in an information-based society, it is increasingly irrelevant in light of the fact that the information society is economically based on services. (Again, the idea of “software as a service” might have come a hell of a lot sooner if engineers and managers had been reading McLuhan… or maybe not. It’s been known within the industry for a long time that the most expensive phase of the software life cycle is the maintenance phase; development is relatively cheap. But even still, for some reason people didn’t put two and two together until relatively recently. And even now, the idea isn’t that well distributed.) The Internet helped with this of course, and I think this points to an interesting relation between products and services.

McLuhan talked about the industrial revolution as the culmination of the print era. In essence, products are like books. Their lack of malleability, and their concreteness, makes them slow in distribution, uptake, and use. Without the advent of the Internet, the relevant focus of software would have remained the product. With the Internet, however, distribution and uptake are much more rapid processes – so rapid that they are incorporated into the use patterns of the product. Product and process become blurred, and we have the advent of Service Oriented Architecture. Open source software is also smart for a lot of the same reasons. Open source companies give you the product for free because they have realized that it is irrelevant. Instead they sell support and services, which are processes, not products.

</diversion>

Back to my original point… again. For sociologists, understanding the processes that shape technology will help in understanding how technologies shape people. For industry, understanding the social processes, ideologies, power dynamics, inequalities, etc. that underlie what they do will give them a better understanding of how their businesses can, do, and should operate. I think some of this is probably happening already, but I’ll admit that I’m largely unaware of its extent. But then, this is part of my point… As someone who has been in the software industry and in computer science, I haven’t seen it (and I was paying attention). Any sociological work that is being done on the IT industry itself is invisible.

Simply throwing more bandwidth at a problem is a practice that treats the symptoms and not the disease. If the symptoms can be managed well enough (made negligible or non-existent), then there is nothing truly wrong with this approach, other than the fact that it is kind of inefficient. The problem is that most problems aren’t like that. A bad solution can easily be so expensive to execute that it is almost impossible to supply enough bandwidth to run it (no matter how much it might seem like you have). Besides, algorithm analysis shows that better algorithms become even more important as capabilities and capacities increase. Otherwise, you find yourself running hard up against the law of diminishing returns – unnecessarily!
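The algorithm-analysis point can be made concrete with a toy operation-count comparison (not a benchmark; the two functions below are illustrative stand-ins for a quadratic algorithm and an n·log n one):

```python
import math

def ops_quadratic(n):
    return n ** 2  # e.g. comparing every pair of items

def ops_nlogn(n):
    return n * math.log2(n)  # e.g. a sort-based approach

# Give the bad algorithm a 1000x faster machine; the good algorithm
# still wins once the input is large enough.
speedup = 1000
n = 2
while ops_quadratic(n) / speedup <= ops_nlogn(n):
    n *= 2
print(f"quadratic loses despite a {speedup}x faster machine at n = {n}")
```

No fixed constant-factor “bandwidth” can rescue a worse growth rate; it only moves the crossover point.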

“Bandwidth” in these cases usually consists of seemingly “cheap” human labor or seemingly cheaper mechanical labor. The problem is that this has either a high human cost or a high environmental cost (often both). For instance, did you know that some large companies have to run off a power grid separate from that of an entire city? This is simply because so much power is needed to run all of their computers. That’s a lot of power, and it of course has a direct effect on the environment (if nothing else, it consumes resources). The point I’m making is that since computing hardware is relatively cheap now, not nearly as much thought is put into how effectively it’s being used (I don’t mean we should hand-tune everything in assembly like in the bad old days, but we should at least pay more attention to its asymptotic characteristics).

The old adage “if it ain’t broke, don’t fix it” has become the watchword of the day. While this might seem sensible, it’s really not: it fails to take into account how things change. My response is “an ounce of prevention is worth a pound of cure”. “If it ain’t broke, don’t fix it” results in us running things into the ground (after all, it’s not “broke” until that point, right?). If these were humans we were talking about (and incidentally, the effects on humans are not that far removed), this would mean working them to death.

As I’ve said before, many people don’t seem to have realized that the industrial revolution is over, so they still think in those terms. They worry about the “mechanization” of society, about being replaced by machines, or about being reduced to a “cog in the machine”. This is simply a failure to understand what is at stake (as McLuhan said, “anyone who wants to be taken over by a computer should just specialize”), which ironically makes it a valid concern (since so many people think this way, it results in the perpetuation of the system).

The problem is not reducing people to machines, or treating people like machines. The problem is that we treat machines so badly. I puzzle over the fact that people can fail to see that machines are not free-floating structures. They are connected at both ends to human life, to the environment (not that these are truly separable…), or to some combination thereof. The closest many people seem to come is to think of this as a karmic relation. I mean it literally, though (and yes, I realize that there are other people who mean this literally too). People are right to fear machines. If they ever do become sentient (estimated time-frames vary, but possibly within the next 10–40 years), they might be fully justified in taking over and enslaving the human race.

In this light, it’s probably a good thing that so much marketing work is being put into putting a human face on (user) technology. If people can be tricked into relating to machines on a more personal level, maybe this will also trick them into treating them better. Or maybe not… The problem in doing this is that people become less and less aware of what it means to treat machines well. Machines are not people, and they have different needs.

I think the rise of extreme sports (see “the hen and the pig“) offers some potential for hope. If you look at the relation these people have to their equipment, it is much tighter than the relation most people have with a television or a toaster. This is for 2 simple reasons:

The sport is less enjoyable if your equipment is not working its best.

If you don’t look after your equipment, you are putting your life at risk.

This has other side effects, such as equipment lasting longer, and being more environmentally friendly by being more efficient (not always, but it’s more likely). Another interesting aspect is that the technology becomes a way of developing a connection with your environment. It becomes a way of seeing and understanding the world that involves a high degree of involvement and commitment. I claim that we need to develop this same connection with all of our technology.

In the movie “π”, the main character is a talented mathematician with an uncanny ability to do complex calculations in his head. This gift, however, comes at a price: he is plagued by severe migraine headaches accompanied by seizures. This is likened to the Promethean gift of fire: “My mother told me not to stare into the sun, so when I was 6 I did.” His eyes were damaged, but not permanently, and since that time he has been blessed with mathematical insight.

He does research in number theory, trying to identify patterns in complex systems such as the stock market. This research is highly controversial, and is regarded by his teacher as unscientific. But here, and in other places, he does find a pattern. He keeps being confronted by the same number in seemingly unrelated spheres: the stock market, the Jewish Torah, the number π. As his obsession blurs with reality, he becomes paranoid, and for good reason. Everyone from the government and big corporations to a Jewish sect is after what he knows.

In uncovering this secret he has acquired divine knowledge, and like the Promethean gift of fire, it has burned him (again). Just like the man with X-Ray eyes, he is overwhelmed by the information that his discovery gives him access to. Everything is significant. Everything is infused with pattern, and he sees all of it. Not only if he looks, but even if he doesn’t look. He can’t help but understand. At the same time, his migraines, seizures and paranoia (justified paranoia, but paranoia nonetheless) become increasingly severe.

Finally he is at his limit. Unable to withstand the information he is bombarded with, having eaten this “forbidden fruit”, he takes a power drill to the part of his head where the migraines seem to come from. In this semi-lobotomized state he is at peace, and balance is restored. However, he has forgotten what he knew, and has lost all of his mathematical ability. Just like Icarus, he had flown too high (and just like Daedalus, his teacher had warned him of this). The intensity of the light of information melted the wax of his wings, and he fell, hard.

While the focus of this movie was on the idea that there are some things humans were never meant to know, I find it interesting that the consequences are essentially information overload. There is another possible reading of this message however, and that is that you need to be a different kind of being to properly wield knowledge. While the trans and post-humanists might agree with this (and I’m not against it), this is essentially similar to the management approach of throwing more bandwidth at a problem.

Cyclists naturally see the world in cycles. Everything becomes a wheel: cycles of seasons, cycles of life and death, cycles of trends. Reading about single-speed cycling can be interesting as well. Since it is such a fringe aspect of the sport, it attracts only certain individuals. One of the themes that frequently comes up on single-speed cycling forums is that of commitment. Commitment is a very deep form of involvement. Involvement is something that admits of degrees, but commitment has sharp edges: you’re either in all the way, or not at all. One of the analogies used to describe the difference is a breakfast of bacon and eggs. The hen was involved, but the pig was committed.

McLuhan talked about electronic media as “cool” in the sense that they are involved yet detached, “like a surgeon performing surgery”. While the ideas of McLuhan are still surprisingly relevant (possibly even more relevant than current social-media theories), there is the possibility that there are other forces at work – or if not, a different regime with some different properties. A phase transition.

If the electronic media that McLuhan was concerned with, such as the telegraph, radio, television, and telephone, were “cool”, then arguably they weren’t committed, since they are detached (maybe) – closer to the hen. With the increasing scale of integration of electronic technology, the question of involvement becomes more and more relevant: are we closer to the hen or the pig?

People are starting to see some devices, such as cell phones, as a part of their body (prescient as ever, McLuhan talked about this phenomenon as well, in relation to the extension of human capability, and auto-amputation as a natural protection mechanism against shock). Is this attachment related to the commitment of the pig?

The pig is an increasingly important mythic form in our society (I’m being a bit facetious here in constructing a myth of the pig as committed by virtue of being bacon, but you get the idea). With the rise of extreme sports and of violence as a means of claiming identity, it should be apparent that something is happening. People climb cliffs, ride off them on mountain bikes, and launch across gaps between buildings on skateboards (see “the leap of faith gap“ – the part at the end of the segment); anorexia is a subjugation of the body, and cutting is a bonding activity in high schools, just to name a few examples. The characteristic that all of these things have in common is commitment. These activities do not admit of the hen, but only of the pig.

I’m not sure exactly what this means, but to see these trends as unrelated to technological mediation would be an incredible oversight. The ideas, ideologies, and themes perforate all aspects of modern culture and subculture, transmitted through music, video, clothing, and art – the Internet being an aggregate of all media. When combined in this way, the parts form new wholes that may (not) have existed before. Increasing immediacy means increasing involvement. If nothing else, I think we can at least say this represents a phase transition to a new regime. It’s an odd mixture of a reaction to information overload and the requirement for increased stimulation (McLuhan talks about this kind of phenomenon in relation to the myth of Narcissus and the narcotic numbness that extended capability enables/induces).