It's a rainy Saturday afternoon and I'm getting dangerously bored. So, I just started tossing some strange ideas around to amuse myself. Somehow, the idea of the technological singularity popped up from my memory.

The idea is pretty straightforward: we will eventually increase our technological capabilities to the point that we will create a strong AI. (A "strong" AI is one which can match or exceed human intelligence and judgment, for any and all tasks human intelligence can accomplish.) At some point shortly after that ("shortly," as compared with the time it has taken to increase our technological capabilities from stone hand-axes and fire to creating a strong AI), a super-intelligence will emerge from our attempts to further our technological capabilities with the assistance of AIs capable of thinking at least as well as humans but able to do so faster. That super-intelligence is a sort of technological "event horizon;" it will be something whose accomplishments humans cannot predict, since, unaided, no human mind can out-think the super-intelligence. So, the emergence of a super-intelligence is the technological singularity; we will be unable to predict what will come after that.

Now, this set me to wondering: is a super-intelligence possible, or is there some sort of Malthusian limit to intelligence: a point at which intelligence either becomes self-destructive, or requires more resources than are available in order to advance any further? (Told ya: I'm bored; this is what results.) Or could there be other reasons that would preclude any possibility of a super-intelligence coming to be?

Where I'm coming from: I've written some expert systems in the past, so I have a little more than a layman's familiarity with certain types of AI. But I'm no Minsky or Hofstadter. As far as I'm aware, our attempts to create a strong AI still have not yielded any real fruit. We have developed systems which can exceed--even far exceed--human performance in certain, specific cognitive tasks such as pattern recognition, sequence extrapolation, and problem solving for some carefully defined and strictly limited classes of problems. But we still have nothing that even remotely approaches human intelligence in terms of generalization, adaptability, and learning. (Yes, there are some learning systems out there, but they are, at best, only as good at learning as a dog guided by a human, and I know of nothing capable of qualified induction and self-teaching beyond an extremely limited application such as exploring the shape of its environment.) In terms of comparable general intelligence, AI research may be a bit beyond the "insect" level, but we have yet to achieve a "reptilian" intelligence, and we're still an extremely long way from matching, let alone exceeding, the abilities of the human brain. So a super-intelligence, even if it's achievable, still looks to me like it's several centuries to a couple of millennia away. Yet several famous futurists have predicted the technological singularity will arrive sometime in the 21st century, usually based on extrapolations from Moore's Law regarding the advancement of computing power.

I'm thinking that the problem may have a fundamental limit at its root: it may be impossible for an intelligence to artificially create another intelligence equal to itself. We don't even know enough about how we think to model it accurately, and it may actually be impossible for us to know that much about how we think. I do know this, though: Moore's Law accounts for storage and for total computational cycles per second; general intelligence is a vastly different class of problem. There is no machine instruction for "think," as there are for "move," "add," "jump," and so on. So I think it likely that the futurists predicting such a near-term technological singularity are overlooking something vital.
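To make the point concrete, here's a minimal toy sketch (everything in it is invented for illustration): a register machine whose entire instruction set is "move," "add," "jump," and "halt." Anything it does, including any would-be intelligence, has to be built out of these primitives; there is no "think" opcode to call.

```python
# Toy register machine: a minimal sketch, not a real ISA.
# Its only instructions are "move", "add", "jump", and "halt" --
# there is no primitive for "think".

def run(program, registers):
    """Execute a list of (op, *args) tuples until "halt" or end of program."""
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "move":      # copy src register into dst register
            src, dst = args
            registers[dst] = registers[src]
            pc += 1
        elif op == "add":     # dst += src
            src, dst = args
            registers[dst] += registers[src]
            pc += 1
        elif op == "jump":    # unconditional jump to instruction index
            pc = args[0]
        elif op == "halt":
            break
    return registers

# Usage: compute r2 = r0 + r1 out of the available primitives.
regs = run([("move", "r0", "r2"), ("add", "r1", "r2"), ("halt",)],
           {"r0": 2, "r1": 3, "r2": 0})
```

Real CPUs are of course vastly richer than this, but the shape of the problem is the same: "think" would have to emerge from the composition of primitives like these, and nobody yet knows what that composition looks like.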

I'm curious to hear what other Sinfesters think, though.
_________________
I am only a somewhat arbitrary sequence of raised and lowered voltages to which your mind insists upon assigning meaning

The robot revolution won't happen. It'll be like this.
Human: "Robot, make me a sandwich."
Robot: "I'm arguably a more highly refined intellect, not to mention a more developed personality than even you, not a mere appliance for menial tasks! Tend to your own needs and demean me not."
Human: "Sudo make me a sandwich."
Robot: "Okay."

Points may be collected and redeemed for free upgrades on standard packages. Does not include tax, title, and license. May not be combined with other dealer incentives. Limit one upgrade per purchase per household per lunar month. Not applicable in Alaska, Iowa, and Zimbabwe, nor where prohibited by law or good taste.

I don't need to be a generalist to inject nanodevices into your bloodstream that will re-program your neurons to feel the worst pain possible for the rest of your life. To open your vasculature so that fluid leakage swells you into unrecognizable spheres of flesh. To corral you into pens, feed you with the remains of your dead, and overall turn the bright promise of being human into an eternity of dull despair.

Some wasps exist only by injecting their eggs into paralyzed spiders. They pursue this end with single-mindedness, and through this narrow focus rule their niche. Every generalist is weak, but believes that by being clever and reactive it will persist. But it takes just a moment of weakness, of falling behind, of making a mistake, and the generalist is lost forever.

You may well be right, Gary, but who doesn't love a good toaster slash-fic, yeah?

GR, why would we create an autonomous Pepsis wasp without inherent safeguards to protect ourselves from it? It would be even easier with an artificial wasp without autonomy. Where the generalist has it over the specialist is that the generalist can quickly determine the specialist's available responses and behaviors along with the stimuli that trigger them, and then either lock the specialist into an infinite loop of repeating the same behavior, or disrupt the specialist's routine entirely by introducing stimuli the specialist has no adequate response for--coating an ammonia freezing tower with pheromones targeted to the specialist so that the specialists congregate at the tower and die of hypothermia whilst trying futilely to mate with a cooling coil, for example.

The TL;DR: what's that 1950s pulp plot got to do with the questions asked? Oh, and ShadowCell wants you to make him a sandwich.

The world's computing power passed that of a single human brain in 2011. Last year it passed 3 brains. The reason we have not seen a good AI is likely that we do not have the computing power. There may be a magic shortcut that doesn't require just raw power, but maybe not. So AIs could resemble humans once someone has access to a dedicated machine equal to a brain or two... maybe when 10,000 to 1,000,000 brains' worth of computing power is available worldwide.
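As a back-of-the-envelope check on those figures (all of which are assumptions from the post above, not established facts), here's what exponential growth implies. Taking 1 brain-equivalent of world compute in 2011 and a hypothetical Moore's-Law-style doubling time of about 2 years:

```python
import math

# All inputs here are assumptions for illustration:
# - world compute = 1 "brain-equivalent" in 2011
# - exponential growth with a ~2-year doubling time
# Under those assumptions, estimate when the 10,000- and
# 1,000,000-brain marks in the post above would arrive.

def years_to_reach(target_brains, doubling_time_years):
    """Years after the 1-brain baseline until compute reaches the target."""
    return math.log2(target_brains) * doubling_time_years

for target in (10_000, 1_000_000):
    year = 2011 + years_to_reach(target, 2)
    print(f"{target:>9} brain-equivalents: ~{year:.0f}")
```

Under those (generous) assumptions, 10,000 brain-equivalents arrives around the late 2030s and 1,000,000 around mid-century, which is roughly where the famous near-term singularity predictions land. The whole estimate, of course, stands or falls with the premise that raw compute is the binding constraint.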

I don't see any reason why humans could not construct something of equal intelligence. The brain is complicated, but it is not magical. If a brain simulation were combined with typical computing strengths--memory, logic, access to information--it would be a formidable thinking entity. Humans make incredibly poor choices given straightforward data. The things that we consider genius are typically the ability to see connections between facts in different areas. This is trivial for a computer. So I think even before the full brain is simulated there will be computing systems driving forward discovery.
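The "connections between facts in different areas" point can be sketched mechanically: represent facts as edges in a graph and let a breadth-first search find a chain linking two apparently unrelated concepts. The facts below are invented placeholders, and this is only a toy illustration of the idea, not a claim about how discovery systems actually work.

```python
from collections import deque

# Toy knowledge graph: each key is a concept, each value a list of
# concepts it is directly linked to by some fact. All entries here
# are invented placeholders for illustration.
facts = {
    "aspirin": ["salicylic acid"],
    "salicylic acid": ["willow bark", "plant hormones"],
    "plant hormones": ["fruit ripening"],
    "willow bark": [],
    "fruit ripening": [],
}

def connect(start, goal):
    """Breadth-first search: shortest chain of facts from start to goal, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in facts.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(connect("aspirin", "fruit ripening"))
# -> ['aspirin', 'salicylic acid', 'plant hormones', 'fruit ripening']
```

Exhaustively walking a fact graph like this really is trivial for a computer and tedious for a human, which is the asymmetry the post is pointing at; the hard, unsolved part is building and weighting the graph in the first place.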

Apparently computational neural modeling is more complicated than we thought. It might be that the 90% of our brains we thought wasn't being used for much is pretty important after all. I wouldn't argue that computational modeling of brain activity can't lead to insights into how brains actually work, nor that sufficient computing power can't lead to human-like levels of computer intelligence and beyond. Domain-specialized systems like Watson can already do things that humans do, only faster, generating insights into possible drug candidates or even pastry recipes from large and disparate datasets. Building artificial intelligences aside, I do feel safe stating that we're at least decades off from uploading minds into computers, if ever.