Transhumanists pride themselves on basing their expectations on solid science. Karl Popper held that a theory is scientific only if it can be proved wrong, and so we may ask: if the technological Singularity is not happening, how would we know? I argue that online worlds like Second Life can serve as an indicator of whether we are on track or whether the underlying technologies are faltering.

The reason online worlds can serve this purpose is that many of the enabling technologies of the Singularity also push the size and sophistication of online worlds. For example, Vernor Vinge identified improvements in communication as something that could lead to superhuman intelligence, saying ‘every time our ability to access information and communicate it to others is improved, we have in some sense achieved an increase over natural intelligence’. Arguably, online worlds are first and foremost platforms for communication. If, as we head toward the future, online worlds enable more people to be online simultaneously and to exchange knowledge more efficiently (or, better yet, in ways that were not possible in the past), that could be taken as a sign that progress is heading in the right direction.

WHAT SHOULD WE BE WATCHING?

If communication is fundamental to online worlds, what about the Singularity? It is important to know, because we do not want to be distracted tracking trends of little or no relevance. Some people think the Singularity is all about mind uploading, but it really is not. Admittedly, Singularity enthusiasts are often also uploading enthusiasts and certainly the technologies that would enable one’s mind to be copied into an artificial brain/body would be very useful in Singularity research. However, even if we never develop uploading technologies, that in itself would not rule out the possibility of the Singularity happening. Simply put, ‘Singularity equals mind uploading’ is an incorrect definition.

Nor should the Singularity be seen as synonymous with humanlike AI. Again, if we ever find online worlds are being populated with autonomous avatars that anthropologists, psychologists and other experts in human behaviour agree are indistinguishable from avatars controlled by humans, we would very likely have technologies and knowledge that would be useful in Singularity research, but a complete lack of artificial intelligences that can ace the Turing test or any other test of humanlike capability would not rule out the Singularity.

OK, then: what is essential? What can we point to and say, ‘progress in this area is faltering, therefore we can say the Singularity will not happen’? One such thing would be software complexity. Continual progress in pushing the envelope of computing power can only continue so long as developers can design more sophisticated software tools. The last time a computer was designed entirely by hand was in the 1970s. Since then we have seen orders-of-magnitude increases in the complexity of our computers, and this could only have been achieved by automating more and more aspects of the design process. By the late 1990s, a few cellphone chips were being designed almost entirely by machines: the role of humans consisted of setting up the design space, and the system itself discovered the most elegant solution using techniques inspired by evolution.
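The evolution-inspired design process described above can be sketched in miniature. The following is a toy illustration only, not real chip-design code: it evolves a bit-string toward a trivial ‘design goal’ (all ones), which stands in for the cost models a real electronic-design-automation tool would optimize. All names and parameters here are invented for illustration.

```python
import random

def evolve(fitness, length=32, pop_size=50, generations=200, mutation_rate=0.02):
    """Toy genetic algorithm: evolve a bit-string that maximizes `fitness`.

    Real design tools explore vastly larger design spaces, but the
    principle -- vary, select, repeat -- is the same.
    """
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Rank the population by fitness; keep the better half as parents.
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Refill the population with mutated crossovers of random parent pairs.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            children.append([bit ^ (random.random() < mutation_rate) for bit in child])
        pop = parents + children
    return max(pop, key=fitness)

# Trivial 'design goal': a string of all ones, standing in for a real cost model.
best = evolve(fitness=sum)
print(sum(best))  # typically converges close to the maximum of 32
```

The human sets up the design space (the bit-string encoding and the fitness function); the system itself finds the solution. That division of labour is exactly the ‘human in the loop’ the article goes on to discuss.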

While it is true that we rely on a lot of automation in order to design and manufacture modern integrated circuits, we cannot yet say that humans can be taken completely out of the loop. Computers do not quite design and build themselves. One day, perhaps, one generation of computing technology will design and construct the next generation without any humans involved in the process. But not yet. For now, human ingenuity remains an indispensable aspect.

THE SOFTWARE COMPLEXITY PROBLEM

In an interview with Natasha Vita-More, Vernor Vinge identified a failure to solve the software complexity problem as being the most likely non-catastrophic scenario preventing the Singularity. ‘That is, we never figure out how to automate the solution of large, distributed problems’. If this were to happen, eventually progress in improving hardware would level off, because software engineers would no longer be able to deliver the necessary tools needed to develop the next generation of computers. With increases in the power of computers leveling off, progress in fields that rely on ever-more powerful computing and IT would continue only for as long as it takes to ‘max out’ the capabilities of the current generation. No further progress would be possible, because we could not progress to new generations of even more powerful IT.

Obviously, online worlds and the closely-related field of videogames rely on ever-more sophisticated software tools and ever-more powerful computers in order to deliver better graphics, physics simulations, AI and so on. If we compare today’s online worlds and videogames with previous years’ offerings and find it increasingly difficult to point out improvements, that could well be a sign that the software complexity problem is proving unsolvable.

We should, however, be aware that some improvements are impossible to see because we have already surpassed the limits of human perception. Graphics is an obvious example. Perhaps one day realtime graphics will reach a fidelity that makes them completely indistinguishable from real life. It might be possible to produce even more capable graphics cards, but the human eye would not be able to discern further improvements.

It is a fact that every individual technology can only be improved so far, and that we are closer to reaching the ultimate limits of some technologies than others. Isolated incidents of a technology leveling off might not be symptomatic of the software complexity problem, but if we notice a slowdown in progress across a broad range of technologies that rely on increasingly powerful computers, that would be compelling evidence.

THE DEFAULT POSITION OF DOUBT.

Look at Ray Kurzweil’s charts tracking progress in ‘calculations per second per $1,000’ or ‘average transistor price’. Look how smooth progress has been so far. One could be forgiven for thinking the computer industry has so far improved its products with little difficulty.
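The smoothness of those charts reflects exponential growth, which plots as a straight line on a logarithmic axis. As a hypothetical sketch (the figures below are illustrative round numbers, not Kurzweil’s actual data), the doubling time implied by two points on such a curve can be computed like this:

```python
import math

def doubling_time(value_then, value_now, years_elapsed):
    """Years for a quantity to double, assuming smooth exponential growth."""
    growth_rate = math.log(value_now / value_then) / years_elapsed
    return math.log(2) / growth_rate

# Illustrative numbers only: a million-fold improvement in
# calculations per second per $1,000 over 40 years.
print(doubling_time(1.0, 1e6, 40))  # about 2 years per doubling
```

The point of the chart is precisely this regularity: decade after decade of R+D difficulties, yet the doubling time barely wavers.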

This is not true, of course. R+D has always faced barriers to further progress. For instance, in the 1970s we were rapidly approaching the limits of the wavelength of light used to sculpt the chips, it was becoming increasingly difficult to deal with the heat the chips generated, and a host of other problems were seen as cause for pessimism by many experienced engineers. With the benefit of hindsight, we know Moore’s Law did not come to an end. Rather, human ingenuity found solutions to all these problems.

According to Hans Moravec, doubt over further progress is the norm within R+D:

‘The engineers directly involved in making integrated circuits tend to be pessimistic about further progress, because they can see all the problems with the current approaches, but are too busy to pay much attention to the far-out alternatives in research labs. As long as conventional approaches continue to be improved, the radical alternatives don’t stand a competitive chance. But, as soon as progress in conventional techniques falters, radical alternatives jump ahead and start a new cycle of refinement’.

Similarly, residents of SL tend to be pessimistic about their online world. It never seems to be good enough. Of course, in its current state SL does have many faults that prevent it from being fast, easy and fun, and if we lack the skills to remedy these deficiencies, we might as well declare the Singularity impossible right now. But I would suggest that online worlds are doomed to remain ‘not quite good enough’, because what people can imagine doing will always be more ambitious than what the technologies of the day can deliver. That, after all, is why we continually strive to produce better technology. Yes, there may come a time when online worlds are advanced enough to allow anyone to easily do the activities that are typical today, but all the knowledge and technology that gets us to this point will broaden our horizons, and people will be complaining about not being able to easily perform feats we could not even imagine doing today.

At any point in time, the path to further progress seems blocked by no end of problems. It is probably true, therefore, that at any given time there have been skeptical voices doubting that current technologies could be substantially improved upon. For some, the Technological Singularity has come to take on an almost mythical status: a deus ex machina that will arrive and solve all our problems for us. If, by ‘problems’, we mean only material concerns, perhaps a combination of advanced AI and nanosystems could elevate all people to a high standard of living. However, it is a fact that any technology creates problems as well as solving them. That is another reason why we continually strive to invent new things- to solve the problems caused by previous generations of inventions!

WHAT THE SINGULARITY CAN NEVER ACCOMPLISH.

If you expect the Singularity to rid us of all problems, it will never manifest itself because such a utopian outcome is beyond the capability of any technologically-based system. But if we cannot use the eradication of all problems as the measure by which we judge the Singularity’s presence, what can we use?

AND WHAT IT CAN.

One way to answer this is to consider again what an inability to solve the software complexity problem would mean. Vinge saw this developing into ‘a state in communication and cultural self-knowledge where artists can see almost anything any human has done before- and can’t do any better’. With invention itself unable to grow beyond the boundaries the software complexity problem imposes, we would (in Vinge’s words) ‘be left to refine and cross-pollinate what has already been refined and cross-pollinated…art will segue into a kind of noise [which is] a symptom of complete knowledge- where knowers are forever trapped within the human compass’. Novelty, true novelty, would be a thing of the past. Whatever came along in the future, we would have seen it all before and would be hard-pressed to discern any improvement.

But, if the Singularity does happen- if technological evolution can advance to a point where we create and/or become posthuman- there would be a veritable Cambrian explosion of creativity and novelty. We tend to be rather human-centric when thinking about intelligence, imagining the spectrum runs from ‘village idiot’ to ‘Leonardo da Vinci’. But Darwin’s theory tells us that other species are our evolutionary cousins, connected to us by smooth gradients of extinct common ancestors. These facts tell us that the spectrum of intelligence must stretch beyond the ‘village idiot’ point towards progressively less intelligent minds, all the way down to a point where the term ‘intelligence’ (or even ‘mind’ or ‘brain’) is not in evidence at all. Think bacteria or viruses.

And what about the other direction? The Singularity is based on two assumptions: that intelligence fundamentally more capable than human intelligence (however you care to define it) is theoretically possible, and that with the appropriate application of science and technology we (and/or our technologies) will come to have minds as far above our own as ours are above the rest of the animal kingdom.

Not everyone accepts these assumptions. Some argue that our minds are too mysterious and complex for us to fully understand, and ask how we can fundamentally improve something we do not fully understand. Others see ‘spiritual machines’ as requiring an impoverished view of ‘spirituality’ rather than a transcendent view of ‘machines’. But let us assume that people like Hugo de Garis are correct, and that miniaturization and self-assembly will progress to a point where we can store and process information on individual molecules (or even atoms) and build complex three-dimensional patterns out of molecules. When you consider that a single drop of water contains more molecules than all the transistors in all the computer chips ever manufactured, you get some idea of how staggeringly powerful even a sugar-cube-sized block of 3D molecular circuitry would be- even before that individual nanocomputer starts communicating with the gazillions of others throughout the environments of the world.

Let’s also assume that our efforts to understand what the brain is and how it functions eventually result in a fully reverse-engineered blueprint of the salient details of its operation. We combine the two: gazillions of nanocomputers, each sugar-cube-sized object capable of processing the equivalent of (at least) one hundred million human brains, running whole-brain emulations at electronic speeds (millions of times faster than the speed at which biological neurons communicate). Of course, each has the capacity to run millions of such emulations, perhaps networked together and able to take advantage of a computer’s superiority in data sharing, data mining and data retrieval. Millions of human-level agents networked together to make one posthuman mind. And there are gazillions of these minds making up whatever the world wide web has become.
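The ‘millions of times faster’ figure is a back-of-envelope comparison of signalling speeds. A sketch, using commonly cited rough orders of magnitude rather than measured values:

```python
# Back-of-envelope check on the 'millions of times faster' claim.
# Both figures are rough, commonly cited orders of magnitude, not measurements.
neuron_signal_hz = 200       # peak firing rate of a biological neuron (~hundreds of Hz)
electronic_clock_hz = 2e9    # a modest electronic clock rate (~2 GHz)

speedup = electronic_clock_hz / neuron_signal_hz
print(f"speedup = {speedup:,.0f}x")  # ten million
```

Even before any of the more exotic assumptions about molecular circuitry, the raw gap between biological and electronic signalling rates is roughly seven orders of magnitude.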

Then what? Vinge said that, ‘before the invention of writing, almost every insight was happening for the first time (at least to the knowledge of the small groups of humans involved)’. Isolated tribes and nations may well have come up with inventions they considered to be completely new, unaware that other nations had got there first. As information and communication technologies were invented and improved, so too did our ability to recall the past, until (in Vinge’s words) ‘in our era, almost everything we do in the arts is done with an awareness of what has been done before’.

But what we have never seen is anything that requires greater-than-human intelligence. Therefore, the Technological Singularity will usher in a posthuman era that (from our perspective) will have a profound sense of novelty about it. If we are on-track for a Singularity, it should become increasingly common to encounter things we never imagined would be possible. We will commonly encounter artworks that we struggle to fit into existing categories- truly extraordinary renderings of a fundamentally superior mind.

CONCLUSION.

Ushering in the posthuman world has always been what Second Life was conceived for, at least in the mind of Philip Rosedale. It was never about recreating the real-world experience of shopping malls and fashionably-slim young ladies. It was the dream that a combination of computing, communication, software tools and human imagination could be coordinated to achieve a transcendent state greater than the sum of its parts. ‘There are those who will say…God has to breathe life into SL for it to be magical and real’, Rosedale is on record as saying in Wagner James Au’s book ‘The Making Of Second Life’. ‘My answer to that is…it’ll breathe by itself, if it’s big enough…simply the fact that if the system is big enough and has enough complexity, it will emerge with all these properties. People will come from out of the dust’.

29 Responses to IF THE SINGULARITY IS NOT HAPPENING…

The software problem is indeed a big worry. The amount of crap software methodology and tools out there is staggering. It is very difficult to master more than a part of it, and that is not (or not only) because it is inherently complex. It is also because the tools largely suck: they create a bunch of needless complexity and entail a great lack of flexibility to address many software problems elegantly, in a highly reusable manner. Also there is the problem of the human in the loop. Our brains are not that great at understanding software and machine states very deeply. So we have tools that don’t stress the majority of programmer brains too much, whether they are adequate to the problems or not. Real-world systems are very adaptive and sometimes chaotic or otherwise formally “bizarre”. To do a good job of automating anything that needs to operate well with such systems, the software needs some of these same properties. But we humans, and thus far our tools as well, cannot manage or control or predict much about such systems. So we generally don’t build them.

It is almost a chicken and egg problem. Only the AGI or something close to one may be able to write the type of software that AGI can arise from.

Ah, yes, Seren, you’re quite right. Software that writes software — or, at least, meaningful software — does indeed require Artificial Intelligence. Extie points out very correctly that things like genetic algorithms go a long way to produce the most elegant solutions automatically — but they have to be bootstrapped (by a human). And here, indeed, is the whole dilemma. We might get hyper-super-computers like Extie’s “sugar cubes”, capable of designing fantastic systems in picoseconds, or, say, doing a fully-rendered Shrek-style movie in a single Planck unit of time… but… they might never get anything done before a human operator tells them what to do.

We simply have no idea how to programme creativity, and that’s what we desperately need to have for any hard AI :) The assumption, of course, is that a sufficiently complex system will have as its emergent property things like “creativity” or “self-awareness”, but… we really don’t know if those assumptions hold. Are ants creative? Can we start by emulating an ant’s brain and see if it comes up with some hints that it’s being “creative”, and then postulate that as we get better and better emulating other functions, creativity will become, by steps and degrees, closer to what we humans experience as creativity?… and even go beyond that? Lots of questions, and so far, there is only one answer: “no”. :)

I love the analogy with Second Life, though. Nobody in 2002 would ever have imagined the incredible complexity of SL’s society and economy in 2010. The other day I was browsing across a lot of websites related to whole areas that I had no clue existed — for instance, the amount of blogs, magazines, and even books (or at least collections of fiction) related just to steampunk roleplaying in SL is simply… staggering. There is more material online on just that specific topic (and note: I’m not talking about the overall steampunk fandom, but just the ones strictly related to SL!) than books in my home library! The list of shops that sell steampunk content for SL is… longer than my professional contact list, tenderly assembled over more than 15 years. None of this existed 3 or even 2 years ago. So, yes, it’s true that infinitely complex systems start showing emergent properties that we couldn’t foresee…

… and maybe that will happen with concepts like “intelligence”, “creativity”, or “awareness” too.

The Singularity is by definition a unique one-time event NOT measurable by science. The Singularity will not be a general phenomenon. The Singularity will be of a single intellect/physical being. It will be evidenced via revelation. The Singularity cannot be AI, as AI has no desire to be the Singularity. The Singularity will be that intellect that has, first and foremost, the TRUEST grasp of the entire reality and, secondly, harnesses that knowledge to persuade the masses to exercise maximum moral autonomy. This is striving towards Supremacy on parallel frictionless paths.

The reality is that the race to the Singularity is the race towards Supremacy BY A SINGLE INTELLECT. The idea that such a GOAL necessitates and/or requires collective action is a contradiction of the SINGULARity…

Many advocates of the Singularity (myself included) call themselves ‘patternists’. Thinking in terms of patterns, perhaps Singularities are not one-off events but things that happen whenever various forces come into play that result in a paradigm shift in the universe’s ability to organize matter/energy into patterns.

Something like the eye, for instance, relies on a precise arrangement of molecules. If molecules were just jostling about at random, the chances of an eye just happening to form are effectively zero. Whatever circumstances led to life arising from mere matter could be seen as a Singularity, because cumulative selection took only a few billion years to evolve all the extraordinary patterns of life on Earth (and maybe elsewhere), whereas before, patterns of this complexity were just impossible.

Vinge has also suggested the rise of human intelligence is another singularity. The rest of the animal kingdom can only evolve at the speed of natural selection, whereas humans went from chipping flint into axes to sculpting silicon wafers into integrated circuits containing billions of transistors, and organizing them and a load of other technologies into the Internet, in roughly 100,000 years. Again, from the perspective of the pre-human era the world wide web appeared in the blink of an eye.

Whether something is singular or many can depend on one’s perspective. A microscopic view shows lots of chemicals. We zoom out and find these chemicals form a single cell. We zoom further out and we find many cells. We zoom out again and find many cells with specialized functions coordinating their actions and the result is a single animal….Well, maybe with a wide enough temporal and spatial perspective we encounter civilizations that have organized their global communication and computational systems into what is effectively a planetary superorganism?

The problem with the scientific view of the Singularity is that there are just too many places to try and look. That’s why the fundamental character of all Singularities (Creation, God-meme creator, human intellectual conception) is out of the purview of science. That’s why AI as the Singularity is nonsense. Scientists will not create the Singularity. The Singularity will stop us from self-annihilation.

I do not necessarily believe in the Singularity as a specific goal. It probably will not happen as a result of some team trying to make a greater-than-human artificial intelligence. Kurzweil’s notion- of all R+D striving to create the next generation of their products (maybe not even thinking about AI), with the Singularity emerging from tens of thousands of cumulative and convergent steps- makes more sense to me. This is not entirely random: the huge economic incentives to create superior software and expand automation, and the medical/psychological advances that would come from reverse-engineering the brain, provide excellent reasons to tackle problems that could turn out to provide solutions to the challenge of developing a superhuman AI. It is also possible for solutions to come from unexpected places, which is what I mean by ‘convergent steps’ (problems in seemingly unrelated areas turn out to provide answers in other fields, thereby connecting them). I talked about such things in my article ‘Ray Kurzweil And The Conquering of Mount Improbable’ available at http://hplusmagazine.com/articles/ai/ray-kurzweil-and-conquest-mount-improbable

Doesn’t the Singularity need to be something definitive? Can we appropriate the Singularity to something nonhuman like “superhuman” AI? And what does that mean, really? It seems that the transhumanist’s desire to “transcend” leaves out the answer to “what exactly are we transcending?” The same “rational” forces seeking to upload onto a hard drive are the same folks seeking a physical and spiritual annihilation. It seems to me that any Singularity must have these components fully intact.

I mean, how heavy is the idea of the Singularity without the notion of Supremacy driving it? And for a society immersed in “equality” doctrine, how detrimental is this anti-Supremacist mentality to the Singularity?

>It seems that the transhumanist’s desire to “transcend” leaves out the answer to “what exactly are we transcending?”<

I do not think transhumanists know the answer to this question. I think transhumanists see certain conditions of humans (such as short lifespans, and certain limitations of the human brain) as barriers in the way of even beginning to answer this question. In overcoming such barriers, the answer does not automatically arrive as well, it is just that we would then be in a much better position to make serious progress.

I do not think equality is compatible with the Singularity. The whole idea of the Singularity is that, at some point in the future, there will be minds fundamentally smarter than ours. Suppose every human has the opportunity and the will to upgrade. Each person becomes a million times smarter. In this scenario no Singularity is happening. It is a Red Queen situation, with everyone upgrading and therefore remaining exactly where they are relative to everyone else. A Singularity depends on some minds NOT keeping up, thereby creating a world in which people coexist with others who are vastly smarter than they are.

SOFT TAKEOFF. In this scenario, we gradually evolve into a posthuman state. This seems to be what Kurzweil is betting on, with many little steps that always seem conservative from the perspective of the last few steps (although, looking far enough ahead up or down the staircase of cumulative progress, the change becomes profound).

HARD TAKEOFF. In this scenario, the leap to posthumanity happens abruptly. Not so much by accident, but by some group fiddling around with the parameters of their system and finally, the right combination of factors click into place and a once-catatonic system suddenly awakes.

But again, are we talking about a single advanced mind AS the Singularity? It seems Kurzweil is coy about this potential for the simple fact that it infuses in the Singularity a drive towards Supremacy by the individual mind/soul. And in our “liberalized” Western society, seeking Supremacy is an “evil” thing. Just look at the performance-enhancement “scandal” to get an idea of how our politicians are denying those who seek Supremacy. And seeking Supremacy is paramount to any emerging Singularity.

It’s been some time since I’ve looked into Kurzweil’s Singularity, and it always struck me as a real potential. But it seems that it isn’t any more defined than it was 5-6 years ago?

A human mind or a human-created “mind?” And what will it know that makes it a Singularity? Or, will this mind do something that IS the Singularity?

1: VENERATION OF THE LEADER. 'Glorification of the leader to the point of divinity'.
2: INERRANCY OF THE LEADER. 'Belief that the leader cannot be wrong'.
3: OMNISCIENCE OF THE LEADER. 'Acceptance of the leader's beliefs and pronouncements on all subjects'.
4: PERSUASIVE TECHNIQUES. Methods, from benign to coercive, used to recruit new followers and reinforce current beliefs.
5: HIDDEN AGENDAS. The true nature of the group's beliefs and plans is obscured from or not fully disclosed to potential recruits and the general public.
6: DECEIT. Recruits and followers are not told everything they should know about the leader or the group's inner circle, and particularly disconcerting flaws or potentially embarrassing events or circumstances are covered up.
7: ABSOLUTE TRUTH/ABSOLUTE MORALITY. Belief that the leader has discovered final knowledge, and that the leader or group has developed a system of right and wrong thought and action applicable to everyone. Those who do not strictly follow this moral code are punished or dismissed from the group.

There is a possibility of cultish attitudes developing around a superintelligence. It is not difficult to imagine that people might come to accept a superintelligence's pronouncements on all subjects; form beliefs that it cannot be wrong; come to see it as divine; believe any moral code it develops is final and absolute. A mind fundamentally smarter than ours would, almost by definition, hold beliefs and make plans whose true nature is obscured or not fully revealed to the general public.

However, a rapid evolution of some minds, resulting in them becoming very much smarter than other minds cohabiting with them, is not in itself a definition of a cult. In fact, I would say cults thrive in ignorance more than intelligence.

>But again, are we talking about a single advanced mind AS the Singularity? And what will it know that makes it a Singularity? Or, will this mind do something that IS the Singularity?<

Think of the impact that human intelligence has had on our planet. Our presence would be obvious to any aliens passing through our solar system, because the night side of our planet would glow from the light of cities. Our intelligence has shaped the world and produced systems of thought that simply cannot be understood by the rest of the animal kingdom. If human-scale intelligence had not evolved, Earth today would look drastically different. The non-appearance of any other animal would not have made anywhere near as much difference.

But, all the change we have wrought upon this Earth has been the product of HUMAN-scale intelligence. We have never seen art, science, engineering that is the product of SUPERHUMAN intelligence. It is profoundly difficult to say, specifically, what transformative effects this could have. Just as a (nonhuman) animal could not imagine computers, cars, the telephone, the Internet, the space shuttle, the Brandenburg Concerto…I cannot imagine anything worthy of a superintelligent mind because, well, I am NOT yet superintelligent.

>But again, are we talking about a single advanced mind AS the Singularity?<

The definition of a technological Singularity is 'the creation, by technology, of a greater-than-human intelligence'. Eliezer Yudkowsky added the further criterion of recursive self-improvement: the superintelligence has the capability to design and build an even smarter superintelligence, which can also improve upon itself, and so on.

It is often assumed that this refers only to artificial intelligence. And indeed, a single superintelligent machine with recursive self-improvement would herald the arrival of the Singularity. But Vernor Vinge outlined several ways in which a tech singularity might come about. Most of these involve a greater collaboration between humans and certain technologies, primarily computational/information technologies. Whether or not this centers around a 'single entity' depends on whether or not you believe an increasingly powerful and intimate connection between people and systems of technology could be considered one single mind.

I’m not sold on AI as the Singularity for the simple fact that AI has no desire to be the Singularity. I also tend to see any Singularity in the next 30 years as something akin to a single intellect using the power of the Internet to steer the Western masses away from anti-Supremacy and self-annihilation… A Singularity AS collective action seems contradictory. Yes, the masses may carry a collective responsibility to give effect to the Singularity, but the masses won’t BE the Singularity. Doesn’t a single mind HAVE TO BE any potential Singularity?

>I’m not sold on AI as the Singularity for the simple fact that AI has no desire to be the Singularity.<

It does not have to possess a desire to 'be the Singularity'. What it does need is the ability to recursively self-improve its own hardware and software, and to use the resulting increasing smarts to chunk concepts, make connections and analogies between separate fields of knowledge, recognize patterns…and other activities to do with intelligence, in order to A) perform tasks that human brains are capable of in a superior way and B) come to perform tasks that (unaugmented) human brains are incapable of. The Singularity would then automatically emerge from all that, just as you cannot help but get water when you combine hydrogen and oxygen atoms in the right way.
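Recursive self-improvement is sometimes caricatured as compound growth: each generation of the system redesigns itself, and every redesign builds on the last. A toy sketch, with entirely arbitrary numbers chosen only to show the compounding:

```python
# Schematic caricature of recursive self-improvement. The 10% figure is
# arbitrary -- the point is the compounding, not the number.
capability = 1.0
for generation in range(50):
    capability *= 1.10  # the smarter the system, the better its successor
print(round(capability, 1))  # 1.1**50, roughly 117.4
```

Fifty modest-looking steps multiply into a hundredfold change; no single redesign has to be dramatic for the cumulative result to be.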

A Singularity does not refer to anything 'singular'. It represents a phase change in reality, where one set of rules breaks down and no longer produces useful insights. The classic example is the centre of a black hole, where gravity becomes so strong that our models of gravity produce nonsense. That is not to say your vision of what might happen in 30 years' time will not come true. Maybe it will. It is just that a single mind is not, strictly speaking, the definition of a technological singularity.

“In fact, I would say cults thrive in ignorance more than intelligence”

Both ignorance and “intelligence” fit nicely with a Cult.

Cult Members think they are ‘smarter’ than everyone else – usually obsessed with some made-up facts and/or ‘one-size-fits-all’ Red Flag technologies that will end suffering forever, transform civilization positively, etc. Cult organisations will, in most cases, only accept newcomers they’ve hand-picked themselves: gullible, or perhaps insecure, people who are likely to believe in the meticulous, energy-driven presentations the Cult gives.

“evolution of some minds”

Self-selected Elitism doesn’t fit very well in-future (neither does being forced).

>Both ignorance and “intelligence” fit nicely with a Cult. Self-selected Elitism doesn’t fit very well in-future (neither does being forced).<

Force is optional. I mentioned two ways in which some minds fail to keep up: they cannot, or they will not. The former implies the choice is taken away from them; the latter implies the choice is there but they opt out. I have met precious few transhumanists who would force people to do anything. In fact, when such a possibility is raised, it is usually to spark discussion on how to prevent it from ever happening.

But I have met plenty of anti-transhumanists who would deny the individual the right to choose by blocking the development of anything that threatens their preconceptions of how life ought to be.

Uhuh. It might seem as though the more intelligent you are, the less likely you would be to fall for a cult. However, being more intelligent can mean you are more adept at defending beliefs you arrived at for non-smart reasons. That is not to say a cult is defined as 'a system of beliefs widely recognized as weird', because it certainly is not. It is an observation that the indoctrination system of a cult may be subtle enough to snare even smart people, who thereafter are harder to deprogram because they can construct complex defences against reason.

My dog has a brain that is a million times more complex than anything man has ever devised. She ran out in front of a car because she does not have the capacity to conceive of linear movement. Guess what? She never will.

This is all very simple. You either believe in evolution or you do not. Personally, I think evolution is the biggest quack theory of them all. That simple systems gradually become more and more complex, all on their own, without any outside help. It’s just preposterous. But since most of the world now believes this unprovable and false paradigm, well, it’s just assumed that eventually code will start becoming more and more complex, all on its own, just like organic life did. Silly. It’s a concept for narrow little minds, and I don’t say that to be mean or ugly. Evolution is not real.

I’ve looked long and hard and, outside of pre-existing systems, I have never seen a simple system become complex without human intervention. But the drawback is that when humans build the complex, there are ALWAYS faults inherent within. Again, I ain’t no spring chicken, I ain’t no small child. I’ve been around. I’ve worked in software development, communications, systems architecture, etc. Code is buggy. The bigger the code, the greater the incidence of bugs. Bugs do not fix themselves. Often when humans seek to fix the bugs, they merely create more bugs. Why? Because the rule is NOT to become more complex. The rule is that systems, outside of original pre-existing creation, tend to break down over time. The ancients saw this. The ancients wrote about this. Why does modern man fail to see it? I’ve already answered that! All systems break down over time; all systems trending towards the complex become unstable at some point. It is the law, not the exception.

I wish you all well, but the ungentle truth is that men ARE NOT becoming smarter, men ARE NOT evolving. Men are as faulted and riddled with self-doubt as ever before. They still seek to make themselves gods. They will fail as utterly today as they always have. I do not fear Singularity. It is not possible.

>This is all very simple. You either believe in evolution or you do not. Personally, I think evolution is the biggest quack theory of them all. […] I have never seen a simple system become complex without human intervention.<

Computers have become increasingly complex, while their design and manufacture has seen human input steadily diminish. In 1978, computers were designed by humans and flow-charted on a wall. Within a decade, integrated circuits were a hundred times as complex, yet went from conception to finished design in nine months where it used to require three years, because increasingly powerful computers took over more and more of the designer's job. By the 1990s, the human contribution was restricted to merely setting up the design space and letting the system discover the most elegant solution using evolutionary algorithms.

Now, it is true that some human intervention is still required, but it is also true that machines are taking over more and more of the design and manufacturing process. If this trend continues, next-generation computing devices will one day require zero human input for their design and manufacture; the previous generation of technologies will be solely responsible.
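The 'set up a design space and let the system search it' idea can be sketched with a toy evolutionary algorithm. The example below is purely illustrative and is not taken from any real chip-design tool: it maximises the number of 1-bits in a bit-string (the classic OneMax problem) as a stand-in for a genuine engineering objective, and all names and parameter values are my own assumptions.

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

BITS = 32          # length of each candidate "design"
POP_SIZE = 50      # candidates per generation
MUT_RATE = 0.02    # per-bit chance of flipping during mutation
GENERATIONS = 200

def fitness(bits):
    """OneMax: the stand-in objective is simply the count of 1-bits."""
    return sum(bits)

def mutate(bits):
    """Flip each bit independently with probability MUT_RATE."""
    return [b ^ 1 if random.random() < MUT_RATE else b for b in bits]

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent onto the other."""
    point = random.randrange(1, BITS)
    return a[:point] + b[point:]

def evolve():
    # start from random designs: the human only defines the search space
    pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == BITS:        # perfect design found
            break
        parents = pop[:POP_SIZE // 2]      # truncation selection: keep fitter half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "of", BITS)
```

The point of the sketch is the division of labour the commenter describes: the human specifies only the representation and the scoring function, while selection, crossover, and mutation discover the solution without anyone designing it directly.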

It is also not true that evolution is unfalsifiable. The theory prescribes a strict order in which lifeforms can have evolved: roughly speaking, single cells came before multi-celled organisms, multi-celled organisms came before complex bodies, and complex bodies came before the mammalian brain. So if we were to find, say, a fossil of a mammal in rocks geologists identified as Precambrian, evolution would be falsified.