The Zeitgeist Movement and the Intelligence Explosion

To me, the Zeitgeist Movement is about promoting low entropy, rationality, cooperation, freedom, equality, sustainability, and unity; about decreasing existential risks for humanity; and ultimately about implementing a new socioeconomic system based on those virtues, and thus on what is reasonable from a technological and ecological, rather than a monetary, point of view.

Over the past years we have heard Jacque Fresco and Peter Joseph talk about how supercomputers will aid us, within the mentioned framework, in coming up with the best possible solutions, products and services given the currently available data - and this is exactly where I sense a flaw in their model of our future. While what they describe does not necessarily require a strong AI / artificial general intelligence (AGI) system, the desired functionality could only be obtained by implementing large portions of such a system. Utilizing a technology of this kind is certainly beneficial, but unfortunately (or rather fortunately) it is only one side of the story, as such a machine would most likely also be very good at further (recursively) improving itself.

"One day, we may design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence." http://intelligenceexplosion.com/
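The positive feedback loop described in the quote can be sketched as a toy model. This is purely illustrative - the function and parameter names (capability, improvement_rate) are invented for this sketch, and it makes no claim about the dynamics of real AGI systems:

```python
# Toy model of a recursive self-improvement loop: each design generation,
# the system engineers an improvement into its successor, and the size of
# that improvement grows with its current capability.

def self_improvement_steps(capability, improvement_rate, generations):
    """Return the capability trajectory over a number of design generations."""
    trajectory = [capability]
    for _ in range(generations):
        # The better the system already is, the larger the improvement it
        # can make to itself - a positive feedback loop.
        capability += improvement_rate * capability ** 2
        trajectory.append(capability)
    return trajectory

trajectory = self_improvement_steps(capability=1.0, improvement_rate=0.1, generations=10)
```

Under these assumptions the per-generation gains themselves keep growing, which is the qualitative point of the "intelligence explosion" argument: the process accelerates rather than merely accumulating.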

While the AGI research community has not agreed on a specific date or year for this intelligence explosion or singularity, recent polls have shown that many researchers expect it within the next 20 to 40 years. Forecasts of such complex endeavours have always been problematic and often wrong, but we now actually do have the mathematical models, roadmaps, blueprints and soon also the processing power required to create such systems. Please don't get me wrong - my point is not that you cannot implement a new socioeconomic framework before that time, but something rather more profound: such a system will most likely render most of the problems, issues and solutions the Zeitgeist Movement communicates today obsolete. One of the first things it might suggest is to abandon our inefficient biological substrate, and with it the need for food, rather than to build aquaponics.

Once humanity has the opportunity to transcend into something greater and the possibility to get rid of all the limitations and suffering related to our biological substrate, i.e., the inevitability of death, illnesses and injuries, the need for food, water, clothing, houses with cosy beds, physical transportation, etc., many people will choose to do so.

This would undoubtedly represent the biggest paradigm shift in the history of our species, and it could very well take place independently of and without (or shortly after) the implementation of a different socioeconomic framework. We would transition from "too irrational to approach" to "too advanced to consider" with respect to most of the problems the Zeitgeist Movement addresses, as well as the city designs "The Venus Project" advocates.

Furthermore, such a technology would allow us to solve most if not all of the other questions and problems for which the Zeitgeist Movement has no explicit answers: existential risks related to the research and use of ever more potent future technologies, the efficient colonization of space for increased redundancy, the Fermi paradox, the mysteries of the multiverse, and so on.

I do not mean to discourage anybody from what they are doing in the context of the movement, but rather to broaden the scope of the memes and thoughts within it. There are at least a couple of technologies on the horizon that will drastically change our everyday lives, but no other development will have the same impact as overcoming our biological limit of intelligence.

As many of you know, the second law of thermodynamics tells us that isolated systems tend toward disorder. This is true for gases, the universe, life and, in a similar way, also for societies. I would argue that the only force capable of actively keeping this process at bay is intelligence. The more intelligent an agent is, the more efficiently it can follow its utility function in order to reach its goals. A toddler, for example, is not very good at solving puzzles in an economical fashion - it might even hurt itself in the process or engage in irrational activities like crying. The child will need to acquire a higher degree of intelligence to solve more complex puzzles - i.e., to decrease the local entropy of more complex systems. The same could be true for humanity.
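For reference, the second law invoked here can be stated compactly; this is the standard textbook formulation, not something specific to the societal analogy:

```latex
% Second law of thermodynamics: for an isolated system, entropy S never decreases.
\frac{dS}{dt} \geq 0
% A subsystem (an organism, a society) can lower its own, local entropy
% only by exporting at least as much entropy to its environment:
\Delta S_{\text{local}} + \Delta S_{\text{environment}} \geq 0
```

The second inequality is what makes "decreasing local entropy" physically possible at all: order is not created for free, it is paid for elsewhere.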

The amount of irrelevant entropy or complexity we generate as a society, much of it inherent to our insane socioeconomic system, could already be too high for our biologically limited intelligence to reduce back to useful levels efficiently (e.g., by implementing a socioeconomic system based on rationality). I am not saying it can't be done; I am just pointing out that a higher degree of intelligence within our species would immensely facilitate the task.

"In 1966 Sagan and Shklovskii suggested that technological civilizations will either tend to destroy themselves within a century of developing interstellar communicative capability or master their self-destructive tendencies and survive for billion-year timescales. Self-annihilation may also be viewed in terms of thermodynamics: insofar as life is an ordered system that can sustain itself against the tendency to disorder, the "external transmission" or interstellar communicative phase may be the point at which the system becomes unstable and self-destructs." http://en.wikipedia.org/wiki/Fermi_paradox

Our brains were forged by biological evolution to survive in the African savannah - not to answer the most complex questions in the cosmos. However, we accumulated knowledge and managed to create tools and techniques that empowered us to leave the savannah playground behind, create new environments and play by different rules. Unfortunately, we have not yet managed to significantly improve our own processing power, and there is still a limit to the complexity our brains can efficiently deal with. Even the smartest person alive today, for example, will probably not exhaustively decode the human genome within a thousand years, let alone fully comprehend the ever more complex implications of future technologies and problems. The tools being deployed to help solve these kinds of problems (converting medicine into a truly information-based science, for example) already increasingly resemble a true artificial general intelligence.

Clearly, AGI systems or even small brain enhancements could turn out to be very valuable, not only for the task of fixing our socioeconomic system but also for our survival.

For billions of years our "ancestors" existed as nothing but clumps of cells in the ocean, and it would be presumptuous to think that our current manifestation of entropic evolution is the end of that story. In fact, we are just the first species with the ability to overcome the constraints of biological evolution itself, and we are probably only a few years away from drawing the upper half of the S-shaped time/intelligence curve of planet Earth.

Most certainly there will be innumerable problems, challenges and risks we will only be able to identify once we are substantially more intelligent than we are today - just as non-wonderland rabbits cannot reflect upon the consequences of the death of the sun with respect to the ambient temperature in their burrow.

While we all long for a more liveable world, we should also stay open-minded toward other profound developments taking place on this beautiful planet. Of course, nobody knows what the future might bring, and it might turn out that what we wish for today (city designs, transportation systems, aquaponics, certain forms of social interaction, etc.) will not make much sense 20 or 30 years from now. A caveman might have hoped for brighter bonfires, but most likely not for electric light, because there was no way for him to anticipate what the future would be like.

Now more than ever it is important that we continue to educate people in a holistic manner about all the important aspects of the future of humanity.

In his recent book "A Cosmist Manifesto", Ben Goertzel (founder of the OpenCog AGI project) suggests "joy", "choice" and (spiritual) "growth" as universal goals of life anywhere in the cosmos, and I suspect these are ultimately also the high-level virtues of the Zeitgeist Movement. The book can be read for free online, and I highly recommend it to anyone interested in Carl Sagan-like wisdom mixed with AGI topics and plenty of cosmic-scale philosophy.