The disruption in education is a topic I have written about at length. In essence, most education is just a transmission of commoditized information that, like every other information technology, should be declining in cost. However, the corrupt education industry has managed to burrow so deep into the emotions of its customers that a rising price for a product of stagnant (often declining) quality is not even questioned. For this reason, education is in a bubble that is already in the process of deflating.

What the MSCS at GATech accomplishes is four-fold:

Lowering the cost of the degree by almost an order of magnitude compared to the same degree at similarly ranked schools

Making the degree available without relocation to where the institution is physically located

Scaling the degree to an eventual intake of 10,000 students, versus the roughly 300 who can attend a traditional in-residence program at GATech

Establishing best practices for other departments at GATech, and other institutions, to implement in order to create a broader array of MOOC degree programs

Eventually, the sheer size of enrollment will rapidly lead to GATech becoming a dominant alumni community within computer science, forcing other institutions to catch up. When this competition lowers costs even further, we will see one of the most highly paid and future-proof professions become accessible at little or no cost. Contrasted with the immense cost of attending medical or law school, many borderline students will pursue computer science ahead of professions with large student debt burdens, creating a self-reinforcing cycle of ever-more computer science and ATOM propagation. The fact that one can enroll in the program from overseas will attract many students from countries that do not even have schools of GATech's caliber (i.e. most countries), generating local talent despite remote education.

Crucially, this is strong evidence of how the ATOM always finds new ways to expand itself, since the field most essential to feeding the ATOM, computer science, is the one that found a way to greatly increase the number of people destined to work in it, by attacking both cost thresholds and enrollment volumes. This is not a coincidence, because the ATOM always finds a way around anything that inhibits its growth, in this case restricted access to computer science training. Subsequent to this, the ATOM can increase the productivity of education even in less ATOM-crucial fields such as medicine, law, business, and K-12, since the greatly expanded size of the computer science profession will provide the entrepreneurs and expertise to make this happen. This is how the ATOM captures an ever-growing share of the economy into rapidly-deflating technological fundamentals.

As always, the ATOM AotM succeeds through reader suggestions, so feel free to suggest candidates. Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.

The polygon count in any graphics engine rises as roughly the square root of Moore's Law, so the number of polygons doubles every three years rather than every eighteen months.
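The arithmetic behind that doubling period can be sketched in a few lines. This is only a model, assuming the conventional 18-month doubling period for Moore's Law:

```python
# If transistor counts double every 1.5 years (Moore's Law), a quantity
# that grows as the square root of transistor count doubles every 3 years.

def transistor_multiple(years, doubling_years=1.5):
    """Transistor-count multiple after a given number of years."""
    return 2 ** (years / doubling_years)

def polygon_multiple(years):
    """Polygon-count multiple, modeled as the square root of the above."""
    return transistor_multiple(years) ** 0.5

# After 3 years, transistors quadruple while polygons merely double.
print(transistor_multiple(3))  # 4.0
print(polygon_multiple(3))     # 2.0
```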

Sometimes, pictures are worth thousands of words:

1976:

1986:

1996:

2006:

I distinctly remember when the 2006 image looked particularly impressive. But now, it no longer does. This inevitably brings us to...

2016 (an entire video is available, with some gameplay footage):

This series illustrates how progress, while not visible over one or two years, accumulates to much more over longer periods of time.

Now, extrapolating this trajectory of exponential progress, what will games bring us in 2020? Or 2026? Additionally, note that screen sizes, screen resolution, and immersion (e.g. VR goggles) have risen simultaneously.

The rate of technological change has been considerably slower than its trendline ever since the start of the 21st century. I wrote about this back in 2008, but at the time, I did not have quite as advanced techniques of observing and measuring the gap between the rate of change and the trendline, as I do now.

The dot-com bust coincided with a trend toward lower nominal GDP (since everyone wrongly focuses on 'real' GDP, which has less to do with real-world decisions than nominal GDP), and this has led to technological change, despite sporadic bursts, generally progressing at what is currently only 60-70% of its trendline rate. For this reason, many technologies that seemed just 10 years away in 2000 have still not arrived as of 2014. I will write much more on this at a later date.

But for now, two overdue technologies are finally plodding towards where many observers thought they would have been by 2010. Nonetheless, they are highly disruptive, and will do a great deal to change many industries and societies.

What is interesting about AI is how it can greatly expand the capabilities of those who know how to incorporate AI into their own intelligence. The greatest chess grandmaster of all time, Magnus Carlsen, became so by training with AI, and it is unclear whether he would have reached this level had he lived before such technologies were available.

The recursive learning aspect of AI means that an AI can quickly learn more from each new person who uses it, which makes it better still. One very obvious area where this could be used is in medicine. Currently, millions of MD general practitioners and pediatricians see billions of patients, mostly for relatively common diagnostics and treatments. If a single AI can learn enough from patient inputs to handle the most common diagnostic work of doctors, then that is a huge cost savings to patients and the entire healthcare system. Some doctors will see their employment prospects shrink, but the majority will be free to move up the chain and focus on more serious medical problems and questions.

Another obvious use is in the legal system. On one hand, while medicine is universal, the legal system of each country is different, and lawyers cannot cross borders. On the other hand, the US legal system relies heavily on precedent, and there is too much content for any one lawyer or judge to manage, even with legal databases. An AI can digest all laws and precedents and create a huge increase in efficiency once it learns enough. This can greatly reduce the backlog of cases in the court system, and free up judicial capacity for the most serious cases.

The third obvious application is in self-driving cars. Driving is an activity where the full range of possible traffic situations that can arise is not a particularly huge amount of data. Once an AI gets to the point where it analyzes every possible accident, near-accident, and reported pothole, it can easily make self-driving cars far safer than human driving. This is already being worked on at Google, and is only a few years away.

Get ready for AI in all its forms. While many jobs will be eliminated, this will be exceeded by the opportunity to add AI into your own life and your own capabilities. Make your IQ 40 points higher than it is when you need it most, and your memory thrice as deep - all will be possible in the 2020s for those who learn to use these capabilities. In fact, being able to augment your own marketable skills through the use of AI might become one of the most valuable skillsets for the post-2025 workforce.

Everyone knows that the Oculus Rift headset will be released to the consumer in 2015, and that most who have tried it have had their expectations exceeded. It supposedly corrects many of the problems that have dogged VR/AR developers for two decades, and has a high resolution.

But entertainment is not the only use for a VR/AR headset like the Oculus Rift, for the immersive medium that the device facilitates has tremendous potential for use in education, military training, and all types of product marketing. Entirely new processes and business models will emerge.

One word of caution, however. My decade of direct experience with running a large division of a consumer technology company compels me to advise you not to purchase any consumer technology product until it is in its third generation of consumer release, which is usually 24-48 months after initial release. The reliability and value for money are usually not compelling until the third generation. Do not mistake fractional generations (i.e. 'version 1.1', or 'iPhone 5, 5S, and 5C') for actual generations. The Oculus Rift may be an exception to this norm (as are many Apple products), but in general, don't be an early adopter on the consumer side.

Imagine, if you would, that the immersive movies and video games of the near future are not just fully actualized within the VR of the Oculus Rift, but that the characters of the video game adapt via connection to some AI, so that game characters far too intelligent to be overcome by hacks and cheat codes emerge.

Similarly, imagine if various forms of training and education are not just improved via VR, but augmented via AI, where the program learns exactly where the student is having a problem, and adapts the method accordingly, based on similar difficulties from prior students. Suffice it to say, both VR and AI will transform medicine from its very foundations. Some doctors will be able to greatly expand their practices, while others find themselves relegated to obsolescence.

Two overdue technologies are finally on our doorstep. Make the most of them, because if you don't, someone else surely will.

Words like 'disruption' and 'destruction' usually have negative meanings, and one may strain to find any good use for the terms. But today, the accelerating rate of change ensures that more technologies alter more aspects of life at an ever-quickening rate. A little-understood dimension of this is Joseph Schumpeter's concept of 'Creative Destruction', where the process of technological change topples existing norms and replaces them with new ones, often quite rapidly.

Technological diffusion was in a lull in 2008, as I pointed out at the time. But now, in 2010, I am happy to report that the lull has passed, and that the accelerating rate of change is rising back to the long-term exponential trendline (although it may not be fully back at the trendline until 2013, when people who have not been paying attention will be wondering why they were taken by surprise). The Impact of Computing continues to progress, infusing itself into a wider and wider swath of our lives, and speeding up the rate of change in complacently stagnant industries that never thought technology could affect them. Silicon Valley continues to be 'ground zero' for creative destruction, and complacent industries thousands of miles away could be toppled by someone working from their bedroom in Silicon Valley.

Just a few of the examples of creative destruction presently in process have been covered by prior articles here at The Futurist. These, along with others, are:

1) Video Conferencing is poised to disrupt not just airline and hotel industry revenues (which stand to lose tens of billions of dollars per year of business travel revenue), but the real-estate, medical, and aeronautical industries as well. Corporations will see substantial productivity gains from successful adoption of videoconferencing as a substitute for 50% or more of their travel expenses. Major mergers and acquisitions have happened in this sector in the last few months, and imminent price reductions will open the floodgates of diffusion. Skype provides a form of video telephony that is free of cost. This is described in detail in my August 2008 article on the subject, as well as in my earlier October 2006 introductory article.

2) Surface Computing, which I wrote about in July of 2008, has begun to emerge in a myriad of forms, from the handheld Apple iPad to the upcoming consumer version of the table-sized Microsoft Surface. This not only transforms human-computer interaction for the first time in decades, but the Apple 'Apps' ecosystem alters the utility of the Internet as well. All sizes between the blackboard and the iPad will soon be available, and by 2015, personal computing, and the Internet, will be quite different than they are today, with surfaces of varying sizes abundant in many homes.

3) The complete and total transformation of video games into the dominant form of home entertainment will be visible by 2012 through a combination of technologies such as realistic graphics, motion-responsive controllers, 3-D televisions, voice recognition, etc. The biggest casualty of this disruption will be television programming, which will struggle to retain viewers. Beyond this, the way in which humans process sensations of pleasure, excitement, and entertainment will irrevocably change. Thus, the way humans relate to each other will also change. I have written about this in April 2006, with a follow-up in July 2009.

4) The book-publishing industry has been stubbornly resistant to technology, as evidenced by their insistence as late as 2003 that manuscript queries be submitted by postal mail, and that a self-addressed stamped envelope be enclosed in which a reply can be sent. A completed manuscript would take a full 12 months to be printed and distributed, and the editors didn't even find this to be odd. Fortunately, two simultaneous disruptions are toppling this obsolete and unproductive industry from both ends. Print-on-demand services that greatly shorten the self-publishing process and entry-cost, such as iUniverse and Blurb, are now flexible and easy, while finished books can further avoid the paper-binding process altogether and be available to millions in e-book format for the Kindle and other e-readers. Books that cost, say, $15 to print, bind, and distribute now cost almost zero, enabling the author and reader to effectively split the money saved. When e-readers are eventually available for only $100, bookstores that sell paper books will be relegated to surviving mostly on gifts, coffee table books, and cafe revenues. This is a disruption that is happening quickly due to it being so overdue in the first place, resulting in a speedy 'catchup'. I wrote about this in more detail in December of 2009.

5) The automobile is undergoing multiple major transformations at once. Strong, light nanomaterials are entering the bodies of cars to increase fuel efficiency, engines are migrating to hybrid and electrical forms, sub-$5000 cars in India and China will lead to innovations that percolate up to lower the cost of traditional Western models, and the computational power engineered into the average car today leads to major feature jumps relative to models from just 5 years ago. The $25,000 car of 2020 will be superior to the $50,000 car of 2005 in every measurable way.

By 2016, consumer behavior will change to a mode where people consider it normal to 'upgrade' their perfectly functioning 6-year-old cars to get a newer model with better electronic features. This may seem odd, but people did not tend to replace fully functional television sets before they failed until the 2003 thin-TV disruption. The Impact of Computing pulls ever-more products into a rapid trajectory of improvement.

By 2018, self-driving cars will be readily available to the average US consumer, and will constitute a significant fraction of cars on the highway. This will revise existing assumptions about highway speeds and acceptable commute distances, and will further impede the real estate prices of expensive areas.

6) The Mobile Internet revolution, which I wrote about in October of 2009, is already transforming the way consumers in developed markets access the Internet. The bigger disruption is the entry of 1 billion new Internet users from emerging economies. While many of these people have relatively little education compared to Western Internet users, as the West shrinks as a fraction of total Internet mindshare, many Western cultural quirks that are seen as normal might be seen for the minority positions that they are. Thomas Friedman's concept of the world being 'flat' has not even begun to fully manifest.

8) Despite the efforts of Democrats to create a system unfavorable to advancement in healthcare and biotechnology, innovation continues on several fronts (partly due to Asian nations compensating for US shortfalls). One disruption is robotic surgery, where incisions can be narrow instead of the customary practice of making incisions large enough for the surgeon's hands, which in turn often necessitates sawing open the sternum, pelvis, etc. Intuitive Surgical is a company that already has a market cap of $14 Billion.

The biggest disruption, however, is that the globalization of technology is enabling medical tourism. In the US, about twice as much is spent on healthcare per person as in other OECD countries. If manufacturing and software work can be offshored, so can many aspects of healthcare, which is much more expensive than manufacturing or software engineering ever became in the US. This will correct inflated salaries in the healthcare sector, return the savings to consumers, and force innovations and systemic improvements in all OECD countries.

9) By all accounts, the cost of genome sequencing has plunged faster than any other technology, ever (it is less clear how this was accomplished, and whether the next 4 years will see a comparable drop). I tend to be skeptical about such eye-popping numbers, because if something became so much cheaper so quickly, yet it still didn't sweep over the world, then maybe it was not so valuable after all.

10) Social media such as Facebook, Twitter, etc. are mostly inundated with the trivialities of young people, or of older people who never matured, who think their audience is far larger than it is. However, these media have been used to horizontally organize interest groups and movements for political change that know no distance barriers or boundaries.

Blogs have shattered the hold that traditional media had on the release of information and opinions, and the revenues of newspapers, magazines, and network television have tumbled. The Tea Party movement in the US was started by a very small number of people, but has surged with a momentum that has reshaped the American landscape in just one year, and, irony of ironies, the Tea Party is spreading to overtaxed Britain. The next Iranian revolution will not only use Twitter and YouTube, but will have millions of collaborators outside of Iran, operating out of their own homes.

Aside from this effectively being a sizable 'tax cut' for the economy, this is particularly valuable as a complement to mobile Internet penetration in poorer regions, as the capacity to conduct web micro-transactions without fees will be an essential element of human development. The highly successful concept of micro-finance will be augmented when transaction fees that consumed a high percentage of these sub-$10 transactions are minimized.

So we see there are at least 12 ways in which our daily lives will shift considerably in just the next few years. The typical process of creative destruction results in X wealth being destroyed, and 2X wealth being created instead, but by different people. For each of the 12 disruptions listed, 'X' might be as much as $1 Trillion. As a result, the US economy might be mired in a long-term situation where vanishing industries force many laid-off workers to start in new industries at the entry level, for half of their previous compensation, even as new fortunes created by the new industries cause net wealth increases. The US could see a continuation of high unemployment combined with high productivity gains and corporate earnings growth for several years to come. Big paydays for entrepreneurs will make the headlines frequently, right alongside stories of people who have to accept permanent 50% pay reductions. This would be the 'new normal'.
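As a rough sketch of the arithmetic above (the $1 trillion per disruption is the article's upper-bound estimate, not a measured figure):

```python
# Creative destruction as described above: each disruption destroys X of
# incumbent wealth and creates 2X of new wealth, held by different people.

X_per_disruption = 1.0   # trillions of dollars (upper-bound estimate)
num_disruptions = 12

destroyed = num_disruptions * X_per_disruption      # 12.0 trillion
created = num_disruptions * 2 * X_per_disruption    # 24.0 trillion
net_new_wealth = created - destroyed                # 12.0 trillion net gain

print(destroyed, created, net_new_wealth)  # 12.0 24.0 12.0
```

The point of the model is that the net gain is large even though the losses are concentrated on identifiable incumbents while the gains accrue to different people.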

Income diversification is the golden rule of the early 21st century. Those that fail to create and maintain multiple streams of income are imperiling themselves. The hottest career one can embark on, which will never be obsolete, is that of the serial entrepreneur.

What a unique thing a book is. Made from a tree, it has a hundred or more flexible pages of written text, enabling it to hold a large amount of information in a very small volume. Before paper, clay tablets, sheepskin parchment, and papyrus were all used to store information with far less efficiency. Paper itself was once so rare and valuable that the Emperor of China had guards stationed around his paper possessions.

Before the invention of the printing press, books were written by hand, and few outside of monasteries knew how to read. There were only a few thousand books in all of Europe in the 14th century. Charlemagne himself took great effort to learn how to read, but never managed to learn how to write, which still put him ahead of most kings of the time, who were generally illiterate.

But with the invention of the printing press by Johannes Gutenberg in the mid-15th century, it became possible to make multiple copies of the same book, and before long, the number of books in Europe increased from thousands to millions.

Fast forward to the early 21st century, and books are still printed by the millions. Longtime readers of The Futurist know that I initially had written a book (2001-02), and sought to have it published the old-fashioned way. However, the publishing industry, and literary agents, were astonishingly low-tech. They did not use email, and required queries to be submitted via regular mail, with a self-addressed, stamped envelope included. So I had to pay postage in both directions, and wait several days for a round trip to hear their response. And this was just the literary agents. The actual publishing house, if it decided to accept the book, would still take 12 months to produce and distribute it even after the manuscript was complete. Even then, royalties would be 10-15% of the retail price. This prospect did not seem compelling to me, and I chose to serialize my book into this blog you see before you.

The refusal by the publishing industry to use email and other productivity-enhancing technologies as recently as 2003 kept their wages low. Editors always moaned that they worked 60 hours a week just to make $50,000 a year, the same as they made in 1970. My answer to them is that they have no basis to expect wage increases without increasing their productivity through technology.

In the meantime, self-publishing technologies emerged to bypass the traditional publishers' role as arbiters of what can become a book and what cannot. From Lulu to iUniverse to BookSmart, any individual can produce a book, with copies printed on demand. Individuals seeking to go it alone without being saddled with a huge upfront inventory and storage burden, or otherwise marketing to only a tiny audience, have flourished. But print-on-demand is not the true disruption - that was yet to come.

The Amazon Kindle launched in late 2007 at the high price of $400. Within 2 years, a substantially more advanced Kindle 2 was available for a much lower price of $260, alongside competing readers from several other companies. Many people feel that the appeal of holding a physical book in one's hands cannot be replaced by a display screen, and dismiss e-readers out of hand. The tune changes upon learning that the price of a book on an e-reader is just a third of what the paper form at a brick-and-mortar bookstore, with sales tax, would cost. Market research firm iSuppli estimates that 5 million e-readers were sold in 2009, and that another 12 million will sell in 2010. Amazon estimates that over one-third of its book sales are now through the Kindle, greatly displacing sales of paper books.

Imagine what happens when the Kindle and other e-readers cost only $100. Brick and mortar bookstores will consolidate to fewer premises, extract profits mainly from picture-heavy books and magazines, and step up their positioning as literary coffeehouses. Many employees and affiliates of the publishing industry will see their functions eliminated as part of the productivity gains. College students forced to pay $100 for a textbook produced in small quantities will now pay only $20 for an e-reader version. But even this is not the ultimate endgame of disruption.

Therein lies the crescendo of disruption. The Intel Reader is a $1500 device for the visually impaired, but will soon evolve into a technology that interfaces with Kindle-type e-readers and reads e-books aloud at 250 words/minute, from a full e-book library vastly larger than any traditional collection of audiobooks. A 90,000-word novel could be recited in just 6 hours, enabling a user to imbibe the whole book during a single coast-to-coast flight, even if the lights are dimmed. People could further choose to preserve their vision at home, devouring book after book with the lights out. As the technology advances further, speech technology will allow the user to select the voice in which he is read to, perhaps even his own.
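The listening-time figure follows directly from the stated reading rate:

```python
# A 90,000-word novel read aloud at 250 words per minute.

words = 90_000
words_per_minute = 250

minutes = words / words_per_minute   # 360 minutes
hours = minutes / 60                 # 6 hours

print(hours)  # 6.0
```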

Thus, without many people even noticing the murmurs, we can predict that the next 3 years will see the biggest transformation in book production and consumption since the days of Johannes Gutenberg. That is a true demonstration of both the Accelerating Rate of Change and The Impact of Computing.

Almost 3 years ago, in October of 2006, I first wrote about Cisco's Telepresence technology which had just launched at that time, and how video conferencing that was virtually indistinguishable from reality was eventually going to sharply increase the productivity and living standards of corporate employees (image : Cisco).

At that time, Cisco and Hewlett Packard both launched full-room systems that cost over $300,000 per room. Since then, there has not been any price drop from either company, which is unheard of for a system with components subject to Moore's Law rates of price declines. This indicates that market demand has been high enough for both Cisco and HP to sustain pricing power and improve margins. Smaller companies like LifeSize, Polycom, and Teleris have lower-end solutions for as little as $10,000, which have also been selling briskly, but have not yet dragged down the Cisco/HP price tier.

In a trend that could transform the way companies do business, Cisco Systems has slashed its annual travel budget by two-thirds — from $750 million to $240 million — by using similar conferencing technology to replace air travel and hotel bills for its vast workforce.

Likewise, Hewlett-Packard says it sliced 30 percent of its travel expenses from 2007 to 2008 — and expects even better results for 2009 — in large part because of its video conference technology.

If Cisco can chop its travel expenses by two-thirds, and save $500 million per year (which increases their annual profit by a not-insignificant 6-10%), then every other large corporation can save a similar magnitude of money. For corporations with very narrow operating margins, the savings could have a dramatic impact on operating earnings, and therefore stock price. The Fortune 500 alone (excluding airline and hotel companies) could collectively save $100 billion per year, in a wave set to begin immediately if either Cisco or HP drops the price of their solution, which may happen in a matter of months. We will soon see that for every $20 that corporations used to spend on air travel and hotels, they will instead be spending only $1 on videoconferencing expenses. This is a gigantic gain in enterprise productivity.
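The savings arithmetic above can be checked directly; note that the 20:1 substitution ratio is the article's own estimate, not a reported figure:

```python
# Cisco's reported annual travel budgets ($750M cut to $240M), plus the
# article's estimate that every $20 of former travel spend becomes
# roughly $1 of videoconferencing spend.

old_travel_budget = 750_000_000   # dollars per year
new_travel_budget = 240_000_000

annual_savings = old_travel_budget - new_travel_budget  # $510M (rounded to $500M above)
videoconf_spend = old_travel_budget / 20                # ~$37.5M at the 20:1 ratio

print(annual_savings, videoconf_spend)  # 510000000 37500000.0
```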

Needless to say, high-margin airline revenue from flights between major business centers (such as San Francisco-Taipei or New York-London) will be slashed, and airlines will have to consolidate to fewer flights, making schedules even less flexible for business travel and losing even more passengers. Hotels will have to consolidate, and taxis and restaurants in business hubs will suffer as well. But these are merely the most obvious of disruptions. What is even more interesting are the less obvious ripple effects that only manifest a few years later, which are:

1) Employee Time and Hassle: Anyone who has had to travel to another continent for a Mon-Fri workweek trip knows that the process of taking a taxi to the airport, waiting 2 hours at the airport, the flight itself, and the ride to the final destination consumes most of the weekend on either side of the trip. Most senior executives log over 200,000 miles of flight per year. This is a huge drag on personal time and quality of life. Travel on weekdays consumes productive time that the employer could benefit from, which for senior executives could be worth thousands of dollars per hour. Furthermore, in an era of superviruses, we have already seen SARS, bird flu, and swine flu as global pandemic threats within the last few years. A reduction of business travel will slow down the rate at which such viruses can spread across the globe and make quarantines less inconvenient for business (although tourist travel and remaining business travel are still carriers of this).

2) Real Estate Prices in Expensive Areas: Home prices in Manhattan and Silicon Valley are presently 4X or more higher than a home of the same square footage 80 miles away. By 2015, the single-screen solution that Cisco sells for $80,000 today may cost as little as $2000, and those from LifeSize and others may be even cheaper, so hosting meetings with colleagues from a home office might be as easy as running a conference call. A good portion of employees who have small children may find it possible to do their jobs in a manner that requires them to go to their corporate office only once or twice a week. If even 20% of employees choose to flee the high-cost housing near their offices, real estate prices in Manhattan and Silicon Valley will deflate significantly. While this is bad news for owners of real estate in such areas, it is excellent news for new entrants, who will see an increase in their purchasing power. Best of all, working families may be able to afford children that they presently cannot finance.

3) Passenger Aviation Technological Leap: Airlines and aircraft manufacturers have little recourse but to respond to these disruptions with innovations of their own, of which the only compelling possibility is to make each journey take far less time. There has been little improvement in the speed of passenger aircraft in the last 40 years. J. Storrs Hall at the Foresight Institute has an article up with a chart that shows the improvements, and then total flattening, of the speed of passenger airline travel. The costs of flying below Mach 1 versus above it differ by as much as 3X, which accounts for the sudden halt in speed gains just below the speed of sound after the early 1960s. However, the technologies of supersonic aircraft (which exist, of course, in military planes) are dropping in price, and it is possible that suborbital passenger flight could be available for the cost of a first-class ticket by 2025. The Ansari X Prize contest and SpaceShipTwo have already demonstrated early incarnations of what could scale up to larger planes. This will not reverse the video-conferencing trend, of course, but it will make airlines more competitive for those interactions that have to be in person.

So we are about to see a cascade of disruptions pulsate through the global economy. While in 2009, you may have no choice but to take a 14-hour flight (each way) to Asia, in 2025, the similar situation may present you with a choice between handling the meeting with the videoconferencing system in your home office vs. taking a 2-hour suborbital flight to Asia.

On April 1, 2006, I wrote a detailed article on the revolutionary changes that were to occur in the concept of home entertainment by 2012 (see Part I and Part II of the article). Now, in 2009, half of the time within the six-year span between the original article and the prediction has elapsed. Of course, given the exponential nature of progress, much more happens within the second half of any prediction horizon relative to the first half.

The prediction issued in 2006 was:

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply partaken in that it will reduce the time spent on watching network television to half of what it is (in 2006), by 2012.

The basis of the prediction was detailed in various points from the original article, which in combination would lead to the outcome of the prediction. The progress as of 2009 around these points is as follows :

1) Graphical realism continues to improve : The number of polygons rendered per square inch of screen is closely tied to The Impact of Computing, and can only rise steadily. The 'uncanny valley' is a hurdle that designers and animators will take a couple of years to overcome, but overcoming this barrier is inevitable as well.

2) Flat-screen HDTVs reach commodity prices : This has already happened, and prices will continue to drop so that by 2012, 50-inch sets with high resolution will be under $1000. A thin television is important, as it clears the room to allow more space for the movement of the player. A large size and high resolution are equally important, in order to create an immersive visual experience.

We are rapidly trending towards LED and Organic LED (OLED) technologies that will enable TVs to be less than one centimeter thick, with ultra-high resolution.

3) Speech and motion recognition as control technologies : When the original article was written on April 1, 2006, the Nintendo Wii was not yet available in the market. But as of June 2009, 50 million units of the Wii have been sold, and many of these customers did not own any game console prior to the Wii.

4) More people are migrating away from television, and towards games : Television viewership is plummeting, particularly among the under-50 audience, as projected in the original 2006 article. Fewer and fewer television programs of any quality are being produced, as creative talent continues to leak out of television network studios. At the same time, World of Warcraft has 11 million subscribers, and as previously mentioned, the Wii has 50 million units in circulation.

There are only so many hours of leisure available in a day, and Internet surfing, movies, and video games are all more compelling than the ever-declining quality of television offerings. Children have already moved away from television, and the trend will creep up the age scale.

5) Some people can earn money through games : There are an increasing number of ways in which avid players can earn real money from activities within a game. From the trading of items to the selling of characters, this market was estimated at over $1 billion in 2008, and is growing. Highly skilled players already earn thousands of dollars per year this way, and with more participants joining through the more advanced VR experiences described above, this will attract a group of people who are able to earn a full-time living through these VR worlds. This will become a viable form of entrepreneurship, just as eBay and Google Ads support entrepreneurial ecosystems today.

Taking all 5 of these points in combination, the original 2006 prediction appears to be on track. By 2012, hours spent on television will be half of what they were in 2006, with sports and major live events being the only forms of programming that retain their audience.

Overall, the prediction seems to be well on track. Disruptive technologies are in the pipeline, and there is plenty of time for each of these technologies to combine into unprecedented new applications. Let us see what the second half of the time interval, between now and 2012, delivers.

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months. But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years. To not internalize this more deeply is to miss financial opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society. Hence, it is time to update the first version of this all-important article that was written on February 21, 2006.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 12% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let us say that the trend growth of both industries is 12% a year for the next couple of decades.

So, we can conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78.

The Impact of Computing grows at a scorching pace of 78% a year.
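This compounding can be checked with a few lines of Python; it is just arithmetic on the figures quoted above, where the 59% figure is the annualized rate implied by an 18-month doubling:

```python
# Annual price-performance gain implied by an 18-month doubling time.
price_performance_growth = 2 ** (12 / 18) - 1  # ~0.587, i.e. roughly 59% a year

# Annual growth in dollars spent on such technology, per the text.
spending_growth = 0.12

# Combine the two rates to estimate the total 'Impact of Computing'.
combined = (1 + price_performance_growth) * (1 + spending_growth) - 1
print(f"{combined:.0%}")  # ~78% a year
```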

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves. Consider the most popular television shows of the 1970s, where the characters had all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. It is obvious what has changed during this period, to induce the economic gains.

In the 1970s, there was virtually no household product with a semiconductor component. In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop that much per year. In the early 1990s, many people began to have home PCs. For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power. In the late 1990s, the PC was joined by the Internet connection and the DVD player.

Now, I want everyone reading this to tally up all the items in their home that qualify as 'Impact of Computing' devices, which is any hardware device where a much more powerful/capacious version will be available for the same price in 2 years. You will be surprised at how many devices you now own that did not exist in the 80s or even the 90s.

Include : Actively used PCs, LCD/Plasma TVs and monitors, DVD players, game consoles, digital cameras, digital picture frames, home networking devices, laser printers, webcams, TiVos, Slingboxes, Kindles, robotic toys, every mobile phone, every iPod, and every USB flash drive. Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Do not include : Tube TVs, VCRs, film cameras, individual video games or DVDs, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year.

If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0-1

1980s : 1-2

1990s : 3-4

2000s : 6-12

2010s : 15-30

2020s : 40-80

The average home of 2020 will have multiple ultrathin TVs hung like paintings, robots for a variety of simple chores, VR-ready goggles and gloves for advanced gaming experiences, sensors and microchips embedded into clothing, $100 netbooks more powerful than $10,000 workstations of today, surface computers, 3-D printers, intelligent LED lightbulbs with motion-detecting sensors, cars with features that even luxury models of today don't have, and at least 15 nodes on a home network that manages the entertainment, security, and energy infrastructure of the home simultaneously.

At the industrial level, the changes are even greater. Just as telephony, photography, video, and audio before them, we will see medicine, energy, and manufacturing industries become information technology industries, and thus set to advance at the rate of the Impact of Computing. The economic impact of this is staggering. Refer to the Future Timeline for Economics, particularly the 2014, 2024, and 2034 entries. Deflation has traditionally been a bad thing, but the Impact of Computing has introduced a second form of deflation. A good one.

It is true that from 2001 to 2009, the US economy has actually shrunk in size, if measured in oil, gold, or Euros. To that, I counter that every major economy in the world, including the US, has grown tremendously if measured in Gigabytes of RAM, TeraBytes of storage, or MIPS of processing power, all of which have fallen in price by about 40X during this period. One merely has to select any suitable product, such as a 42-inch plasma TV in the chart, to see how quickly purchasing power has risen. What took 500 hours of median wages to purchase in 2002 now takes just 40 hours of median wages in 2009. Pessimists counter that computing is too small a part of the economy for this to be a significant prosperity elevator. But let's see how much of the global economy is devoted to computing relative to oil (let alone gold).
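The purchasing-power figures just quoted can be annualized with a quick sketch; the 500-hour and 40-hour numbers are from the text, and the seven-year window (2002-2009) is the stated interval:

```python
# Wage-hours of median income needed to buy a 42-inch plasma TV, per the text.
hours_2002 = 500
hours_2009 = 40
years = 7

# Implied average annual decline in the wage-hour cost of the same TV.
annual_decline = 1 - (hours_2009 / hours_2002) ** (1 / years)
print(f"{annual_decline:.0%}")  # roughly 30% a year
```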

Oil at $50/barrel amounts to about $1500 Billion per year out of global GDP. When the price of oil rises, demand falls, and we have not seen oil demand sustain itself to the extent of elevating annual consumption to more than $2000 Billion per year.

Semiconductors are a $250 Billion industry and storage is a $200 Billion industry. Software, photonics, and biotechnology are deflationary in the same way as semiconductors and storage, and these three industries combined are another $500 Billion in revenue, but their rate of deflation is less clear, so let's take just half of this number ($250 Billion) as suitable for this calculation.

So $250B + $200B + $250B = $700 Billion that is already deflationary under the Impact of Computing. This is about 1.5% of world GDP, and is a little under half the size of global oil revenues.
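A back-of-the-envelope check of this arithmetic, taking a world GDP of roughly $47 trillion (2008, nominal) as my assumption; the sector revenues are the figures quoted above, in billions of dollars:

```python
# Sector revenues already deflating under the Impact of Computing ($ Billions).
sectors = {
    "semiconductors": 250,
    "storage": 200,
    "software/photonics/biotech (half counted)": 250,
}
deflationary_total = sum(sectors.values())  # $700 Billion

world_gdp = 47_000   # assumed world GDP in $B (~$47 trillion, 2008)
oil_revenue = 1_500  # global oil spend at $50/barrel, per the text

print(f"share of world GDP : {deflationary_total / world_gdp:.1%}")
print(f"vs. oil revenues   : {deflationary_total / oil_revenue:.0%}")
```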

The impact is certainly not small, and since the growth rate of these sectors is higher than that of the broader economy, what about when it becomes 3% of world GDP? 5%? Will this force of good deflation not exert influence on every set of economic data? At the moment, it is all but impossible to get major economics bloggers to even acknowledge this growing force. But over time, it will be accepted as a limitless well of rising prosperity.

12% more dollars spent each year, and each dollar buys 59% more power each year. Combine the two and the impact is 78% more every year.

The time has thus come for making specific predictions about the details of future economic advancement. I hereby present a speculative future timeline of economic events and milestones, which is a sibling article to Economic Growth is Exponential and Accelerating, v2.0.

2008-09 : A severe US recession and global slowdown still results in global PPP economic growth staying positive in calendar 2008 and 2009. Negative growth for world GDP, which has not happened since 1973, is not a serious possibility, even though the US and Europe experience GDP contraction in this period. The world GDP growth rate trendline resides at growth of 4.5% a year.

2010 : World GDP growth rebounds strongly to 5% a year. More than 3 billion people now live in emerging economies growing at over 6% a year. More than 80 countries, including China, have achieved a Human Development Index of 0.800 or higher, classifying them as developed countries.

2012 : Over 2 billion people have access to unlimited broadband Internet service at speeds greater than 1 mbps, a majority of them receiving it through their wireless phone/handheld device.

2013 : Many single-family homes in the US, particularly in California, are still priced below the levels they reached at the peak in 2006, as predicted in early 2006 on The Futurist. If one adjusts for cost of capital over this period, many California homes have corrected their valuations by as much as 50%.

2014 : The positive deflationary economic forces introduced by the Impact of Computing are now large and pervasive enough to generate mainstream attention. The semiconductor and storage industries combined exceed $800 Billion in size, up from $450 Billion in 2008. The typical US household is now spending $2500 a year on semiconductors, storage, and other items with rapidly deflating prices per fixed performance. Of course, the items purchased for $2500 in 2014 can be purchased for $1600 in 2015, $1000 in 2016, $600 in 2017, etc.
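The quoted sequence implies that the same basket costs about 36% less each year, i.e. a 0.64x annual multiplier (my inference from the $2500, $1600, $1000, $600 figures; the text rounds the later values). A minimal sketch:

```python
def deflated_prices(start_price, years, annual_multiplier=0.64):
    """Price of the same basket of computing-driven goods over time."""
    prices = [start_price]
    for _ in range(years):
        prices.append(round(prices[-1] * annual_multiplier))
    return prices

print(deflated_prices(2500, 3))  # [2500, 1600, 1024, 655]
```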

2015 : As predicted in early 2006 on The Futurist, a 4-door sedan with a 240 hp engine, yet costing only 5 cents/mile to operate (the equivalent of 60 mpg of gasoline), is widely available for $35,000 (which is within the middle-class price band by 2015). This is the result of combined advances in energy, lighter nanomaterials, and computerized systems.

2018 : Among new cars sold, gasoline-only vehicles are now a minority. Millions of vehicles are electrically charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $3000 a year in 2008. Some electrical vehicles cost as little as 1 cent/mile to operate.

2019 : The Dow Jones Industrial Average surpasses 25,000. The Nasdaq exceeds 5000, finally surpassing the record set 19 years prior in early 2000.

2020 : World GDP per capita surpasses $15,000 in 2008 dollars (up from $8000 in 2008). Over 100 of the world's nations have achieved a Human Development Index of 0.800 or higher, with the only major concentrations of poverty being in Africa and South Asia. The basic necessities of food, clothing, literacy, electricity, and shelter are available to over 90% of the human race.

Trade between India and the US touches $400 Billion a year, up from only $32 Billion in 2006.

2022 : Several million people worldwide are each earning over $50,000 a year through web-based activities. These activities include blogging, barter trading, video production, web-based retail ventures, and economic activities within virtual worlds. Some of these people are under the age of 16. Headlines will be made when a child known to be perpetually glued to his video game one day surprises his parents by disclosing that he has accumulated a legitimate fortune of more than $1 million.

2024 : The typical US household is now spending over $5000 a year on products and services that are affected by the Impact of Computing, where value received per dollar spent rises dramatically each year. These include electronic, biotechnology, software, and nanotechnology products. Even cars are sometimes 'upgraded' in a PC-like manner in order to receive better technology, long before they experience mechanical failure. Of course, the products and services purchased for this $5000 in 2024 can be obtained for $3200 in 2025, $2000 in 2026, $1300 in 2027, etc.

2025 : The printing of solid objects through 3-D printers is inexpensive enough for such printers to be common in upper-middle-class homes. This disrupts the economics of manufacturing, and revamps most manufacturing business models.

2027 : 90% of humans are now living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960). Many Asian nations have achieved per capita income parity with Europe. Only Africa contains a major concentration of poverty.

2030 : The United States still has the largest nominal GDP among the world's nations, in excess of $50 Trillion in 2030 dollars. China's economy is a close second to the US in size. No other country surpasses even half the size of either of the twin giants.

The world GDP growth rate trendline has now surpassed 5% a year. As the per capita gap has narrowed since 2000, the US now grows at 4% a year, while China grows at 6% a year.

10,000 billionaires now exist worldwide, causing the term to lose some exclusivity.

2032 : At least 2 TeraWatts of photovoltaic capacity is in operation worldwide, generating 8% of all energy consumed by society. Vast solar farms covering several square miles are in operation in North Africa, the Middle East, India, and Australia. These farms are visible from space.

2034 : The typical US household is now spending over $10,000 a year on products and services that are affected by the Impact of Computing. These include electronic, biotech, software, and nanotechnology products. Of course, the products and services purchased for this $10,000 in 2034 can be obtained for $6400 in 2035, $4000 in 2036, $2500 in 2037, etc.

2040 : Rapidly accelerating GDP growth is creating astonishing abundance that was unimaginable at the start of the 21st century. Inequality continues to be high, but this is balanced by the fact that many individual fortunes are created in extremely short times. The basic tools to produce wealth are available to at least 80% of all humans.

Tourism into space is affordable for upper middle class people, and is widely undertaken.

________________________________________________________

I believe that this timeline represents a median forecast for economic growth from many major sources, and will be perceived as too optimistic or too pessimistic by an equal number of readers. Let's see how closely reality tracks this timeline.

I am of the belief that we will experience a Technological Singularity around 2050 or shortly thereafter. Many top futurists arrive at predicted dates between 2045 and 2075. The bulk of Singularity debate revolves not so much around 'if' or even 'when', but rather 'what' the Singularity will appear like, and whether it will be positive or negative for humanity.

To be clear, some singularities have already happened. To non-human creatures, a technological singularity that overhauls their ecosystem already happened over the course of the 20th century. Domestic dogs and cats are immersed in a singularity where most of their surroundings surpass their comprehension. Even many humans have experienced a singularity - elderly people in poorer nations make no use of any of the major technologies of the last 20 years, except possibly the cellular phone. However, the Singularity that I am talking about has to be one that affects all humans, and the entire global economy, rather than just humans that are marginal participants in the economy. By definition, the real Technological Singularity has to be a 'disruption in the fabric of humanity'.

In the period between 2008 and 2050, there are several milestones one can watch for in order to see if the path to a possible Singularity is still being followed. Each of these signifies a previously scarce resource becoming almost infinitely abundant (much like paper today, which was a rare and precious treasure centuries ago), or a dramatic expansion in human experience (such as the telephone, airplane, and Internet have been) to the extent that it can even be called a transhuman experience. The following are a random selection of milestones with their anticipated dates.

Each of these milestones, while not causing a Singularity by themselves, increase the probability of a true Technological Singularity, with the event horizon pulled in closer to that date. Or, the path taken to each of these milestones may give rise to new questions and metrics altogether. We must watch for each of these events, and update our predictions for the 'when' and 'what' of the Singularity accordingly.

Computing, once seamlessly synonymous with technological progress, has not grabbed headlines in recent memory. We have not had a 'killer app' in computing in the last few years. Maybe you can count Wi-Fi access for laptops in 2002-03 as the most recent one, but if that is not a sufficiently important innovation, we then have to go all the way back to the graphical World Wide Web browser in 1995. Before that, the killer app was Microsoft Office for Windows in 1990. Clearly, such shifts appear to occur at intervals of 5-8 years.

I can, without hesitation, nominate surface computing as the next great generational augmentation in the computing experience. This is because surface computing entirely transforms the human-computer interaction in a manner that is more suitable for the human body than the mouse/keyboard model is. In accordance with the Impact of Computing, rapid drops in the costs of both high-definition displays and tactile sensors are set to bring this experience to consumers by the end of this decade.

As for early applications of surface computing, a fertile imagination can yield many prospects. For example, a restaurant table may feature a surface that displays the menu, enabling patrons to order simply by touching the picture of the item they choose. The information is sent to the kitchen, and this saves time and reduces the number of waiters needed by the restaurant (as waiters would only be needed to deliver the completed orders). Applications for classroom and video game settings also readily present themselves.

Watch for demonstrations of various surface computers at your local electronics store, and keep an eye on the price drops. After seeing a demonstration, do share at what price point you might purchase one. The next generation of computing beckons.

Most of these will be available to average consumers within the next 7-10 years, and will extend lifespans while dramatically lowering healthcare costs (mostly through enhanced capabilities of early detection and prevention, as well as shorter recovery times for patients). This is consistent with my expectation that bionanotechnology is quietly moving along established trendlines despite escaping the notice of most people. These technologies will also move us closer to Actuarial Escape Velocity, where the rate of lifespan increase exceeds that of real time.

Another angle that these technologies affect is the globalization of healthcare. We have previously noted the success of 'medical tourism' among US and European patients seeking massive discounts on expensive procedures. These technologies, given their potential to lower costs and recovery times, are even more suitable for medical offshoring than their predecessors, and thus could further enhance the competitive position of the countries that are quicker to adopt them. If the US is at the forefront of using the 'bloodstream bot' to unclog arteries, the US thus once again becomes more attractive than getting a traditional procedure done in India or Thailand. But if the lower-cost destinations also adopt these technologies faster than the heavily regulated US, then even more revenue migrates overseas and the US healthcare sector would suffer further deserved blows, and be under even greater pressure to conform to market forces. As technology once again acts as the great leveler, another spark of hope for reforming the dysfunctional US healthcare sector has emerged.

These technologies are near enough to availability that you may even consider showing this article to your doctor, or writing a letter to your HMO. Plant the seed into their minds...

There is minor but growing evidence that the rate of technological change has moderated in this decade. Whether this is a temporary trough that merely precedes a return to the trendline, or whether the trendline itself was greatly overestimated, will not be decisively known for some years. In this article, I will attempt to examine some datapoints to determine whether we are at, or behind, where we would expect to be in 2008.

This brings us to the chart below from Ray Kurzweil (from Wikipedia) :

This chart appears prominently in many of Kurzweil's writings, and brilliantly conveys the concept of how each major consumer technology reached the mainstream (as defined by a 25% US household penetration rate) in successively shorter times. The horizontal axis represents the year in which the technology was invented.

This chart was produced some years ago, and therein lies the problem. If we were to update the chart to the present day, which technology would be the next addition after 'The Web'?

Many technologies can claim to be the ones to occupy the next position on the chart. iPods and other portable mp3 players, various Web 2.0 applications like social networking, and flat-panel TVs all reached the 25% level of mainstream adoption in under 6 years, in accordance with an extrapolation of the chart through 2008. However, it is debatable whether any of these are 'revolutionary' technologies like the ones on the chart, rather than merely increments above incumbent predecessors. The iPod merely improved upon the capacity and flexibility of the walkman, the plasma TV merely consumed less space than the tube TV, etc. The technologies on the chart are all infrastructures of some sort, and it is clear that after 'The Web', we are challenged to find a suitable candidate for the next entry.

Thus, we either are on the brink of some overdue technology emerging to reach 25% penetration of US households in 6 years or less, or the rapid diffusion of the Internet truly was a historical anomaly, and for the period from 2001 to 2008 we were merely correcting back to a trendline of much slower diffusion (where it takes 10-15 years for a technology to reach 25% penetration in the US). One of the two has to be true, at least for an affluent society like the US.

This brings us to the third and final dimension of possibility. This being the decade of globalization, with globalization itself being an expected natural progression of technological change, perhaps a US-centric chart itself was inappropriate to begin with. Landline telephones and television sets still do not have 25% penetration in countries like India, but mobile phones jumped from zero to 10% penetration in under 7 years. The oft-cited 'leapfrogging' of technologies that developing nations can benefit from is a crucial piece of technological diffusion, which would thus show a much smaller interval between 'telephones' and 'mobile phones' than in the US-based chart above. Perhaps '10% Worldwide Household Penetration' is a more suitable measure than '25% US Household Penetration', which would then possibly show that there is no lull in worldwide technological adoption at all.

I may try to put together this new worldwide chart. The horizontal axis would not change, but the placement of datapoints along the vertical axis would. Perhaps Kurzweil merely has to break out of US-centricity in order to strengthen his case and rebut most of his critics.

The Year in Nanotechnology : Stanford University research into nanowires that dramatically increase battery capacity is the most promising breakthrough of 2007, in any discipline. Think 30-hour laptop batteries.

Most of the innovations in the articles above are in the laboratory phase, which means that about half will never progress enough to make it to market, and those that do will take 5 to 15 years to directly affect the lives of average people (remember that the laboratory-to-market transition period itself continues to shorten in most fields). But each one of these breakthroughs has world-changing potential, and the fact that so many fields are advancing simultaneously guarantees a massive new wave of improvement to human lives.

This scorching pace of innovation is entirely predictable, however. To internalize the true rate of technological progress, one merely needs to appreciate :

We are fortunate to live in an age when a single calendar year will invariably yield multiple technological breakthroughs, the details of which are easily accessible to laypeople. In the 18th century, entire decades would pass without any observable technological improvements, and people knew that their children would experience a lifestyle identical to their own. Today, we know with certainty that our lives in 2008 will have slight but distinct and numerous improvements in technological usage over 2007, just as 2007 was an improvement over 2006.

Now, Adobe Systems, the company famous for tools like Photoshop and Acrobat Reader, is developing software that could bring the power of a Hollywood animation studio to the average computer and let users render high-quality graphics in real time. Such software could be useful for displaying ever-more-realistic computer games on PCs and for allowing the average computer user to design complex and lifelike animations.

The Impact of Computing mandates that any computationally driven product or capability exponentially drops in cost by 30% to 60% every year. Each film that was considered to be a breakthrough in computer-derived special effects, from Toy Story to the Lord of the Rings Trilogy, used technology that continues to become commoditized. What was groundbreaking from Pixar in 1995 is today affordable to second-tier video game companies designing games on $2 million budgets, and television programs intended for syndication and cable. Before long, the prices inevitably reach the consumer.

Needless to say, this greatly enhances the reach of the nascent cottage industry of Machinima, and eventually will lead to small groups of 2-3 people producing full-length animated feature films that can be distributed on the Internet. I have written about this in detail in my article from 4/1/2006, The Next Big Thing in Entertainment, particularly in Part II. This development from Adobe is one of the necessary steps towards realizing the vision that I outlined in the original article. Machinima will be to Hollywood what the blogosphere became to Big Media.

The Lifeboat Foundation has a special report detailing their view of the top ten transhumanist technologies that have some probability of 25 to 30-year availability. Transhumanism is a movement devoted to using technologies to transcend biology and enhance human capabilities.

I am going to list out each of the ten technologies described in the report, provide my own assessment of high, medium, or low probability of mass-market availability by a given time horizon, and link to prior articles written on The Futurist about the subject.

10. Cryonics : 2025 - Low, 2050 - Moderate

I can see the value in someone who is severely maimed or crippled opting to freeze themselves until better technologies become available for full restoration. But outside of that, the problem with cryonics is that very few young people will opt to risk missing their present lives to go into freezing, and elderly people can only benefit after revival when or if age-reversal technologies become available. Since going into cryonic freezing requires someone else to decide when to revive you, and any cryonic 'will' may not anticipate numerous future variables that could complicate execution of your instructions, this is a bit too risky, even if it were possible.

The good news here is that gene sequencing techniques continue to become faster due to the computers used in the process themselves benefiting from Moore's Law. In the late 1980s, it was thought that the human genome would take decades to sequence. It ended up taking only years by the late 1990s, and today, would take only months. Soon, it will be cost-effective for every middle-class person to get their own personal genome sequenced, and get customized medicines made just for them.

While this is a staple premise of most science fiction, I do not think space colonization will ever take the form that is popularly imagined. Technology #2 on this list, mind uploading, and technology #5, self-replicating robots, will probably appear sooner than any capability to build cities on Mars. Thus, a large spaceship and human crew becomes far less efficient than entire human minds loaded into tiny or even microscopic robots that can self-replicate. A human body may never visit another star system, but copies of human minds could very well do so.

Artificial limbs, ears, and organs are already available, and continue to improve. Artificial and enhanced muscle, skin, and eyes are not far behind.

5. Autonomous Self-Replicating Robots : 2030 - Moderate

This is a technology that is frightening, due to the ease with which humans could be quickly driven to extinction through a malfunction that replicates rogue robots. Assuming a disaster does not occur, this is the most practical means of space exploration and colonization, particularly if the robots contain uploads of human minds, as per #2.

From the Great Wall of China in ancient times to Dubai's Palm Islands today, man-made structures are already visible from space. But to achieve transhumanism, the same must be done in space. Eventually, elevators extending hundreds of miles into space, space stations much larger than the current ISS (240 feet), and vast orbital solar reflectors will be built. But, as stated in item #7, I don't think true megascale projects (over 1000 km in width) will happen before other transhumanist technologies render the need for them obsolete.

2. Mind Uploading : 2050 - Moderate

This is what I believe to be the most important technology on this list. Today, when a person's hardware dies, their software in the form of their thoughts, memories, and humor, necessarily must also die. This is impractical in a world where software files in the form of video, music, spreadsheets, documents, etc. can be copied to an indefinite number of hardware objects.

If human thoughts can reside on a substrate other than human brain matter, then the 'files' can be backed up. That is all there is to it.

1. Artificial General Intelligence : 2050 - Moderate

This is too vast of a subject to discuss here. Some evidence of progress appears in unexpected places, such as when, in 1997, IBM's Deep Blue defeated Garry Kasparov in a chess match. Ray Kurzweil believes that an artificial intelligence will pass the Turing Test (a bellwether test of AI) by 2029. We will have to wait and see, but expect the unexpected, when you least expect it.

A robotic insect, similar in size and weight to a wasp or hornet, has successfully taken flight at Harvard University (article and photo at MIT Technology Review). This is an amazing breakthrough, because just a couple of years ago, such robots were pigeon-sized, and thus far less useful for detailed military and police surveillance.

At the moment, the flight path is still only vertical, and the power source is external. Further advances in the carbon polymer materials used in this robot will reduce weight further, enabling greater flight capabilities. Additional robotics advances will reduce size down to housefly or even mosquito dimensions. Technological improvements in batteries will provide on-board power with enough flight time to be useful. All of this will take 5-8 years to accomplish. After that, it may take another 3 years to achieve the capabilities for mass-production. Even then, the price may be greater than $10,000 per unit.

Needless to say, by 2017-2020, this may be a very important military technology, where thousands of such insects are released across a country or region known to contain terrorists. They could land on branches, light fixtures, and window panes, sending information to one another as well as to military intelligence. Further into the future, if these are ever available for private use, then matters could become quite complicated.

The World Wide Web, after just 12 years in mainstream use, has become an infrastructure accessed by hundreds of millions of people every day, and the medium through which trillions of dollars a year are transacted. In this short period, the Web has already been through a boom, a crippling bust, and a renewal to full grandeur in the modern era of 'Web 2.0'.

But imagine, if you could, a Web in which web sites are not just readable in human languages, but in which information is understandable by software to the extent that computers themselves would be able to perform the task of sharing and combining information. In other words, a Web in which machines can interpret the Web more readily, in order to make it more useful for humans. This vision for a future Internet is known as the Semantic Web.

Some are already referring to the Semantic Web as 'Web 3.0'. This type of labeling is a reliable litmus test of a technology falling into the clutches of emotional hype, and thus caution is warranted in assessing the true impact of it. I believe that the true impact of the Semantic Web will not manifest itself until 2012 or later. Nonetheless, the Semantic Web could do for scientific research what email did for postal correspondence and what MapQuest did for finding directions - eliminate almost all of the time wasted in the exchange of information.

BusinessWeek has a slideshow revealing new electronic devices that a consumer could use to enhance (or complicate) certain aspects of daily life. Among these is the very promising Sunlight Direct System, which I discussed back on September 5, 2006. Others, such as the Lawnbott ($2500), cost far more than the low-tech solution of hiring people to mow your lawn for the entire expected life of the device, ensuring that mass-market adoption is at least 4-5 years away.

All of this is a very strong and predictable manifestation of The Impact of Computing, which mandates that entirely new categories of consumer electronics appear at regular intervals, and that they subsequently become cheaper yet more powerful at a consistent rate each year. Let us observe each of these functional categories, and the rate of price declines/feature enhancements that they experience.

I stumbled upon something while reading the Asian Development Bank's report on the world economy. No big surprises here, but one tiny chart stood out. The column chart of worldwide and Asian semiconductor sales from 2001 to 2006 indicates that while Asia accounted for just one third of semiconductor sales in 2001, it accounts for half today.

BusinessWeek has an article and slideshow on the rapidly diversifying applications of advanced VR technology.

This is a subject that has been discussed heavily here on The Futurist, through articles like The Next Big Thing in Entertainment, Parts I, II, and III, as well as Virtual Touch Brings VR Closer. The coverage of this topic by BusinessWeek is a necessary and tantalizing step towards the creation of mass-market products and technologies that will enhance productivity, defense, healthcare, and entertainment.

Technologically, these applications and systems are heavily encapsulated within The Impact of Computing with very few components that are not exponentially improving. Thus, cost-performance improvements of 30-58% a year are guaranteed, and will result in stunningly compelling experiences as soon as 2012.

To the extent that many people who seek reading material about futurism are primarily driven by the eagerness to experience 'new types of fun', this area, more than any other discussed here, will deliver the majority of new fun that consumers can experience in coming years.

Most of the innovations in the articles above are in the laboratory phase, which means that about half will never progress enough to make it to market, and those that do will take 5 to 15 years to directly affect the lives of average people (remember that the laboratory-to-market transition period itself continues to shorten in most fields). But each one of these breakthroughs has world-changing potential, and the fact that so many fields are advancing simultaneously guarantees a massive new wave of improvement to human lives.

This scorching pace of innovation is entirely predictable, however. To internalize the true rate of technological progress, one merely needs to appreciate :

We are fortunate to live in an age when a single calendar year will invariably yield multiple technological breakthroughs, the details of which are easily accessible to laypeople. In the 18th century, entire decades would pass without any observable technological improvements, and people knew that their children would experience a lifestyle identical to their own. Today, we know with certainty that our lives in 2007 will have slight but distinct and numerous improvements in technological usage over 2006.

On August 22, 2006, I wrote an article titled Terrorism, Oil, Globalization, and the Impact of Computing. The article described how four seemingly unrelated forces had emerged in the last few years to create a quadruple inflection point that unleashed massive new market dynamics. Take a moment to go back and read that article.

While the life blood of business is the firm handshake, face-to-face meeting, and slick presentation, the quadruple inflection point above might just permanently elevate the bar that determines which meetings warrant the risks, costs, and hassle of business travel when there are technologies that can enable many of the same interactions. While these technologies are only poor substitutes now, improved display quality, bandwidth, and software capabilities will greatly increase their utility.

While the optimal experience requires both parties to have the system, limiting the opportunities for its use in the near term, adoption by more corporations will make it a routine practice in an increasing number of corporate settings. Corporations will be able to save a decent portion of the time and cost of employee business travel, and redeploy those savings into R&D. Cisco itself expects to reduce business travel by 20%, saving $100 million per year. If each of the Fortune Global 500 corporations adopted it, they would save anywhere from $20 to $80 billion per year.

The full system with three screens, cameras, and high-speed networking equipment costs $300,000. However, almost all of the components of the system are full member technologies of the Impact of Computing, and hence the same system is bound to cost under $50,000 by 2011, and perhaps much less. Cisco expects the market to reach $1 billion in annual revenue by 2011, which would amount to 20,000 units per year. Eventually, prices for a single screen version (currently $80,000) might reach just $2000 by 2015, making them common household items, allowing more people to work from home and untethering them from living in expensive geographies against their preference.
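As a rough sketch of how a $300,000 system could fall under $50,000 by 2011, assume (illustratively) an annual price decline of 30-40%, broadly in line with the cost-performance rates cited elsewhere in these articles; the exact rate is an assumption, not a figure from Cisco:

```python
def projected_price(price, annual_decline, years):
    """Price after compounding an assumed annual decline."""
    return price * (1 - annual_decline) ** years

# Illustrative decline rates only (assumed, not sourced).
for rate in (0.30, 0.40):
    print(round(projected_price(300_000, rate, 5)))   # 2006 -> 2011
# -> roughly $50,400 at a 30%/yr decline, $23,300 at 40%/yr
```

Either assumed rate lands the 2011 price at or below the $50,000 figure, consistent with the "perhaps much less" caveat above.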

Two substantial innovations from Google and Cisco have emerged in just two months since the original article. It is fascinating to watch the modern innovation economy adapt so rapidly to a new market need. There will be much more to marvel over in the coming months and years.

3) At the same time, globalization has increased the volume and variety of business conducted between the US and Asia, as well as between other nations. More jobs involve international interaction, and frequent overseas travel. This demand directly clashes with the forced realities of items 1) and 2), creating a market demand for something to ease this conflicting pressure, which leads us to...

4) The Impact of Computing, which estimates that the increasing power and number of computing devices effectively leads to a combined gross impact that increases by approximately 78% a year. One manifestation of the Impact is the development of technologies like Webex, high-definition video conferencing over flat-panel displays, Skype, Google Earth, Wikimapia, etc. These are not only tools to empower individuals with capabilities that did not even exist a few years ago, but these capabilities are almost free. Furthermore, they exhibit noticeable improvements every year, rapidly increasing their popularity.

While the life blood of business is the firm handshake, face-to-face meeting, and slick presentation, the quadruple inflection point above might just permanently elevate the bar that determines which meetings warrant the risks, costs, and hassle of business travel when there are technologies that can enable many of the same interactions. While these technologies are only poor substitutes now, improved display quality, bandwidth, and software capabilities will greatly increase their utility.

The same can even apply to tourism. Google Earth and WikiMapia are very limited substitutes for traveling in person to a vacation locale. However, as these technologies continue to layer more detail onto the simulated Earth, combined with millions of attached photos, movies, and blogs inserted by readers into associated locations, a whole new dimension of tourism emerges.

Imagine if you have a desire to scale Mount Everest, or travel across the Sahara on a camel. You probably don't have the time, money, or risk tolerance to go and do something this exciting, but you can go to Google Earth or WikiMapia, and click on the numerous videos and blogs by people who actually have done these things. Choose whichever content suits you, from whichever blogger does the best job.

See through the eyes of someone kayaking along the coast of British Columbia, walking the length of the Great Wall of China, or spending a summer in Paris as an artist. The possibilities are endless once blogs, video, and Google Earth/WikiMapia merge. Will it be the same as being there yourself? No. Will it open up possibilities to people who could never manage to be there themselves, or behave in certain capacities if there? Absolutely.

In 1999, maybe 50 million US households had dial-up Internet access at 56 kbps speeds. In 2006, there are 50 million Broadband subscribers, with 3-10 mbps speeds. This is roughly a 100X improvement in 7 years, causing a massive increase in the utility of the Internet over this period. The question is, can we get an additional 10X to 30X improvement in the next 4 years, to bring us the next generation of Internet functionality? Let's examine some new technological deployments in home Internet access.
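As a quick sanity check on the "roughly 100X" figure above, comparing per-user speeds (3-10 Mbps broadband vs. 56 kbps dial-up):

```python
dialup_kbps = 56
for broadband_kbps in (3_000, 10_000):      # 3 and 10 Mbps
    print(round(broadband_kbps / dialup_kbps))
# -> 54 and 179: roughly a 100X speed improvement per user
```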

Verizon's high-speed broadband service, known as FIOS, is currently available to about 3 million homes across the US, with downstream speeds of 5 Mbps available for $39.95/month and higher speeds available for greater prices. How many people subscribe to this service out of the 3 million who have the option is not publicly disclosed.

However, Verizon will be upgrading to a more advanced fiber-to-the-home standard that will increase downstream speeds by 4X and upstream speeds by 8X. Verizon predicts that this upgrade will permit it to offer broadband service at 50 or even 100 Mbps to homes on its FIOS network. Furthermore, the number of homes with access to FIOS service will rise from the current 3 million to 6 million by the end of 2006.

The reason this is significant is that it falls precisely within the concept of the Impact of Computing. The speed of the Internet service increases by 4X to 8X, while the number of homes with access to it increases by 2X, for an effective 8X to 16X increase in Impact, and the associated effects on society. High-definition video streaming, video blogging, video wikis, and advanced gaming will all emerge as rapidly adopted new applications as a result.

We often hear about how Japan and South Korea already have 100 Mbps broadband service while the US languishes at 3-10 Mbps with little apparent progress. True, but Africa has vast natural resources and Taiwan, Israel, and Switzerland do not. Which countries make better use of the advantages available to them? In the same way, South Korea and Japan may have a lot of avid online gamers, but they have not used their amazing high-speed infrastructure to create businesses like Google Adwords, Zillow, MySpace, Wikipedia, etc. in the last 2 years. The US has spawned these powerful consumer technologies even with low broadband speeds, due to an innovative and fertile entrepreneurial climate that exceeds even that of advanced nations like Japan and South Korea. Just imagine the innovations that will emerge with the greatly enhanced bandwidth that will soon be available to US innovators.

Give the top 80 million American households and small businesses access to 50 Mbps Internet connections for $40/month by 2010, and they will produce trillions of dollars of new wealth, guaranteed.

The 2006 edition of the Nanotech Report from Lux Research was published recently. This is something I make a point to read every year, even if only a brief summary is available for free.

Some of the key findings that are noteworthy :

1) Nanotechnology R&D reached $9.6 billion in 2005, up 10% from 2004. This is unremarkable when one considers that the world economy grew 7-8% in nominal terms in 2005, but upon closer examination of the subsets of R&D, corporate R&D and venture capital grew 18% in 2005 to hit $5 billion. This means that many technologies are finally graduating from basic research laboratories and are being turned into products, and that investment in nanotechnology is now possible. This also confirms my estimation that the inflection point of commercial nanotechnology was in 2005.

But a deeper concept worth internalizing is how an extension of the Impact of Computing will manifest itself. If the quality of nanotechnology per dollar increases at the same 58% annual rate as Moore's Law (a modest assumption), combining this qualitative improvement rate with a dollar growth of 64% a year yields an effective Impact of Nanotechnology of (1.58)×(1.64) = 2.59, or roughly 160% growth per year. As the base gets larger, this will become very visible.
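The combined-rate arithmetic can be sketched in a few lines (both input rates are assumptions stated above, not measured figures):

```python
quality_rate = 0.58   # assumed Moore's-Law-like improvement per dollar
spend_rate   = 0.64   # assumed annual growth in nanotech dollars

combined = (1 + quality_rate) * (1 + spend_rate)   # 1.58 * 1.64 = 2.59
print(round((combined - 1) * 100))                 # -> 159, roughly 160% a year
```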

3) Nanotech-enabled products on the market today command a price premium of 11% over traditional equivalents, even if the nanotechnology is not directly noticed.

Here is a follow-up to the two-part article, The Next Big Thing in Entertainment, where a prediction was made that the video game industry will give rise to something much larger, one that transforms many dimensions of entertainment entirely.

I feel one additional detail worth discussing is the performance of stocks that may do well from this phenomenon. A 5-year chart of four game development companies, Electronic Arts (ERTS), Activision (ATVI), Take-Two Interactive Software (TTWO), and THQ Inc. (THQI), plus retailer Gamestop (GME) provides an interesting picture.

All 5 companies appear to have greatly outperformed the S&P 500 over the last 5 years, despite this being a poor period for technology stocks. Past performance is no indication of future returns, and it is difficult to predict which competitors will prevail over others, but a basket of stocks in this sector will be very interesting to watch for the next 6 years.

Continuing from Part I, where a case is made that the successor to video games, virtual reality, will draw half of all time currently spent on television viewership by 2012.

The film industry, on the other hand, has far less of a captive audience than television, and thus evolved to be much closer to a meritocracy. Independent films with low budgets can occasionally do as well as major studio productions, and substantial entrepreneurship is conducted towards such goals.

This is also a business model that continually absorbs new technology, and even has a category of films generated entirely through computer animation. A business such as Pixar could not have existed in the early 1990s, but from Toy Story (1995) onwards, Pixar has produced seven consecutive hits, and continues to generate visible increases in graphical sophistication with each film. At the same time, the tools that were once accessible only to Pixar-sized budgets are now starting to become available to small indie filmmakers.

Even while the factors in Part I will draw viewers away from mediocre films, video game development software itself can be modified and dubbed to make short films. Off-the-shelf software is already being used for this purpose, in an artform known as machinima. While most machinima films today appear amateurish and choppy, in just a few short years the technology will enable the creation of Toy Story calibre indie films.

By democratizing filmmaking, machinima may effectively do to the film industry what blogs did to the mainstream media. In other words, a full-length feature film created by just 3 developers, at a cost of under $30,000, could be quickly distributed over the Internet and gain popularity in direct proportion to its merit. Essentially, almost anyone with the patience, skill, and creativity can aspire to become a filmmaker, with very little financing required at all. This too, just like the blogosphere before it, will become a viable form of entrepreneurship, and create a new category of self-accomplished celebrities.

At the same time, machinima will find a complementary role to play among the big filmmakers as well, just as blogs are used for a similar purpose by news organizations today. Peter Jackson or Steven Spielberg could use machinima technology to slash special-effects costs from millions to mere thousands of dollars. Furthermore, since top films have corresponding games developed alongside them, machinima fits nicely in between as an opportunity for the fan community to create 'open source' scenes or side stories of the film. This helps the promotion and branding of the original film, and thus would be encouraged by the producer and studio.

Thousands of people will partake in the creation of machinima films by 2010, and by 2012 one of these films will be in the top 10 of all films created that year, in terms of the number of Google search links it generates. These machinima films will have the same effect on the film industry that the blogosphere has had on the mainstream media.

There you have it, the two big changes that will fundamentally overturn entertainment as we know it, while making it substantially more fun and participatory, in just 6 short years.

Computer graphics and video games have improved in realism in direct accordance with Moore's Law. Check out the images of video game progression to absorb the magnitude of this trend. One can appreciate this further by merely comparing Pixar's Toy Story (1995) to their latest film, Cars (2006). But to merely project this one trend and predict that video games will have graphics that look as good as the real thing would be unimaginative. Instead, let's take it further and predict :

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply enjoyed that it will reduce the time spent on watching network television to half of what it is today, by 2012.

Impossible, you say? How can this massive change happen in just 6 years? First, think of it in terms of 'Virtual Reality' (VR), rather than 'games'. Then, consider that :

1) Flat hi-def television sets that can bring out the full beauty of advanced graphics will become much cheaper and thinner, so hundreds of millions of people will have wall-mounted sets of 50 inches or greater for under $1000 by 2012.

2) The handheld controllers that adults find inconvenient will be replaced by speech and motion recognition technology. The user experience will involve speaking to characters in the game, and sports simulations will involve playing baseball or tennis by physically moving one's hand. Eventually, entire bodysuits and goggles will be available for a fully immersive experience.

3) Creative talent is already migrating out of the television industry and into video games, as is evident from the increase in story quality in games and the decline in the quality of television programs. This trend will continue, and result in games available for every genre of film. Network television has already been reduced to depending on a large proportion of low-budget 'reality shows' to sustain its cost-burdened business model.

4) Adult-themed entertainment has driven the market demand and development of many technologies, like the television, VCR, DVD player, and Internet. Gaming has been a notable exception, because the graphics have not been realistic enough to attract this audience, except for a few unusual games. However, as realism increases through points 1) and 2), this vast new market opens up, which in turn pushes development. For the first time, there are entire conferences devoted to this application of VR technology. Gaming has yet to receive the catalyst that propelled these other technologies.

5) Older people are averse to games, as they did not have this form of entertainment when they were young. However, people born after 1970 have grown up with games, and thus still occasionally play them as adults. As the pre-game generation is replaced by those familiar with games, more VR tailored for older people will develop. While this demographic shift will not make a huge change by 2012, it is irreversibly pushing the market in this direction every year.

6) Online multiplayer role-playing games are highly addictive, but already involve people buying and selling game items for real money, to the tune of a $1.1 billion per year market. Highly skilled players already earn thousands of dollars per year this way, and with more participants joining through more advanced VR experiences described above, this will attract a sizable group of people who are able to earn a full-time living through these VR worlds. This will become a viable form of entrepreneurship, just like eBay and Google Ads support entrepreneurial ecosystems today.

There you have it, a convergence of multiple trends bringing a massive shift in how people spend their entertainment time by 2012, with television only watched for sports, documentaries, talk shows, and a few top programs.

The progress in gaming also affects the film industry, but in a very different way. The film industry will actually become greatly enhanced and democratized over the same period. For this, stay tuned for Part II tomorrow.

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months. But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years. To not internalize this more deeply is to miss investment opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 15% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let's say these industries have grown large enough that their growth rate slows down to an average of 12% a year for the next couple of decades.

So, we can crudely conclude that a dollar gets 59% more power each year, and 12% more dollars are absorbed by such exponentially growing technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78.

The Impact of Computing grows at a screaming rate of 78% a year.
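The arithmetic behind this figure can be sketched directly, using the 59% and 12% rates stated above:

```python
def combined_growth(perf_rate, spend_rate):
    """Effective annual growth factor from two compounding rates:
    perf_rate is the annual cost-performance improvement,
    spend_rate is the annual growth in dollars spent."""
    return (1 + perf_rate) * (1 + spend_rate)

impact = combined_growth(0.59, 0.12)   # Moore's-Law pace x industry growth
print(round(impact, 2))                # 1.78 -> an effective 78% a year
```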

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves. Consider the most popular television shows of the 1970s, such as The Brady Bunch or The Jeffersons, where the characters had virtually all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. It is obvious what has changed during this period, to induce the economic gains.

In the 1970s, there was virtually no household product with a semiconductor component. Even digital calculators were not affordable to the average household until very late in the decade.

In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, consisted of exponentially deflating semiconductors, so VCR prices did not drop that much per year.

In the early 1990s, many people began to have home PCs. For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power.

In the late 1990s, the PC was joined by the Internet connection and the DVD player, bringing the number of household devices on the Moore's Law-type curve to three.

Today, many homes also have a wireless router, a cellular phone, an iPod, a flat-panel TV, a digital camera, and a couple more PCs. In 2006, a typical home may have as many as 8 or 9 devices which are expected to have descendants that are twice as powerful for the same price, in just the next 12 to 24 months.

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0

1980s : 1-2

1990s : 3-4

2000s : 6-12

If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

Fifth Generation iPod, released October 2005, 60 GB capacity for $399, or 12X more capacity in four years, for the same price.

Total iPods sold in 2002 : 381,000

Total iPods sold in 2005 : 22,497,000, or 59 times more than 2002.

12X the capacity, yet 59X the units, so (12 x 59) = 708 times the impact in just three years. The rate of iPod sales growth will moderate, of course, but another product will simply take up the baton, and have a similar growth in impact.
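The multiplication above works out as follows, assuming the original 2001 iPod's capacity was 5 GB (12X less than the 60 GB fifth-generation model; the 5 GB figure is an assumption inferred from the 12X multiple stated above):

```python
capacity_x = round(60 / 5)                  # 12X capacity at the same price
units_x    = round(22_497_000 / 381_000)    # ~59X units sold (2005 vs. 2002)
print(capacity_x * units_x)                 # -> 708 times the impact
```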

Now, we have a trend to project into the near future. It is a safe prediction that by 2015, the average home will contain 25-30 such computationally advanced devices, including sophisticated safety and navigation systems in cars, multiple thin HDTVs greater than 60 inches diagonally, networked storage that can house over 1000 HD movies in a tiny volume, virtual-reality ready goggles and gloves for advanced gaming, microchips and sensors embedded into several articles of clothing, and a few robots to perform simple household chores.

Not only does Moore's Law ensure that these devices are over 100 times more advanced than their predecessors today, but there are many more of them in number. This is the true vision of the Impact of Computing, and the shocking, accelerating pace at which our world is being reshaped.

I will expand on this topic greatly in the near future. In the meantime, some food for thought :

As several streams of technological progress, such as semiconductors, storage, and Internet bandwidth continue to grow exponentially, doubling every 12 to 24 months, one subset of this exponential progress that offers a compelling visual narrative is the evolution of video games.

Video games evolve in graphical sophistication as a direct consequence of Moore's Law. A doubling in the number of graphical polygons per square inch every 18 months would translate to an improvement of 100X after 10 years, 10,000X after 20 years, and 1,000,000X after 30 years, both in resolution and in number of possible colors.
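A doubling every 18 months compounds as 2^(months/18); a quick sketch confirms the 10-, 20-, and 30-year multiples cited above:

```python
def improvement(years, doubling_months=18):
    """Cumulative multiple from repeated doublings every `doubling_months`."""
    return 2 ** (years * 12 / doubling_months)

for y in (10, 20, 30):
    print(y, round(improvement(y)))
# -> roughly 100X, 10,000X, and 1,000,000X respectively
```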

Sometimes, pictures are worth thousands of words :

1976, 1986, 1996, and 2006 : (screenshots of representative video games from each year, showing the progression in graphical fidelity)

Now, extrapolating this trajectory of exponential progress, what will games bring us in 2010? Or 2016?

I actually predict that video games will become so realistic and immersive that they will displace other forms of entertainment, such as television. Details on this to follow.

The Internet was born as early as 1969, but no later than 1983, depending on what you consider to be the event most analogous to a 'birth'. However, only a tiny fraction of the world's people were aware of the Internet even in the early 1990s. Then, by 1994-95, the graphical browser from Netscape seemingly emerged from nowhere, opening up a wonderland that appeared to have the sum total of human knowledge instantly available to anyone with a computer.

This 'World Wide Web' was predicted by almost no one in the late 1980s, and was absent from the vast majority of science fiction depicting the late 1990s onward, even in work written just five years before it happened (with the notable exception of Ray Kurzweil in his book "The Age of Intelligent Machines"). So many supposed 'great thinkers' missed it. How?

Because, while they could easily extrapolate exponential trends such as Moore's Law and the dropping cost of telephone calls/data transfer, almost no one thought about the bigger vision - combining the two.

1) By the late 1980s, personal computers were starting to make their way to the mass market. That much of the population would have bought their first PC by 1995 was an easy prediction.

2) Long-distance telephone rates were dropping through the full 20-year period from 1970 to 1990. That this would continue until costs were virtually zero was an easy prediction. Plus, people already had modems and were exchanging data between computers in the 1980s.

But combining the two, for the grand vision of hundreds of millions of PCs collectively accessing and contributing to the growing World Wide Web of information, was the missing layer of analysis that almost every great thinker missed.

Notice how the number of internet hosts was already growing exponentially in the early 1990s, but the apparent 'knee of the curve' occurred after 1996.

So, the next question becomes : How do we make additional predictions by noticing multiple, steady exponential trends, and knowing which ones will combine into something explosive, at what time?

That is, of course, the $64 trillion question. I will venture a few, however, in the coming weeks. Stay tuned...