There is an emerging paradox within the flow of technological diffusion : the rapid progress of technology has constrained its own ability to progress further.

What exactly does this mean? As we see in Chapter 3 of the ATOM, all technological products currently amount to about 2% of GDP. The speed of diffusion is ever faster (see chart), and the average household is taking on an ever-widening range of rapidly advancing products and services.

Recall the section of that chapter about the number of technologically deflating nodes in the average US household by decade (easily verified by watching any TV program from that decade), along with a poll asking readers to declare their own quantity of nodes. To revisit the exercise here :

Include : Actively used PCs, LED TVs and monitors, smartphones, tablets, game consoles, VR headsets, digital picture frames, LED light bulbs, home networking devices, laser printers, webcams, DVRs, Kindles, robotic toys, and every external storage device. Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Exclude : Old tube TVs, film cameras, individual software programs and video games, films on storage discs, any miscellaneous item valued at less than $5, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year.

The estimated results this poll would have yielded by decade, for the US, are :

1970s and earlier : 0

1980s : 1-2

1990s : 2-4

2000s : 5-10

2010s : 12-30

2020s : 50-100

2030s : Hundreds?
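As a rough back-of-envelope sketch, the midpoints of the ranges above imply a fairly steady compounding rate. The midpoint values in the snippet below are my own reading of the poll ranges, not figures from the chapter :

```python
import math

# Midpoints of the estimated node ranges above (my own assumptions, keyed by decade midpoint)
node_midpoints = {1985: 1.5, 1995: 3, 2005: 7.5, 2015: 21, 2025: 75}

years = sorted(node_midpoints)
first, last = years[0], years[-1]

# Compound annual growth rate implied across the four decades
cagr = (node_midpoints[last] / node_midpoints[first]) ** (1 / (last - first)) - 1
doubling_time = math.log(2) / math.log(1 + cagr)
print(f"Implied growth: {cagr:.1%}/year, doubling roughly every {doubling_time:.1f} years")
```

The point of the sketch is simply that the household node count has been compounding steadily for four decades, which is what makes the coming upgrade burden predictable.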

Herein lies the problem for the average household. The cost to upgrade PCs, smartphones, networking equipment, TVs, storage, and in some cases the entire car, has become substantial, often running over $2000/year, and unsurprisingly, upgrades have been slowing.

The technology industry is hence a victim of its own success. By releasing products that cause so much deflation, and hence low Nominal GDP growth and sluggish job growth, the technology industry has been constricting its own demand base. Amidst all the job loss through technological automation, the tech industry's own hiring is constrained if fewer people can keep buying its products. If the bottom 70-80% of US household income brackets can no longer keep up with technological upgrades, their ability to keep up with new economic opportunities will suffer as well.

This is why the monetization of technological progress into a dividend is crucial, which is where the ATOM Direct Universal Exponential Stipend (DUES) fits in. It is much more than a mere 'basic income', since it is directly indexed to the exact speed of technological progress. As of April 2017, the estimated DUES amount in the US is $500/month (up from $400/month back in February 2016, when the ATOM was first published). A good portion of this cushion enables faster technology upgrades and more new adoption.
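As a quick back-of-envelope check (my own arithmetic, using only the two figures quoted above), the rise from $400 to $500 over those fourteen months implies a brisk annualized growth rate :

```python
# Annualized growth implied by the two DUES estimates quoted above:
# $400/month in February 2016, $500/month in April 2017 (14 months apart)
months_elapsed = 14
annualized_growth = (500 / 400) ** (12 / months_elapsed) - 1
print(f"Implied annualized DUES growth: {annualized_growth:.1%}")
```

A dividend compounding at over 20%/year would, of course, overtake any static stipend within a few years, which is the entire point of indexing it to technological progress.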

When people think of FinTech, they think of a few things like peer-to-peer lending, payment companies, asset management firms, or maybe even cryptocurrencies. But one of the most outdated yet burdensome costs in all of finance, spread across the widest range of people, is still overlooked. The mortgage lending process is heavily padded with fees that are remnants of a bygone age.

First, we must begin with the effect of technology on short-term interest rates. The Fed Funds rate was close to zero for several years, and it is apparent that any brief increase in rates by the Federal Reserve will swiftly be reversed once markets punish the move in subsequent months. We are in an age of accelerating and exponential technological deflation, and not only will the Fed Funds rate have to be zero forever, but money-printing will be needed to offset deflation. This process has already been underway for years, and is not yet recognized as part of the long term trend of technological progress.

The 30-year fixed mortgage was the standard format for decades, with a variable-rate mortgage seen as the riskier choice once a borrower had locked in a low 30-year rate. But when the Fed Funds rate was at nearly zero, the LIBOR (London Interbank Offered Rate) hovered around 0.18% or so. A variable-rate mortgage is priced off of the LIBOR, with an additional premium levied by the lending institution, typically 1.5% or more. When the LIBOR was over 3% not too many years ago, the lender premium was only a third of the total mortgage rate, but now it is 85-90% of it. So instead of paying something near 0.18%, the borrower pays about 1.7%. This huge buffer represents one of the most attractive areas for FinTech to disrupt, as what was once a secondary cost is now the overwhelmingly dominant padding, itself a remnant of a bygone age.

When almost 90% of the interest charged in a mortgage merely represents the value that the lending institution provides, we can examine the components of this and see which of those could be replaced with a lower cost technological alternative. The lender, such as a major bank, provides a brand name, a mortgage officer to meet with face-to-face, and other such provisions. All of this is either unnecessary, or can be provided at much lower cost with the latest technologies. For example, blockchains can ensure the security aspects of the mortgage transaction are robust. Online consumer review services can provide an extra layer of reputational buttressing to any innovative new lending platform. The rationale for such a hefty mortgage markup over the underlying interest rate is just no longer there.

If the lender premium in a mortgage falls from 85-90% of the rate down to, say, 50%, then the rate on an adjustable-rate mortgage will decline to just twice the LIBOR, or about 0.4%. Even though the Federal Reserve has recently increased the Fed Funds rate, this is very temporary, and 0% will be the Fed Funds rate for the majority of the foreseeable future, just as it has been for the last 9 years.
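The arithmetic above can be sketched in a few lines. The 0.18% LIBOR and 1.5% premium figures are the ones cited earlier; the rest is simple composition :

```python
libor = 0.0018      # ~0.18% LIBOR during the near-zero Fed Funds years
premium = 0.015     # typical lender premium cited above

# What the borrower pays today, and how much of it is pure lender padding
current_rate = libor + premium
premium_share = premium / current_rate
print(f"Borrower's rate: {current_rate:.2%} ({premium_share:.0%} of it is lender premium)")

# If FinTech competition drives the premium down to 50% of the total rate,
# the rate falls to exactly twice the LIBOR
disrupted_rate = libor / (1 - 0.50)
print(f"Disrupted rate: {disrupted_rate:.2%}")
```

Note how the disrupted rate depends only on the underlying LIBOR, which is why a near-zero Fed Funds era makes this disruption so lucrative for borrowers.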

When this sort of ATOM-derived cost savings on interest payments percolates through the economy, it will cause a series of disruptions that will greatly reduce one of the last main consumer expenditures not yet being attacked by technology. Housing costs have risen above the inflation rate in many major cities, against the grain of technology. This is unnatural, since a home does not spontaneously renovate itself, get bigger, or otherwise increase in inherent value. On the contrary, the materials deteriorate over time, so the value should fall. Yet, home prices rise despite these structural forces, due to artificial decisions to restrict supply, lower bond yields through QE, etc. This artificial propping up of home prices masks the excessive costs in the industry, particularly in the mortgage-lending sector. As Fintech irons out the aforementioned outdated expenses in the mortgage-lending process, many fundamental assumptions about home ownership will change.

Home ownership is a very emotional concept for many buyers (which is why there is a widespread misconception that a person 'owns' their home even while they are making mortgage payments on it, when in reality, ownership is achieved only when the mortgage is fully paid off). This emotion obscures the high costs of obsolete products and procedures that continue to reside in the mortgage industry.

Amidst all the technological disruptions we have seen within the last generation, most people still don't understand that the central origin of most disruptions is an outdated, expensive incumbent system. But the FinTech wing of the ATOM has started the 'cracks in the dam' process against a very substantial and widely-levied cost, and this may be the disruption that brings FinTech's dividends to the masses.

The disruption in education is a topic I have written about at length. In essence, most education is just a transmission of commoditized information, that, like every other information technology, should be declining in cost. However, the corrupt education industry has managed to burrow deep into the emotions of its customers, to such an extent that a rising price for a product of stagnant (often declining) quality is not even questioned. For this reason, education is in a bubble that is already in the process of deflating.

What the MSCS at GATech accomplishes is four-fold :

Lowering the cost of the degree by almost an order of magnitude compared to the same degree at similarly-ranked schools

Making the degree available without relocation to where the institution is physically located

Scaling the degree to an eventual intake of 10,000 students, vs. just 300 that can attend a traditional in-residence program at GATech

Establishing best practices for other departments at GATech, and other institutions, to implement in order to create a broader array of MOOC degree programs

Eventually, the sheer size of enrollment will rapidly lead to GATech becoming a dominant alumni community within computer science, forcing other institutions to catch up. When this competition lowers costs even further, we will see one of the most highly paid and future-proof professions being accessible at little or no cost. When contrasted to the immense costs of attending medical or law school, many borderline students will pursue computer science ahead of professions with large student debt burdens, creating a self-reinforcing cycle of ever-more computer science and ATOM propagation. The fact that one can enroll in the program from overseas will attract many students from countries that do not even have schools of GATech's caliber (i.e. most countries), generating local talent despite remote education.

Crucially, this is strong evidence of how the ATOM always finds new ways to expand itself, since the field most essential to feeding the ATOM, computer science, is the one that found a way to greatly increase the number of people destined to work in it, by attacking both cost thresholds and enrollment volumes. This is not a coincidence, because the ATOM always finds a way around anything inhibiting its growth, in this case access to computer science training. Subsequent to this, the ATOM can increase the productivity of education even in less ATOM-crucial fields such as medicine, law, business, and K-12, since the greatly expanded computer science profession will provide the entrepreneurs and expertise to make this happen. This is how the ATOM captures an ever-growing share of the economy into rapidly-deflating technological fundamentals.

As always, the ATOM AotM succeeds through reader suggestions, so feel free to suggest candidates. Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.

With the new year, we are starting a new article series here at The Futurist. The theme will be a recognition of exceptional innovation. Candidates can be any industry, corporation, or individual that has created an innovation exemplifying the very best of technological disruption. The more ATOM principles exhibited in an innovation (rising living standards, deflation acting in proportion to prior inflation in the incumbent industry, rapid continuous technological improvement, etc.), the greater the chance of qualification.

The inaugural winner of the ATOM Award of the Month is the US hydraulic fracturing industry. While 'fracking' garnered the most news in 2011-13, the rapid technological improvements have continued. Natural gas continues to hover around just $3, making the US one of the most competitive countries in industries where natural gas is a large input. Oil prices continue to fall due to ever-improving efficiencies, and from the chart, we can see how many of the largest fields have seen breakevens fall from $80 to under $40 in just the brief 2013-16 period. This is of profound importance, because now even $50 is a profitable price for US shale oil. There is no indication that this trend of falling breakeven prices has stopped. Keep in mind that the massive shale formations in California are not even being accessed yet due to radical obstruction, but a breakeven of $30 or lower ensures that the pressure to extract this profit from the Monterey Shale continues to rise. Beyond that, Canada has not yet begun fracking of its own, and when it does, it will certainly have at least as much additional oil as the US found.

This increase, which is just an extra 3M barrels/day of US supply, was nonetheless enough to capsize this highly inelastic market and crash world oil prices from $100+ to about $50. Given the improving breakevens, and the possibility of new production, this will continue to pressure oil prices for the foreseeable future. This has led to the US turning the tables on OPEC, reversing a large trade deficit into what is now a surplus. If you had told any of those 'peak oil' Malthusians that the US would soon have a trade surplus with OPEC, they would have branded you as a lunatic. Note how that ill-informed Maoist-Malthusian cult has utterly vanished. Furthermore, this plunge in oil prices has strengthened the economies of countries that import most of their oil, from Japan to India.

Under ATOM principles, technology always finds a way to lower the cost of something that has become artificially expensive and is hence obstructing the advancement of other technologies. Oil was a premier example of this, as almost all technological innovation is done in countries that have to import large portions of their oil, while almost none is done by oil exporters. Excess wealth accumulation by oil exporters was an anti-technology impediment, and demanded the attention of a good portion of the ATOM. Remember that the worldwide ATOM is of an ever-rising size, and comprises the sum total of all technological products in production at a given time (currently, about 2% of world GDP). Hence, all technological disruptions are interconnected, and when the ATOM is freed up from the completion of a certain disruption, that amount of disruptive capacity becomes available to tackle something new. Given the size of this disruption to oil prices and production geography, this occupied a large portion of the ATOM for a few years, which means a lot of ATOM capacity is now free to act elsewhere.

This concludes our very first ATOM AotM to kick off the new year. I need candidate submissions from readers in order to get a good pool to select from. Criteria include the size and scope of the disruption, how anti-technology the disrupted incumbent was, and an obvious improvement in the quality of a large number of lives through this disruption.

I came across some recent charts about the growth of these two unrelated sectors, one disrupting manufacturing, the other disrupting software of all types (click to enlarge). On one hand, each chart commits the common error of portraying smooth parabolic growth, with no range of outcomes in the event of a recession (which will surely happen well within the 8-year timelines portrayed, most likely as soon as 2017). On the other hand, these charts provide reason to be excited about the speed of progress seen in these two highly disruptive technologies, which are core pillars of the ATOM.

This sort of growth rate across two quite unrelated sectors, while present in many prior disruptions, is often not noticed by most people, including those working in these particular fields. Remember, until recently, it took decades or even centuries to have disruptions of this scale, but now we see the same magnitude of transformation happen in mere years, and in many pockets of the economy. This supports the case that all technological disruptions are interconnected and the aggregate size of all disruptions can be calculated, which is a core tenet of the ATOM.

The recent FOMC meetings continue to feature a range of debate only around the rate at which the Fed Funds rate can be increased up to about 4% (a level that has not coincided with a robust economy since the late 1990s). They actually describe this as a 'normal' rate, and the process of raising it as 'normalization'. The 'Dot Plot' pictured here indicates the paradigm that the Federal Reserve still believes in. Even the most 'dovish' members still think that the Fed Funds rate will be above 2% by 2019.

This is dangerously inaccurate. At the start of 2016, the Federal Reserve expected to do four rate hikes that year alone. Now it is down to an expectation of just two (one more than the single hike done earlier in the year), and may simply halt at one. How can a collection of supposedly the best and wisest economic forecasters be so consistently wrong? A 20% stock market correction will lead to a swift rate reversal, and a 25%+ correction will lead to a resumption of QE in excess of $100B/month.

The -2% indicated by the Wu-Xia shadow rate might be as deep as -4% by 2025, under current trends of technological diffusion. The worldwide central bank easing required to halt deflation by that time will be several times higher than today. As per the ATOM policy reform recommendations, this can be an exceptionally favorable thing if the fundamentals are recognized.

In the ATOM e-book, we examine how technological disruption can be measured, and how the aggregate disruption ongoing in the world at any given time continues along a smooth, exponentially rising trendline. Among these, certain disruptions are invisible to most onlookers, because a tangential technology is simultaneously disrupting seemingly unrelated industries from an orthogonal direction. In that vein, here are two separate lists of industries that are being disrupted, one by Deep Learning and the other by Blockchain.

Note how many industries are present in both of the above lists, meaning that the sectors have to deal with compound disruptions from more than one direction.

In addition, we see that sectors where disruption was artificially thwarted due to excessive regulation and government protectionism merely see a sharper disruption, higher up in the edifice. When the disruption arrives through abstract technologies such as Deep Learning and Blockchain, the incumbents are unlikely to be able to thwart it, due to the source of the disruption being effectively invisible to the untrained eye. What is understood by very few is that the accelerating rate of adoption/diffusion, depicted in this chart here from Blackrock, is enabled by such orthogonal forces that are not tied to any one product category or even industry.

To begin, refer to the vintage 2006 article where I estimated telescope power to be rising at a compound rate of approximately 26%/year, although that is a trendline of a staircase with very large steps. This, coincidentally, is exactly the same rate at which computer graphics technology advances, which also happens to be the square root of Moore's Law's rate of progress. According to this timeline, a wave of powerful telescopes arriving now happens to be right on schedule. Secondly, refer to one of the very best articles on The Futurist, titled 'SETI and the Singularity', where the impact of increasing telescopic power is examined. The exponential increase in the detection of exoplanets (chart from Wikipedia), and the implications for the Drake Equation, are measured, with a major prediction about extraterrestrial life contained therein.
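The relationship between these rates can be verified in a few lines, assuming the common formulation of Moore's Law as a doubling every 18 months (the exact doubling period is a modeling assumption here) :

```python
import math

# Moore's Law as a doubling every 1.5 years, expressed as an annual growth rate
moore_rate = 2 ** (1 / 1.5) - 1

# The square root of that growth rate, the pace claimed for telescopes and graphics
sqrt_rate = math.sqrt(1 + moore_rate) - 1
print(f"Moore's Law: {moore_rate:.0%}/year; square root: {sqrt_rate:.0%}/year")

# At ~26%/year, telescope power compounds to roughly an order of magnitude per decade
per_decade = (1 + sqrt_rate) ** 10
print(f"Telescope power gain per decade: ~{per_decade:.1f}x")
```

A ~10x gain per decade is why each generation of telescopes makes the previous one's discoveries look sparse, even though the year-over-year change seems modest.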

The best news of the last month was something that most people entirely missed. Amidst all the distractions and noise that comprise modern media, a quiet press release disclosed that a supercomputer has suddenly become more effective than human doctors at diagnosing certain types of ailments.

This is exceptionally important. As previously detailed in Chapter 3 of The ATOM, not only was a machine more competent than an entire group of physicians, but the machine continues to improve as more patients use it, which in turn makes it more attractive to use, which enables the accrual of even more data upon which to improve further.

But most importantly, a supercomputer like Watson can treat patients in hundreds of locations in the same day via a network connection, and without appointments that have to be made weeks in advance. Hence, such a machine replaces not one, but hundreds of doctors. Furthermore, it takes very little time to produce more Watsons, but it takes 30+ years to produce a doctor from birth, among the small fraction of humans with the intellectual ability to even become a physician. The economies of scale relative to the present doctor-patient model are simply astonishing, and there is no reason that 60-80% of diagnostic work done by physicians cannot soon be replaced by artificial intelligence. This does not mean that physicians will start facing mass unemployment, but rather that the best among them will be able to focus on more challenging problems. The most business-minded of physicians can incorporate AI into their practice to see a greater volume of patients for more complicated ailments.

This is yet another manifestation of various ATOM principles, from technologies endlessly crushing the cost of anything overpriced, to self-reinforcing improvement of deep learning.

In ATOM terms, the progress of Tesla is an example of everything from how all technological disruptions are interlinked, to how each disruption is deflationary in nature. It is not just about the early progress towards electric cars, removal of the dealership layer of distribution, or the recent erratic progress of semi-autonomous driving. Among other things, Tesla has introduced lower-key but huge innovations such as remote wireless software upgrades of the customer fleet, which itself is a paradigm shift towards rapidly-iterating product improvement. In true ATOM form, the accelerating rate of technological change is beginning to sweep the automobile along with it.

When Tesla eventually manages to release a sub-$35,000 vehicle, the precedents set in dealership displacement, continual wireless upgrades, and semi-autonomous driving will suddenly all be available across hundreds of thousands of cars, surprising unprepared observers but proceeding precisely along the expected ATOM trajectory.

However, there may be more nuances to this concept than previously addressed. It may be that since GDP is a human construct, it only happens to be correlated to the accelerating rate of change by virtue of humans being at the forefront of advancing intelligence. It could be that once artificial intelligence can advance without human assistance, most types of technology that improve human living standards may stagnate, since the grand goal of propagating AI into space is no longer bottlenecked by human progress. Humans are certainly not the final state of evolution, as evidenced by the much greater suitability of AI for space exploration (AI does not require air or water, etc.).

That is certainly something to think about. Human progress may only be on an accelerating curve until a handoff to AI is completed. After that, metrics quite different than GDP may be the best to measure progress, as the AI perhaps only cares about computational density, TERAFLOPs, etc.

The polygon count in any graphical engine increases at the square root of Moore's Law's rate, so the number of polygons doubles approximately every three years.

Sometimes, pictures are worth thousands of words :

1976 :

1986 :

1996 :

2006 :

I distinctly remember when the 2006 image looked particularly impressive. But now, it no longer does. This inevitably brings us to...

2016 (an entire video is available, with some gameplay footage) :

This series illustrates how progress, while not visible over one or two years, accumulates to much more over longer periods of time.

Now, extrapolating this trajectory of exponential progress, what will games bring us in 2020? Or 2026? Additionally, note that screen sizes, screen resolution, and immersion (e.g. VR goggles) have risen simultaneously.

I refer readers back to an article written here in 2011, titled 'The End of Petrotyranny', where I claimed that high oil prices were rapidly burning through the buffer that was shielding oil from technological disruption. I quantified the buffer in an equation, and even provided a point value to how much of the buffer was still remaining at the time.

I am happy to declare a precise victory for this prediction, with oil prices having fallen by two-thirds and remaining there for well over a year. While hydraulic fracturing (fracking) turned out to be the primary technology to bring down the OPEC fortress, other technologies such as photovoltaics, batteries, and nanomaterials contributed secondary pressure to the disruption. The disruption unfolded in accordance with the 2011 Law of Finite Petrotyranny :

From the start of 2011, measure the dollar-years of area enclosed by a chart of the price of oil above $70. There are only 200 such dollar-years remaining for the current world petro-order. We can call this the 'Law of Finite Petrotyranny'.

Go to the original article to see various scenarios of how the dollar-years could have been depleted. While we have not used up the full 200 dollar-years to date, the range of scenarios is now much tighter, particularly since fracking in the US continues to lower its breakeven threshold. At present, over $2T/year that was flowing from oil importers to oil producers has now vanished, to the immense benefit of oil importers, which are the nations that conduct virtually all technological innovation.
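The dollar-years accounting itself is straightforward to sketch. The yearly average prices below are illustrative placeholders chosen to show the mechanics, not actual market data :

```python
THRESHOLD = 70   # the $70 floor defined in the Law of Finite Petrotyranny
BUDGET = 200     # total dollar-years allotted from the start of 2011

# Illustrative yearly average oil prices (placeholders, not real data)
avg_price = {2011: 95, 2012: 94, 2013: 98, 2014: 93, 2015: 49, 2016: 43}

# Each year contributes its average price's excess above $70 (zero if below)
consumed = sum(max(p - THRESHOLD, 0) for p in avg_price.values())
print(f"Dollar-years consumed: {consumed}; remaining: {BUDGET - consumed}")
```

Years below $70 contribute nothing, which is why the post-2014 crash effectively froze the depletion clock while the remaining buffer continues to shrink in relative terms.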

The 2011 article was not the first time this subject of technological pressure rising in proportion to the degree of oil price excess has been addressed here at The Futurist. There were prior articles in 2007, as well as 2006 (twice).

As production feverishly scales back, and some of the less central petrostates implode, oil prices will gradually rise back up, generally saturating at the $70 level (itself predicted in 2006) in order to deplete the remaining dollar-years. But we may never again see oil at such a high price relative to world GDP, as existed from most of 2007-14 (oil would have to be $200+/barrel today to surpass the record of $147 set in 2008, in proportion to World GDP).

The rate of technological change has been considerably slower than its trendline ever since the start of the 21st century. I wrote about this back in 2008, but at the time, I did not have quite as advanced techniques of observing and measuring the gap between the rate of change and the trendline, as I do now.

The dot-com bust coincided with a trend toward lower nominal GDP growth (since everyone wrongly focuses on 'real' GDP, which has less to do with real-world decisions than nominal GDP), and this has led to technological change, despite sporadic bursts, generally progressing at what is currently only 60-70% of its trendline rate. For this reason, many technologies that seemed just 10 years away in 2000 have still not arrived as of 2014. I will write much more on this at a later date.

But for now, two overdue technologies are finally plodding towards where many observers thought they would have been by 2010. Nonetheless, they are highly disruptive, and will do a great deal to change many industries and societies.

What is interesting about AI is how it can greatly expand the capabilities of those who know how to incorporate it with their own intelligence. The greatest chess grandmaster of all time, Magnus Carlsen, became so by training with AI, and it is unclear whether he would have become this great had he lived before such technologies were available.

The recursive learning aspect of AI means that an AI can quickly learn from each new person who uses it, which makes it better still. One very obvious area where this could be applied is medicine. Currently, millions of MD general practitioners and pediatricians are seen by billions of patients, mostly for relatively common diagnostics and treatments. If a single AI can learn from enough patient inputs to replicate the most common diagnostic capabilities of doctors, that is a huge cost savings for patients and the entire healthcare system. Some doctors will see their employment prospects shrink, but the majority will be free to move up the chain and focus on more serious medical problems and questions.

Another obvious use is in the legal system. On one hand, while medicine is universal, the legal system of each country is different, and lawyers cannot cross borders. On the other hand, the US legal system relies heavily on precedent, and there is too much content for any one lawyer or judge to manage, even with legal databases. An AI can digest all laws and precedents and create a huge increase in efficiency once it learns enough. This can greatly reduce the backlog of cases in the court system, and free up judicial capacity for the most serious cases.

The third obvious application is self-driving cars. Driving is an activity where the full range of possible traffic situations that can arise constitutes a relatively bounded amount of data. Once an AI reaches the point where it has analyzed every possible accident, near-accident, and reported pothole, it can easily make self-driving cars far safer than human driving. This is already being worked on at Google, and is only a few years away.

Get ready for AI in all its forms. While many jobs will be eliminated, this will be exceeded by the opportunity to add AI to your own life and your own capabilities. An effective IQ 40 points higher when you need it most, and a memory thrice as deep : all will be possible in the 2020s for those who learn to use these capabilities. In fact, the ability to augment your own marketable skills through AI might become one of the most valuable skillsets for the post-2025 workforce.

Everyone knows that the Oculus Rift headset will be released to consumers in 2015, and that most who have tried it have had their expectations exceeded. It supposedly corrects many of the problems of previous VR/AR technologies that have dogged developers for two decades, and it has a high resolution.

But entertainment is not the only use for a VR/AR headset like the Oculus Rift, for the immersive medium that the device facilitates has tremendous potential in education, military training, and all types of product marketing. Entirely new processes and business models will emerge.

One word of caution, however. My decade of direct experience running a large division of a consumer technology company compels me to advise you not to purchase any consumer technology product until it is in its third generation of consumer release, which is usually 24-48 months after the initial release. The reliability and value for money are usually not compelling until the third generation. Do not mistake fractional generations (i.e. 'version 1.1', or 'iPhone 5, 5S, and 5C') for actual generations. The Oculus Rift may be an exception to this norm (as are many Apple products), but in general, don't be an early adopter on the consumer side.

Imagine, if you would, that the immersive movies and video games of the near future are not just fully actualized within the VR of the Oculus Rift, but that the characters of a video game adapt via connection to an AI, so that game characters emerge that are far too intelligent to be overcome by hacks and cheat codes.

Similarly, imagine if various forms of training and education are not just improved via VR, but augmented via AI, where the program learns exactly where the student is having a problem and adapts the method accordingly, based on similar difficulties from prior students. Suffice it to say, both VR and AI will transform medicine from its very foundations. Some doctors will be able to greatly expand their practices, while others find themselves relegated to obsolescence.

Two overdue technologies are finally on our doorstep. Make the most of them, because if you don't, someone else surely will.

The Natural Progression of Educational Efficiency : The great Emperor Charlemagne lived in a time when even most monarchs (let alone peasants) were illiterate. Charlemagne had a great interest in attaining literacy for himself and fostering it in others. But the methods of education in the early 9th century were primitive, and books were handwritten and hence scarce. Despite all of his efforts, Charlemagne only managed to learn to read after the age of 50, and never quite learned how to write. This indicates how hard it was to attain modern standards of basic literacy at the time.

Over time, as the invention of the printing press enabled the mass production of books, literacy became less exclusive over the subsequent centuries, and methods that could teach the vast majority of six-year-old children how to read became commonplace, delivered en masse via institutions that came to be known as 'schools'. Since most of us grew up within a mass-delivered classroom model with minimal customization, we consider this method of delivery to be normal, and almost every parent can safely assume that if their child has an IQ above 80 or so, the child will be able to read competently at the right age.

But consider what the Internet age has made available for those who care to take it. I can say with great certainty that the most valuable things I have learned have all been derived from the Internet, free of cost. Whether it was the knowledge that led to new income streams, new social capital, or any other useful skills, it was available over the Internet, and that too in just the last decade. Almost every challenge in life has an answer that can be found online. This brings up the question of whether formal schooling, and the immense price tag associated with it, is still the primary source from which a person can attain the most marketable skills.

Why Education Became an Industry Prone to Attracting Inefficiency : To begin, we first have to address some of the adverse conditioning that most people receive, about what education is, what it should cost, and where it can be obtained. Through centuries of marketing that preys on human insecurity at being left behind, and the tendency to conflate correlation with causation, an immense bubble has inflated over a multi-decade period, and is at its very peak.

Education, which in the bottom 99.9% of classroom settings is really just the transmission of highly commoditized information, has usually correlated with greater economic prospects, especially since, until recently, very few people were likely to pass the threshold beyond which further education would no longer have a tight correlation to greater earnings. This is why many parents are willing to spare no expense on the education of their children, even to the extent of having fewer children than they might otherwise have had, when estimating the cost of educating them. Exploiting the emotions of parents, the education industry manages to charge ever more money for a product that is often declining in quality, with surprisingly little questioning from their customers. We are so accustomed to this unrelenting rise in costs at all levels of education that few people realize how highly perverse it is.

Glenn Reynolds of Instapundit, with his books 'The Higher Education Bubble' and 'The K-12 Implosion', has been the earliest and most vocal observer of a bubble in the education industry. The vast corruption and sexual misconduct by faculty in K-12 public schools is described in the latter of those two books, but here we will focus mostly on higher education.

Among the dynamics he has described are how government subsidization of universities directly as well as of student loans enables universities to increase fees at a rate that greatly outstrips inflation, which in turn allows universities to hire legions of non-academic staff, many of whom exist only to politicize the university experience and further the goals of politicians and government bureaucrats.

As a result, university degrees have gotten more expensive, while the salaries commanded by graduates have remained flat or even fallen. The financial return of many university degrees no longer justifies their cost, and this is true not just of Bachelor's Degrees, but even of many MBA and JD degrees from any school ranked outside the Top 10 or even Top 5.

Graduates often have as much as $200,000 in debt, yet have difficulty finding jobs that pay more than $50,000 a year. Student loan debt has tripled in a decade, even while many universities now see no problem in departing from their primary mission of education, and have drifted into a priority of ideological brainwashing. Combine all these factors, and you have a generation of young people who may have student debt larger than the mortgage on a median American house (meaning they will not be the first-time home purchasers that the housing market depends on to survive), while having their head filled with indoctrination that carries zero or even negative value in the private sector workforce.

When you combine this erosion of value with the fact that it now takes just minutes to research a topic, from home and at any hour, that previously would have involved half a day at the public library, why should the same sort of efficiency gain not be true for more formal types of education that are actually becoming scarcer within universities?

Primed For Creative Destruction : Employers want skills, rather than credentials. There may have been a time when a credential correlated tightly with a skillset that an employer sought in a new hire, but that correlation has weakened over time, given the dynamic nature of most jobs and the dilution of rigor that most degrees have undergone. Furthermore, technology makes many skillsets obsolete, while creating openings for new ones. With the exception of those with highly specialized advanced degrees, very few people over the age of 30 today can say that the demands of their current job have much relevance to what they learned in college, or even to what computing, productivity, and research tools they may have used in college. Moreover, anyone who has worked at a corporation for a decade or more is almost certainly doing a very different job than the one they were doing when they were first hired.

Hence, the superstar of the modern age is not the person with the best degree, but rather the person who acquires the most new skills with the greatest alacrity, and the person with the most adaptable skillset. A traditional degree has an ever-shortening half-life of relevance as a person's career progresses, and even fields like Medicine and Law, where one cannot practice without the requisite degree, will not be exempt from this loosening correlation between pedigree and long-term career performance. Agility and adaptability will supersede all other skillsets in the workforce.

Google, always leading the way, no longer mandates college degrees as a requirement, and has recently disclosed that about 14% of its employees do not have them. If a few other technology companies follow suit, then the workforce will soon have a pool of people working at very desirable employers, who managed to attain their position without the time and expense of college. If employers in less dynamic sectors still have resistance to this concept, they will find it harder to ignore the growing number of resumes from people who happen to be alumni of Google, despite not having the required degree. As change happens on the margins, it will only take a small percentage of the workforce to be hired by prestigious employers.

The Disruption Begins at the Top : Since this disruption is technological and almost entirely about software, perhaps the disruption has to originate where the people most directly responsible for the disruption exist. The program that has the potential to slash the costs of entry into a major career category is an online Master of Science in Computer Science (MSCS) degree through a collaboration between the Georgia Institute of Technology, Udacity, and AT&T. For an estimated cost of just $6700, this program can enroll 10,000 geographically dispersed students at once (as opposed to the mere 300 MSCS degrees per year that Georgia Tech was awarding previously). This is a tremendous revolution in terms of both cost and capacity. A degree that can make a graduate eligible for high-paying jobs in a fast-growing field, is now accessible to anyone with the ability to succeed in the program. The implications of this are immense.

For one thing, this profession, which happens to be one with possibly the fastest-growing demand, has itself found a way to greatly increase the influx of new contributors to the field. By removing the barriers of both cost and geography, the program competes not just with brick-and-mortar MSCS programs, but with other degrees as well. Students who may have otherwise not considered Computer Science as a career at all may now choose it simply due to the vastly lower cost of preparation relative to similarly high-paying careers like other forms of engineering, law, or medicine. Career changers can jump the chasm at lower risk than before, for the same reasons.

As fields similarly suited to remote learning (say, systems engineering, mathematics, or certain types of electrical engineering) see MOOC degree programs created for them, more avenues open up. Fields whose education can be more easily adapted to this model will have an inherent advantage over fields that cannot be learned this way, in terms of attracting talent. These fields in turn grow in size, becoming a larger portion of the economy, and creating even more demand for new entrants above a certain competence threshold.

Multi-Faceted Disruption : As The Economist has noted, MOOCs have not yet unleashed a 'gale of Schumpeterian creative destruction' onto universities. But this is still a conflation of the degree and the knowledge, particularly when the demands of the economy may shift many times during a person's career. Udacity, Coursera, MITx, Khan Academy, and Udemy are just a few of the entities enabling low-cost education at all levels. Some are for-profit, some are non-profit. Some address higher education, and some address K-12 education. Some count as credit towards degrees, and some are not intended for degree-granting, but rather for remedial learning. But among all these websites, an innovative pupil can learn a variety of seemingly unrelated subjects and craft an interlocking, holistic education that is specific to his or her goals.

When the sizes and shapes of education available online have so much variety, many assumptions about who has what skills will be challenged. There will be too many counterexamples against the belief that a certain degree qualifies a person for a certain job. Furthermore, the standardization of resumes and qualifications that the paradigm of degrees creates has gone largely unchallenged. People who are qualified in two or more fields will be able to cast a wider net in their careers, and entrepreneurs seeking to enter a new market can get up to speed swiftly.

Scale to the Topmost Educators : There was a time when music and video could not be recorded. Hundreds of orchestras across a nation might be playing the same song, or the same play might be performed by hundreds of thespians at the same time. Recording technologies enabled the most marketable musicians and actors to reach millions of customers at once, while eliminating the livelihoods of the bottom 99% of workers in these professions. Consumers and the best producers benefited, while the lesser producers could no longer justify their presence in the marketplace and had to adapt.

The same will happen to teachers. It is not efficient for the same 6th-grade math or 8th-grade biology lesson to be taught by hundreds of thousands of teachers across the English-speaking world each year. Instead, technology will enable scale and efficiency. The best few lectures will be seen by all students, and it is quite possible that the best teacher, as determined by market demand, earns far more than one currently thinks a teacher can earn. The rise of the 'celebrity teacher' is entirely possible, when one considers the disintermediation and concentration that has already happened in music and theatrical production. This sort of competition will increase the quality of instruction that students receive, and ensure that remuneration is more closely tied to teacher caliber.

Conclusion : It is not often that we see something experience a dramatic worsening in cost/benefit ratio while competitive alternatives simultaneously become available at far lower costs than just a few years prior. When a status quo has existed for the entire adult lifetime of almost every American alive today, people fail to contemplate the peculiarity of spending as much as the cost of a house on a product of highly variable quality, very uncertain payoff, and very little independent auditing. The degree of outdatedness in the assumption that paying a huge price for a certain credential will lead to a certain career with a certain level of earnings means the edifice will topple far more quickly than many people are prepared for.

2015 is a year that will see the key components of this transformation fall into place. Some people will be able to enter the same career while spending $50,000 less on the requisite education than they may have expected. Many colleges will shrink their enrollments or close their doors altogether. The light of accountability will be shone on the vast corruption and ideological extremism present in some of the most expensive institutions (Moody's has already downgraded the outlook of the entire US higher education industry). But most importantly, the most valuable knowledge will become increasingly self-taught from content available to all, and the entire economy will begin the process of adjusting to this new reality.

As oil prices remain high, we once again see murmurs of anticipated doom from various quarters. Such fears are grossly misplaced, as I have described in my 2007-08 articles about how oil at $120/barrel creates desirable chain reactions, as well as my rebuttal to the poorly considered beliefs of peak oil alarmists, who seem capable of being sold not one, but two bridges in Brooklyn. Today, however, I am going to combine the concepts in both of those articles with some new analysis to enable us to predict when oil will lose the economic power it currently holds. You are about to see that not only are peak oil alarmists wrong, but they are just about as wrong as those predicting in 1988 that the Soviet Union would soon dominate the world, and will soon be equally worthy of ridicule.

Unenlightened Punditry and Fashionable Posturing :

As I mentioned in a previous article, many observers incessantly contradict themselves on whether they want oil to be inexpensive, or whether they want higher oil prices to spur technological innovations. One of the most visible such pundits is Thomas Friedman, who has many interesting articles on the subject, such as his 2007 piece titled 'Fill 'Er Up With Dictators' :

But as oil has moved to $60 to $70 a barrel, it has fostered a counterwave — a wave of authoritarian leaders who are not only able to ensconce themselves in power because of huge oil profits but also to use their oil wealth to poison the global system — to get it to look the other way at genocide, or ignore an Iranian leader who says from one side of his mouth that the Holocaust is a myth and from the other that Iran would never dream of developing nuclear weapons, or to indulge a buffoon like Chávez, who uses Venezuela’s oil riches to try to sway democratic elections in Latin America and promote an economic populism that will eventually lead his country into a ditch.

But Mr. Friedman is a bit self-contradictory on which outcome he wants, as evidenced across his New York Times columns.

So here’s my prediction: You tell me the price of oil, and I’ll tell you what kind of Russia you’ll have. If the price stays at $60 a barrel, it’s going to be more like Venezuela, because its leaders will have plenty of money to indulge their worst instincts, with too few checks and balances. If the price falls to $30, it will be more like Norway. If the price falls to $15 a barrel, it could become more like America

Either tax gasoline by another 50 cents to $1 a gallon at the pump, or set a $50 floor price per barrel of oil sold in America. Once energy entrepreneurs know they will never again be undercut by cheap oil, you’ll see an explosion of innovation in alternatives.

And by not setting a hard floor price for oil to promote alternative energy, we are only helping to subsidize bad governance by Arab leaders toward their people and bad behavior by Americans toward the climate.

All of these articles were written within a 4-month period in early 2007. Both philosophies are true by themselves, but they are mutually exclusive. Mr. Friedman, what do you want? Higher oil prices or lower oil prices? Such confusion indicates how the debate about energy costs and technology is often high on rhetoric and low on analysis.

Much worse, however, is the fashionable scaremongering that the financial media uses to fill its schedule, amplified by a general public that gets suckered into groupthink. To separate the whining from the reality, I apply the following simple test to verify whether people are actually being pinched by high oil prices or not. If a large portion of average Americans have made arrangements to carpool to work (as was common in the 1970s), then oil prices are high. Absent the willingness to make this adjustment, their whining about gasoline is not a reflection of actual hardship. This enables us to declare that oil prices are not approaching crisis levels until most 10-mile-plus commuters are carpooling, that too in groups of three rather than just two. The coordination of carpools is thus the minimum test of whether oil prices are actually causing any significant changes in behavior.

Fortunately, $100 oil, a price that was considered a harbinger of doom as recently as 2007, is now not even enough to induce carpooling in 2011. This quiet development is remarkably unnoticed, and conceals the substantial economic progress that has occurred.

Economic Adaptations :

The following chart from Calculated Risk (click to enlarge) shows the US trade deficit split between oil and non-oil imports. This chart is not indexed as a percentage of GDP, but if it were, we would see that oil imports at $100/barrel today are not a much higher percentage of GDP than in 1998, when oil was just $20/barrel. In fact, the US produces much more economic output per barrel of oil than it did in 1998. We can thus see that unlike in 1974, when the US economy had much less demand elasticity for oil, today the economy can adjust its oil consumption quickly in reaction to higher prices, making the bar for an 'oil shock' much harder to clear. US oil imports will never again attain the same percentage of GDP as was briefly seen in 2008.

Of even more importance is the amazingly consistent per capita consumption of oil since 1982, which has remained at 4.6 barrels/person despite a tripling of real GDP per capita over the same period (chart by Morgan Downey). This immediately deflates the claim that the looming economic growth of China and India will greatly increase oil consumption, since the massive growth from 1982 to 2011 did not manage to do so. At this point, annual oil consumption, currently around 32 billion barrels, rises only at the rate of population growth - about 1% a year.
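As a rough sanity check on these figures, the per-capita and aggregate numbers line up (a minimal sketch; the world population figure of roughly 7 billion for 2011 is my assumption, not stated in the text):

```python
# Sanity check: per-capita oil consumption times population should
# approximate the aggregate annual consumption cited in the text.
barrels_per_person = 4.6     # steady since 1982, per the chart
world_population = 7.0e9     # assumed approximate 2011 world population
total_barrels = barrels_per_person * world_population
print(f"Implied annual consumption: {total_barrels / 1e9:.1f} billion barrels")
# close to the ~32 billion barrels cited above
```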

This leads me to make a declaration. 32 billion barrels at around $100/barrel is $3.2 Trillion in annual consumption. This is currently less than 5% of nominal world GDP. I hereby declare that :

Oil consumption worldwide will never exceed $4 Trillion/year, no matter how much inflation, political turmoil, or economic growth there is. Thus, 'Peak Oil Consumption' happens long before 'Peak Oil Supply' ever could.

This would mean that oil would gradually shrink as a percentage of world GDP, just as it has shrunk as a percentage of US GDP since 1982. Even when world GDP is $150 Trillion, oil consumption will still be under $4 Trillion a year, and thus a very small percentage of the economy. Mark my words, and proceed further to read about how I can predict this with confidence.
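The arithmetic behind the declaration can be sketched in a few lines (an illustration only; the ~$65 Trillion figure for 2011 world GDP is an assumption implied by the text's 'less than 5%' remark, and the $150 Trillion future GDP is the scenario named above):

```python
# Oil spending as a share of world GDP: today's figures from the text,
# plus the declared $4T ceiling measured against a larger future GDP.
current_spend, current_gdp = 3.2, 65   # trillions of dollars (2011)
spend_cap, future_gdp = 4.0, 150       # declared ceiling vs. scenario GDP
print(f"Today: {current_spend / current_gdp:.1%} of world GDP")
print(f"Capped future: {spend_cap / future_gdp:.1%} of world GDP")
# Even at the $4T ceiling, oil's share shrinks as world GDP grows.
```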

The Carnival of Creative Destruction :

There are at least seven technologies that are advancing to reduce oil demand by varying degrees, many of which have been written about separately here at The Futurist :

1) Natural Gas : Technologies that aid the discovery of natural gas have advanced at great speed, and supplies have skyrocketed to a level that exceeds anything humanity could consume in the next few decades. The US alone has enough natural gas to more than offset all oil consumption, and the price of natural gas is currently on par with $50 oil.

3) Cellulose Ethanol and Algae Oil : Corn ethanol was never going to be viable in cost or scale, but the infrastructure established by the corn ethanol industry makes the transition to more sophisticated forms of ethanol production easier. Fuels from switchgrass and algae are much more cost-effective, and will be ramping up in 2012. Solazyme is an algae oil company that went public recently, and already has a market capitalization of $1.5 Billion.

5) Telepresence : Telepresence, while expensive today, will drop in price under the Impact of Computing and displace a substantial portion of business air travel, as described in detail here. By 2015, geographically dispersed colleagues will seem to be closer to each other, despite meeting in person less often than they did in 2008.

6) Wind Power : Wind power already generates almost 3% of global electricity consumption, and is growing quickly. When combined with battery advances that improve the range and power of electric and plug-in hybrid vehicles, we get two simultaneous disruptions - oil being displaced not just by electricity, but by wind electricity.

Plus, these are just the technologies that displace oil demand. There are also technologies that increase oil supply, such as supercomputing-assisted oil discovery and new drilling techniques. Supply-increasing technologies work to reduce oil prices, and while they may slow the displacement of oil demand, they too work to weaken petrotyranny.

The problem in any discussion of these technologies is that the debate centers around an 'all or none' simplicity of whether the alternative can replace all oil demand, or none at all. That is an unnuanced exchange that fails to comprehend that each technology only has to replace 10% of oil demand. Natural gas can replace 10%, ethanol another 10%, efficiency gains another 10%, wind + solar another 10%, and so on. Thus, if oil consumption as a percentage of world GDP is lower in a decade than it is today, that itself is a huge victory. It hardly matters which technology advances faster than the others (in 2007, natural gas did not appear as though it would take the lead that it enjoys today), what matters is that all are advancing, and that many of these technologies are highly complementary to each other.

What is also overlooked is how quickly the pressure to shift to alternatives grows as oil becomes more expensive. If, say, cellulose ethanol is cost-effective with oil at $70, then oil at $80 creates a modest $10 differential in favor of cellulose. If oil is $120, that differential is now $50, or five times larger. Such a delta spurs much greater investment and urgency to ramp up research and production in cellulose ethanol. Thus, each increment in the oil price creates a much larger zone of profitability for any alternative.
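The widening of this profitability zone can be illustrated directly (a tiny sketch using the $70 breakeven assumed above):

```python
# How the alternative's cost advantage widens as oil rises past
# a hypothetical $70/barrel breakeven (figures from the text).
breakeven = 70
for oil_price in (80, 120):
    differential = oil_price - breakeven
    print(f"Oil at ${oil_price}: ${differential}/barrel advantage for the alternative")
# A 50% rise in the oil price ($80 -> $120) quintuples the differential.
```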

The Cost of Petrotyranny :

This map of nations scaled in proportion to their petroleum reserves (click to enlarge) replaces thousands of words. Some contend that the easy money derived from exporting oil leads to inevitable corruption and the financing of evil well beyond the borders of petro-states, while others lament the misfortune that this major energy source is concentrated in a very small area containing under 2% of the world's population. Other sources of energy, such as natural gas, are much more evenly distributed across the planet, and this supply chain disadvantage is starting to work against oil.

However, as we saw in the 2008 article, many of these regimes are dancing on a very narrow beam, only as wide as the span between $70 and $120/barrel oil. While a price below $70 would be fatal to the current operations of Iran, Venezuela, and Russia, even a high price leads to a shrinkage in export revenue, as rising domestic consumption reduces export volumes by more than the price rise can offset. Furthermore, higher prices accelerate the advance of the previously mentioned technologies. For the first time, we can now estimate how long oil can still hold such an exalted economic status.

Quantifying the Remaining Petro-Yoke :

For the first time, we can make the analysis of both the technological and political pressure exerted by a particular oil price more precise. We can now quantify the rate of technological demand destruction, and predict the actual number of years before oil ceases to have any ability to cause economic recessions, and before regimes like Iran, Venezuela, and Russia can no longer subsist on oil exports to the same degree. This brings me to the second declaration of this article :

From the start of 2011, measure the dollar-years of area enclosed by a chart of the price of oil above $70. There are only 200 such dollar-years remaining for the current world petro-order. We can call this the 'Law of Finite Petrotyranny'.

Allow me to elaborate.

Through some proprietary analysis, I have calculated the remaining lifetime of oil's economic importance as follows :

From the start of 2011, take the average price of West Texas Intermediate (WTI), Brent, or NYMEX oil, and subtract $70 from that, each year.

Take the number accumulated, and designate that as 'X' dollar-years.

As soon as X equals 200 dollar-years, oil will not just fall below $70, but will never again be a large enough portion of world GDP to have a significant macroeconomic impact.

You can plug in your own numbers to estimate the year in which oil will cease to exert such power. For example, if you believe that oil will average $120, which is $50 above the $70 floor, then the X points are expended at a rate of $50/year, meaning depletion at the end of 2014. If oil instead averages just $100, then the X points are expended at $30/year, meaning it will take 6.67 years, or until late 2017, to consume them. Points are only depleted when oil is above $70, but are not restored if oil is below $70 (as research projects may be discontinued or postponed, but work already done is not erased). For those who (wrongly) insist that oil will soon be $170, the good news for them is that in such an event they will see the X points depleted in just two short years. The graph provides three scenarios, of oil averaging $120, $110, and $100, indicating the year in which each price trend would exhaust the 200 X points from points A, B, and C, corresponding to the area of each of the three rectangles. In reality, price fluctuations will cause variations in the rate of X point depletion, but you get the idea.
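The depletion arithmetic above can be sketched in a few lines (a minimal illustration assuming a constant average price each year, as in the scenarios; the function and variable names are mine):

```python
# Sketch of the 'Law of Finite Petrotyranny' arithmetic: a budget of
# 200 dollar-years above a $70 floor is burned down from the start of 2011.
FLOOR, BUDGET, START = 70, 200, 2011

def depletion_year(avg_price):
    """Fractional year in which the 200 dollar-years are exhausted."""
    burn = max(avg_price - FLOOR, 0)   # dollar-years consumed per year
    if burn == 0:
        return None                    # no depletion at or below the floor
    return START + BUDGET / burn       # e.g. 2015.0 means end of 2014

for price in (120, 110, 100):
    print(f"Oil averaging ${price}: X points exhausted around {depletion_year(price):.2f}")
```

The $120 case burns $50/year and exhausts the budget after four years (end of 2014), while $100 burns $30/year and lasts 6.67 years (late 2017), matching the scenarios above.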

Keep in mind the Law of Finite Petrotyranny, and on that basis, welcome any increase in oil prices as the hastening force of oil replacement that it is. My personal opinion? We average about $100/barrel, causing depletion of the X points in 2017 (scenario 'C' in green).

Conclusion :

So what happens after the Law of Finite Petrotyranny manifests itself? Let me pre-empt the strawmen that critics will erect, and state that oil will still be an important source of energy. But most people will no longer care about the price of oil, much as the average person does not keep track of the price of natural gas or coal. Oil will simply be a fuel no longer important enough to cause recessions or greatly alter consumer behavior through short-term spikes. Many OPEC countries will see a great reduction in their power, and will no longer be able to placate their citizens through petro-handouts alone. These countries would do well to act now and diversify their economies, phase in civil liberties while they can still do so incrementally, and prepare for a future of much lower leverage over their current customers.

So cheer oil prices higher so that the X points get frittered away quickly. It will be fun.

Observers have been waiting for carbon nanotubes, buckyballs, and graphene to transform the world for quite some time, and the wait has been longer than expected. Enthusiasts for these new miracle materials have all but vanished. Is this pessimism warranted? Where does the state of innovation in the various forms of carbon, which could yield ultra-strong, ultra-light materials and superfast computing, really stand?

Ultra-dense computing and storage : Graphene transistors smaller than 1 nanometer have been demonstrated. Carbon allotropes could keep the exponential doubling of both computing and storage capacity going well into the 2030s.

Carbon Fiber Vehicles : This lightweight, ultrastrong material can save vast amounts of fuel by reducing the weight of cars and airplanes. While premium products such as the $6000 Trek Madone bicycles are already made from carbon fiber, greater volume is reducing prices and will soon make the average car much lighter than it is today, increasing fuel efficiency and reducing traffic fatalities.

Energy Storage : Natural Gas is not only much cheaper than oil per unit of energy (oil would have to drop to about $30 to match current NG prices), but the supply of NG is more evenly distributed across the world than the oil supply. The US alone has an enormous reserve of natural gas that could ensure total energy independence. The main problem with NG is storage, which is the primary reason oil displacement is not happening rapidly. But microporous carbon can effectively act as a sponge for natural gas, enabling safe and easy transport. This could potentially change the entire energy map.

There are other applications beyond these core three, but suffice it to say, the allotropes of carbon can perform a greater variety of functions than any other material available to us today. Watch for indications of carbon allotropes popping up in the strangest of places, and know that each emergence drives the cost down ever lower.

This is a version 2.0 of a legendary article written here back on March 19, 2006, noticed and linked by Hugh Hewitt, which led to The Futurist getting on the blogosphere map for the first time. Less than four years have elapsed since the original publication, but the landscape of global warfare has changed substantially over this time, warranting an update to the article.

Given the massive media coverage of the minutiae of the Iraq War, and the fashionable fad of being opposed to it, one could be led to think that this is one of the largest wars ever fought. Therein lies the proof that we are actually living in the most peaceful time ever in human history.

Just a few decades ago, wars and genocides killing upwards of a million people were commonplace, with more than one often underway at once. Remember these?

We can thus conclude that by historical standards, the current Iraq War was tiny, and can barely be found on the list of historical death tolls. That it got so much attention merely indicates how little warfare is going on in the world, and how ignorant of historical realities most people are.

Why have so many countries quietly adapted to peaceful coexistence? Why is a war between Britain and France, or Russia and Germany, or the US and Japan, nearly impossible today? Why are we not seeing a year like 1979, where the entire continent of Asia threatened to fly apart due to three major events happening at once (Iranian Revolution, Soviet Invasion of Afghanistan, Chinese invasion of Vietnam)?

We can start with the observation that never have two democratic countries, with per-capita GDPs greater than $10,000/year on a PPP basis, gone to war with each other. The decline in warfare in Europe and Asia correlates closely with multiple countries meeting these two conditions over the last few decades, and this can continue as more countries graduate to this standard of freedom and wealth. The chain of logic is as follows :

1) Nations with elected governments and free-market systems tend to be the overwhelming majority of countries that achieve per-capita incomes greater than $10,000/year. Only a few petro-tyrannies are the exception to this rule.

2) A nation with high per-capita income tends to conduct extensive trade with other nations of high prosperity, resulting in the ever-deepening integration of these economies with each other. A war would disrupt the economies of both participants as well as those of neutral trading partners. Since the citizens of these nations would suffer financially from such a war, it is not considered by elected officials.

3) As more of the world's people gain a vested interest in the stability and health of the interlocking global economic system, fewer and fewer countries will consider international warfare as anything other than a lose-lose proposition.

4) More nations can experience their citizenry moving up Maslow's Hierarchy of Needs, allowing knowledge-based industries to thrive, and thus making international trade continuously easier and more extensive.

5) Since economic growth is continuously accelerating, many countries have crossed the $10,000/yr barrier in just the last 20 years, and so the reduction in warfare after 1991 has been drastic even though there was little apparent reduction over the 1900-1991 period.

This explains the dramatic decline in war deaths across Europe, East Asia, and Latin America over the last few decades. Thomas Friedman has a similar theory, called the Dell Theory of Conflict Prevention, wherein no two countries linked by a major supply chain/trade network (such as that of a major corporation like Dell Computer), have ever gone to war with each other, as the cost of losing the presence of major industries through war is prohibitive to both parties. If this is the case, then the combinations of countries that could go to war with each other continues to drop quickly.

To predict the future risk of major wars, we can begin by assessing the state of some of the largest and/or riskiest countries in the world. Success at achieving democracy and a per-capita GDP greater than $10,000/yr are highlighted in green. We can also throw in the UN Human Development Index, which is a composite of these two factors, and track the rate of progress of the HDI over the last 30 years. In general, countries with scores greater than 0.850, consistent with near-universal access to consumer-class amenities, have met the aforementioned requirements of prosperity and democracy. There are many more countries with a score greater than 0.850 today than there were in 1975.

Let's see how some select countries stack up.

China : The per-capita income is rapidly closing in on the $10,000/yr threshold, but democracy is a distant dream. I have stated that China will see a sharp economic slowdown in the next 10 years unless they permit more personal freedoms, and thus nurture entrepreneurship. Technological forces will continue to pressure the Chinese Communist Party, and if this transition is moderately painless, the ripple effects will be seen in most of the other communist or autocratic states that China supports, and will move the world strongly towards greater peace and freedom. The single biggest question for the world is whether China's transition happens without major shocks or bloodshed. I am optimistic, as I believe the CCP is more interested in economic gain than clinging to an ideology and one-party rule, which is a sharp contrast from the Mao era where 40 million people died over ideology-driven economic schemes. Cautiously optimistic.

India : A secular democracy has existed for a long time, but economic growth lagged far behind. Now, India is catching up, and will soon be a bulwark for democracy and stability for the whole world. Some of the most troubled countries in the world, from Burma to Afghanistan, border India and could transition to stability and freedom under India's sphere of influence. India is only now realizing how much the world will depend on it. Optimistic.

Russia : A lack of progress in the HDI is a total failure, enabling many countries to overtake Russia over the last 15 years. Putin's return to dictatorial rule is a further regression in Russia's progress. Hopefully, energy and technology industries can help Russia increase its population growth rate, and up its HDI. Cautiously optimistic.

Indonesia : With more Muslims than the entire Middle East put together, Indonesia took a large step towards democracy in 1999 (improving its HDI score), and is doing moderately well economically. Economic growth needs to accelerate in order to cross $10,000/yr per capita by 2020. Cautiously optimistic.

Pakistan : My detailed Pakistan analysis is here. The divergence between the paths of India and Pakistan has been recognized by the US, and Pakistan, with over 50 nuclear warheads, is also where Osama bin Laden and thousands of other terrorists are currently hiding. Any 'day of infamy' that the US encounters will inevitably be traced to individuals operating in Pakistan, which has regressed from democracy to dictatorship, and is teetering on the edge of religious fundamentalism. The economy is growing quickly, however, and this is the only hope of averting a disaster. Pakistan will continue to struggle between emulating the economic progress of India and descending into the dysfunction of Afghanistan. Pessimistic.

Iraq : Although Iraq is not a large country, its importance to the world is disproportionately significant. Bordering so many other non-democratic nations, our hard-fought victory in Iraq now places great pressure on all remaining Arab states. The destiny of the US is also intertwined with Iraq, as the outcome of the current War in Iraq will determine the ability of America to take any other action, against any other nation, in the future. Optimistic.

Iran : Many would be surprised to learn that Iran is actually not all that poor, and the Iranian people have enough to lose that they are not keen on a large war against a US military that could dispose of Iran's military just as quickly as it did Saddam's. However, the autocratic regime that keeps the Iranian people suppressed has brutally quashed democratic movements, most recently in the summer of 2009. The secret to turning Iran into a democracy is its neighbor, Iraq. If Iraq can succeed, the pressure on Iran exerted by Internet access and globalization next door will be immense. This will continue to nibble at the edges of Iranian society, and the regime will collapse before 2015 even without a US invasion. If Iran's leadership insists on a confrontation over their nuclear program, the regime will collapse even sooner. Cautiously optimistic.

But smaller-scale terrorism is nothing new. It just was not taken as seriously back when nations were fighting each other in much larger conflicts. The 1983 Beirut bombing that killed 241 Americans did not dominate the news for more than two weeks, as it was during the far more serious Cold War. Today, the absence of wars between nations brings terrorism into the spotlight that it could not have previously secured.

Wars against terrorism have been a paradigm shift, because where a war like World War II involved symmetrical warfare between declared armies, the War on Terror involves asymmetrical warfare in both directions. Neither party has yet gained a full understanding of the power it has over the other.

A few terrorists with a small budget can kill thousands of innocents without confronting a military force. Guerilla warfare can tie down the mighty US military for years until the public grows weary of the stalemate, even while the US cannot permit itself to use more than a tiny fraction of its power in retaliation. Developed nations spend vastly more money on political and media activities centered around the mere discussion of terrorism than the terrorists themselves need to finance a major attack on these nations.

At the same time, pervasively spreading Internet access, satellite television, and consumer brands continue to disseminate globalization and lure the attention of young people in terrorist states. We saw exactly this in Iran in the summer of 2009, where state-backed murders of civilian protesters were videotaped by cameraphone, and immediately posted online for the world to see. This unrelentingly and irreversibly erodes the fabric of pre-modern fanaticism at almost no cost to the US and other free nations. The efforts by fascist regimes to obstruct the mists of the information ethersphere from entering their societies are so futile as to be comical, and the Iranian regime may not survive the next uprising, when even more Iranians will have camera phones handy. Bidirectional asymmetry is the new nature of war, and the side that learns how to harness the asymmetrical advantage it has over the other is the side that will win.

It is the wage of prosperous, happy societies to be envied, hated, and forced to withstand threats that they cannot reciprocate back onto the enemy. The US has overcome foes as formidable as the Axis Powers and the Soviet Union, yet has had to adapt considerably to gain the upper hand against a pre-modern, unprofessional band of deviants that does not even have the resources of a small nation and has not invented a single technology. The War on Terror was thus ultimately not with the terrorists, but with ourselves - our complacency, short attention spans, and propensity for fashionable ignorance over the lessons of history.

But 44 months turned out to be a very long time, during which we went from a highly uncertain position in the War on Terror to one of distinct advantage. Whether we continue to maintain the upper hand that we currently have, or become too complacent and let the terrorists kill a million of us in a day remains to be seen.

The Singularity. The event when the rate of technological change becomes human-surpassing, just as the advent of human civilization a few millennia ago surpassed the comprehension of non-human creatures. So when will this event happen?

There is a great deal of speculation on the 'what' of the Singularity, whether it will create a utopia for humans, cause the extinction of humans, or some outcome in between. Versions of optimism (Star Trek) and pessimism (The Matrix, Terminator) all become fashionable at some point. No one can predict this reliably, because the very definition of the singularity itself precludes such prediction. Given the accelerating nature of technological change, it is just as hard to predict the world of 2050 from 2009, as it would have been to predict 2009 from, say, 1200 AD. So our topic today is not going to be about the 'what', but rather the 'when' of the Singularity.

Let us take a few independent methods to arrive at estimations on the timing of the Singularity.

1) Ray Kurzweil has constructed this logarithmic chart that combines 15 unrelated lists of key historic events since the Big Bang 15 billion years ago. The exact selection of events is less important than the undeniable fact that the intervals between such independently selected events are shrinking exponentially. This, of course, means that the next several major events will occur within single human lifetimes.

Kurzweil wrote with great confidence, in 2005, that the Singularity would arrive in 2045. One thing I find about Kurzweil is that he usually predicts the nature of an event very accurately, but overestimates the rate of progress by 50%. Part of this is because he insists that computer power per dollar doubles every year, when it actually doubles every 18 months, which results in every other date he predicts being distorted as a downstream byproduct of this figure. Another part of this is that Kurzweil, born in 1948, is famously taking extreme measures to extend his lifespan, and quite possibly may have an expectation of living until 100 but not necessarily beyond that. A Singularity in 2045 would be before his century mark, but herein lies a lesson for us all. Those who have a positive expectation of what the Singularity will bring tend to have a subconscious bias towards estimating it to happen within their expected lifetimes. We have to be watchful enough to not let this bias influence us. So when Kurzweil says that the Singularity will be 40 years from 2005, we can apply the discount to estimate that it will be 60 years from 2005, or in 2065.
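The adjustment described above amounts to stretching Kurzweil's intervals by the ratio of the observed doubling time to his assumed one; a minimal sketch of that arithmetic:

```python
# Kurzweil assumes computer power per dollar doubles every 12 months;
# the observed trend is closer to every 18 months. Predicted timelines
# scale with the doubling time, stretching his intervals by 18/12 = 1.5x.
assumed_doubling = 12    # months (Kurzweil's assumption)
observed_doubling = 18   # months (observed trend)
discount = observed_doubling / assumed_doubling   # 1.5

kurzweil_interval = 40   # years: his 2005 prediction of a 2045 Singularity
adjusted = kurzweil_interval * discount           # 60 years
print(2005 + adjusted)   # → 2065.0
```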

2) John Smart is a brilliant futurist with a distinctly different view on accelerating change from Ray Kurzweil, but he has produced very little visible new content in the last 5 years. In 2003, he predicted the Singularity for 2060, +/- 20 years. Others like Hans Moravec and Vernor Vinge also have declared predictions at points in the mid/late 21st century.

3) Ever since the start of the fictional Star Trek franchise in 1966, its writers have made a number of predictions about the decades since, with impressive accuracy. In Star Trek canon, humanity experiences a major acceleration of progress starting from 2063, upon first contact with an extraterrestrial civilization. While my views on first contact are somewhat different from the Star Trek prediction, it is interesting to note that their version of a 'Singularity' happened to occur in 2063 (as per the 1996 film Star Trek : First Contact).

4) Now for my own methodology. We shall first take a look at a novel from 1863 by Jules Verne, titled "Paris in the 20th Century". Set about a century in the future from Verne's perspective, the novel predicts innovations such as air conditioning, automobiles, helicopters, fax machines, and skyscrapers in detail. Such accuracy makes Jules Verne the greatest futurist of the 19th century, but notice how his predictions involve innovations that occurred within 120 years of writing. Verne did not predict exponential growth in computation, genomics, artificial intelligence, cellular phones, and other innovations that emerged more than 120 years after 1863. Thus, Jules Verne was up against a 'prediction wall' of 120 years, which was much longer than a human lifespan in the 19th century.

So we can return to the Impact of Computing as a driver of the 21st century economy. In that article, I wrote about how roughly $700 billion per year as of 2008, or 1.5% of World GDP, consists of products that improve at an average of 59% a year per dollar spent. Moore's Law is a subset of this, but this cost deflation applies to storage, software, biotechnology, and a few other industries as well.

If products tied to the Impact of Computing are 1.5% of the global economy today, what happens when they are 3%? 5%? Perhaps we would reach a Singularity when such products are 50% of the global economy, because from that point forward, the other 50% would very quickly diminish into a tiny percentage of the economy, particularly if that 50% was occupied by human-surpassing artificial intelligence.

We can thus calculate a range of dates by when products tied to the Impact of Computing become more than half of the world economy. In the table, the columns signify whether one assumes that 1%, 1.5%, or 2% of the world economy is currently tied, and the rows signify the rate at which this percentage share of the economy is increasing, whether 6%, 7%, or 8%. This range is derived from the fact that the semiconductor industry has a 12-14% nominal growth trend, while nominal world GDP grows at 6-7% (some of which is inflation). Another way of reading the table is that if you consider the Impact of Computing to affect 1% of World GDP, but that share grows by 8% a year, then that 1% will cross the 50% threshold in 2059. Note how a substantial downward revision in the assumptions moves the date outward only by years, rather than centuries or even decades.
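The compounding behind the table can be reproduced in a few lines; a sketch assuming the share grows at a constant differential rate from a 2008 baseline (the 1% share / 8% growth cell lands on 2059, matching the figure cited above):

```python
import math

def crossover_year(initial_share, differential_growth, start_year=2008):
    """Year in which a sector growing `differential_growth` faster than
    world GDP exceeds 50% of the economy, from `initial_share` today."""
    years = math.log(0.50 / initial_share) / math.log(1 + differential_growth)
    return start_year + years

# Rows of the table: differential growth; columns: assumed current share.
for g in (0.06, 0.07, 0.08):
    for share in (0.01, 0.015, 0.02):
        print(f"growth {g:.0%}, share {share:.1%}: {crossover_year(share, g):.0f}")
```

Note how insensitive the crossover date is to the starting share: halving the share from 2% to 1% pushes the date out by only about a decade, which is the point made above.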

We see these parameters deliver a series of years, with the median values arriving at around the same dates as aforementioned estimates. Taking all of these points in combination, we can predict the timing of the Singularity. I hereby predict that the Technological Singularity will occur in :

2060-65 ± 10 years

So the earliest that it can occur is 2050 (hence the URL of this site), and the latest is 2075, with the highest probability of occurrence in 2060-65. There is virtually no statistical probability that it can occur outside of the 2050-75 range (sorry, Ray).

So now we know the 'when' of the Singularity. We just don't know the 'what', nor can we with any certainty.

Almost 3 years ago, in October of 2006, I first wrote about Cisco's Telepresence technology which had just launched at that time, and how video conferencing that was virtually indistinguishable from reality was eventually going to sharply increase the productivity and living standards of corporate employees (image : Cisco).

At that time, Cisco and Hewlett Packard both launched full-room systems that cost over $300,000 per room. Since then, there has not been any price drop from either company, which is unheard of for a system with components subject to Moore's Law rates of price declines. This indicates that market demand has been high enough for both Cisco and HP to sustain pricing power and improve margins. Smaller companies like LifeSize, Polycom, and Teleris have lower-end solutions for as little as $10,000, that have also been selling briskly, but have not yet dragged down the Cisco/HP price tier.

In a trend that could transform the way companies do business, Cisco Systems has slashed its annual travel budget by two-thirds — from $750 million to $240 million — by using similar conferencing technology to replace air travel and hotel bills for its vast workforce.

Likewise, Hewlett-Packard says it sliced 30 percent of its travel expenses from 2007 to 2008 — and expects even better results for 2009 — in large part because of its video conference technology.

If Cisco can chop its travel expenses by two-thirds, and save $500 million per year (which increases their annual profit by a not-insignificant 6-10%), then every other large corporation can save a similar magnitude of money. For corporations with very narrow operating margins, the savings could have a dramatic impact on operating earnings, and therefore stock price. The Fortune 500 alone (excluding airline and hotel companies) could collectively save $100 billion per year, in a wave set to begin immediately if either Cisco or HP drops the price of their solution, which may happen in a matter of months. We will soon see that for every $20 that corporations used to spend on air travel and hotels, they will instead be spending only $1 on videoconferencing expenses. This is a gigantic gain in enterprise productivity.
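As a back-of-envelope check, the figures quoted above are internally consistent; the implied profit base below is my own inference from the stated 6-10% range, not a reported number:

```python
# All inputs are figures stated in the text, in $ millions per year.
travel_before = 750            # Cisco's travel budget before telepresence
travel_after = 240             # after cutting it by roughly two-thirds
savings = travel_before - travel_after   # 510, i.e. roughly $500M/yr

# If ~$500M of savings lifts annual profit by 6-10%, the implied
# profit base is about $5.0-8.3 billion (an inference, not a reported figure):
profit_base_low = 500 / 0.10   # $5,000M
profit_base_high = 500 / 0.06  # ~$8,333M
print(savings, profit_base_low, round(profit_base_high))
```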

Needless to say, high-margin airline revenue from flights between major business centers (such as San Francisco-Taipei or New York-London) will be slashed, and airlines will have to consolidate to fewer flights, making schedules even less suitable for business travel and losing even more passengers. Hotels will have to consolidate, and taxis and restaurants in business hubs will suffer as well. But these are merely the most obvious of disruptions. What is even more interesting are the less obvious ripple effects that only manifest a few years later, which are :

1) Employee Time and Hassle : Anyone who has had to travel to another continent for a Mon-Fri workweek trip knows that the process of taking a taxi to the airport, waiting 2 hours at the airport, the flight itself, and the ride to the final destination consumes most of the weekends on either side of the trip. Most senior executives log over 200,000 miles of flight per year. This is a huge drag on personal time and quality of life. Travel on weekdays consumes productive time that the employer could benefit from, which for senior executives, could be worth thousands of dollars per hour. Furthermore, in an era of superviruses, we have already seen SARS, bird flu, and swine flu as global pandemic threats within the last few years. A reduction of business travel will slow down the rate at which such viruses can spread across the globe and make quarantines less inconvenient for business (although tourist travel and remaining business travel are still carriers of this).

2) Real Estate Prices in Expensive Areas : Home prices in Manhattan and Silicon Valley are presently 4X or more higher than a home of the same square footage 80 miles away. By 2015, the single-screen solution that Cisco sells for $80,000 today may cost as little as $2000, and those from LifeSize and others may be even cheaper, so hosting meetings with colleagues from a home office might be as easy as running a conference call. A good portion of employees who have small children may find it possible to do their jobs in a manner that requires them to go to their corporate office only once or twice a week. If even 20% of employees choose to flee the high-cost housing near their offices, the real estate prices in Manhattan and Silicon Valley will deflate significantly. While this is bad news for owners of real-estate in such areas, it is excellent news for new entrants, who will see an increase in their purchasing power. Best of all, working families may be able to afford to have children that they presently cannot finance.

3) Passenger Aviation Technological Leap : Airlines and aircraft manufacturers have little recourse but to respond to these disruptions with innovations of their own, of which the only compelling possibility is to have each journey take far less time. It is apparent that there has been little improvement in the speed of passenger aircraft in the last 40 years. J. Storrs Hall at the Foresight Institute has an article up with a chart that shows the improvements and total flattening of the speed of passenger airline travel. The costs of staying below Mach 1 vs. being above it are very different, by as much as 3X, which accounts for the sudden halt in speed gains just below the speed of sound after the early 1960s. However, the technologies of supersonic aircraft (which exist, of course, in military planes) are dropping in price, and it is possible that suborbital passenger flight could be available for the cost of a first-class ticket by 2025. The Ansari X-prize contest and Space Ship Two have already demonstrated early incarnations of what could scale up to larger planes. This will not reverse the video-conferencing trend, of course, but it will make the airlines more competitive for those interactions that have to be in person.

So we are about to see a cascade of disruptions pulsate through the global economy. While in 2009, you may have no choice but to take a 14-hour flight (each way) to Asia, in 2025, the similar situation may present you with a choice between handling the meeting with the videoconferencing system in your home office vs. taking a 2-hour suborbital flight to Asia.

On April 1, 2006, I wrote a detailed article on the revolutionary changes that were to occur in the concept of home entertainment by 2012 (see Part I and Part II of the article). Now, in 2009, half of the time within the six-year span between the original article and the prediction has elapsed. Of course, given the exponential nature of progress, much more happens within the second half of any prediction horizon relative to the first half.

The prediction issued in 2006 was:

Video Gaming (which will no longer be called this) will become a form of entertainment so widely and deeply partaken in that it will reduce the time spent on watching network television to half of what it is (in 2006), by 2012.

The basis of the prediction was detailed in various points from the original article, which in combination would lead to the outcome of the prediction. The progress as of 2009 around these points is as follows :

1) Graphics approach photorealism : The number of polygons per square inch on the screen is a technology that is closely tied to The Impact of Computing, and can only rise steadily. The 'uncanny valley' is a hurdle that designers and animators will take a couple of years to overcome, but overcoming this barrier is inevitable as well.

2) Flat-screen HDTVs reach commodity prices : This has already happened, and prices will continue to drop so that by 2012, 50-inch sets with high resolution will be under $1000. A thin television is important, as it clears the room to allow more space for the movement of the player. A large size and high resolution are equally important, in order to create an immersive visual experience.

We are rapidly trending towards LED and Organic LED (OLED) technologies that will enable TVs to be less than one centimeter thick, with ultra-high resolution.

3) Speech and motion recognition as control technologies : When the original article was written on April 1, 2006, the Nintendo Wii was not yet available in the market. But as of June 2009, 50 million units of the Wii have sold, and many of these customers did not own any game console prior to the Wii.

4) More people are migrating away from television, and towards games : Television viewership is plummeting, particularly among the under-50 audience, as projected in the original 2006 article. Fewer and fewer television programs of any quality are being produced, as creative talent continues to leak out of television network studios. At the same time, World of Warcraft has 11 million subscribers, and as previously mentioned, the Wii has 50 million units in circulation.

There are only so many hours of leisure available in a day, and Internet surfing, movies, and video games are all more compelling than the ever-declining quality of television offerings. Children have already moved away from television, and the trend will creep up the age scale.

5) Some people can earn money through games : There are an increasing number of ways in which avid players can earn real money from activities within a game. From the trading of items to the selling of characters, this market is estimated at over $1 billion in 2008, and is growing. Highly skilled players already earn thousands of dollars per year this way, and with more participants joining through more advanced VR experiences described above, this will attract a group of people who are able to earn a full-time living through these VR worlds. This will become a viable form of entrepreneurship, just like eBay and Google Ads support entrepreneurial ecosystems today.

Taking all 5 of these points in combination, the original 2006 prediction appears to be on track. By 2012, hours spent on television will be half of what they were in 2006, with sports and major live events being the only forms of programming that retain their audience.

Overall, the prediction seems to be well on track. Disruptive technologies are in the pipeline, and there is plenty of time for each of these technologies to combine into unprecedented new applications. Let us see what the second half of the time interval, between now and 2012, delivers.

The Search for Extra-Terrestrial Intelligence (SETI) seeks to answer one of the most basic questions of human identity - whether we are alone in the universe, or merely one civilization among many. It is perhaps the biggest question that any human can ponder.

The Drake Equation, created by astronomer Frank Drake in 1960, calculates the number of advanced extra-terrestrial civilizations in the Milky Way galaxy in existence at this time. Watch this 8-minute clip of Carl Sagan in 1980 walking the audience through the parameters of the Drake Equation. The Drake equation manages to educate people on the deductive steps needed to understand the basic probability of finding another civilization in the galaxy, but as the final result varies so greatly based on even slight adjustments to the parameters, it is hard to make a strong argument for or against the existence of extra-terrestrial intelligence via the Drake equation. The most speculative parameter is the last one, fL, which is an estimation of the total lifespan of an advanced civilization. Again, this video clip is from 1980, and thus only 42 years after the advent of radio astronomy in 1938. Another 29 years, or 70%, have since been added to the age of our radio-astronomy capabilities, and the prospect of nuclear annihilation of our civilization is far lower today than it was in 1980. No matter how ambitious or conservative a stance you take on the other parameters, the value of fL, in terms of our own civilization, continues to rise. This leads us to our first postulate :

The expected lifespan of an intelligent civilization is rising.
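For reference, the Drake Equation multiplies seven factors: N = R* · fp · ne · fl · fi · fc · L. A minimal sketch follows; the parameter values are purely illustrative placeholders (every one of them is contested), chosen only to show how the last factor, L, drives the result:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* * fp * ne * fl * fi * fc * L (Drake, 1960)."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative values only -- each parameter is hotly debated:
N = drake(R_star=10,   # stars formed per year in the Milky Way
          f_p=0.5,     # fraction of stars with planets
          n_e=2,       # habitable planets per planet-bearing system
          f_l=0.3,     # fraction of those where life emerges
          f_i=0.1,     # fraction developing intelligence
          f_c=0.1,     # fraction emitting detectable signals
          L=10_000)    # years a civilization remains detectable (fL above)
print(f"{N:.0f} detectable civilizations")
```

Doubling L doubles N while every other factor stays fixed, which is why a rising expected civilization lifespan matters so much.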

Carl Sagan himself believed that in such a vast cosmos, intelligent life would have to emerge in multiple locations, and the cosmos was thus 'brimming over' with intelligent life. On the other side are various explanations for why intelligent life will be rare. The Rare Earth Hypothesis argues that the combination of conditions that enabled life to emerge on Earth are extremely rare. The Fermi Paradox, originating back in 1950, questions the contradiction between the supposed high incidence of intelligent life, and the continued lack of evidence of it. The Great Filter theory suggests that many intelligent civilizations self-destruct at some point, explaining their apparent scarcity. This leads to the conclusion that the easier it is for civilization to advance to our present stage, the bleaker our prospects for long-term survival, since the 'filter' that other civilizations collided with may still lie ahead of us. A contrarian case can thus be made that the longer we go without detecting another civilization, the better.

But one dimension that is conspicuously absent from all of these theories is an accounting for the accelerating rate of change. I have previously provided evidence that telescopic power is also an accelerating technology. After the invention of the telescope by Galileo in 1609, major discoveries used to be several decades apart, but now are only separated by years. An extrapolation of various discoveries enabled me to crudely estimate that our observational power is currently rising at 26% per year, even though the first 300 years after the invention of the telescope only saw an improvement of 1% a year. At the time of the 1980 Cosmos television series, it was not remotely possible to confirm the existence of any extrasolar planet or to resolve any star aside from the sun into a disk. Yet, both were accomplished by the mid-1990s. As of May 2009, we have now confirmed a total of 347 extrasolar planets, with the rate of discovery rising quickly. While the first confirmation was not until 1995, we now are discovering new planets at a rate of 1 per week. With a number of new telescope programs being launched, this rate will rise further still. Furthermore, most of the planets we have found so far are large. Soon, we will be able to detect planets much smaller in size, including Earth-sized planets. This leads us to our second postulate :

Telescopic power is rising quickly, possibly at 26% a year.
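To see what such a rate implies, a short compounding sketch contrasting the historical ~1%/yr era with the estimated 26%/yr of today (both rates are the rough estimates given above):

```python
# Gain in observational power over a 30-year span at each growth rate.
slow = 1.01 ** 30   # ~1.35x: the pace of the first 300 years after Galileo
fast = 1.26 ** 30   # over 1,000x: the estimated modern pace
print(f"30-year gain: {slow:.2f}x historically vs {fast:,.0f}x today")
```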

This Jet Propulsion Laboratory chart of exoplanet discoveries through 2004 is very overdue for an update, but is still instructive. The x-axis is the distance of the planet from the star, and the y-axis is the mass of the planet. All blue, red, and yellow dots are exoplanets, while the larger circles with letters in them are our own local planets, with the 'E' being Earth. Most exoplanet discoveries up to that time were of Jupiter-sized planets that were closer to their stars than Jupiter is to the sun. The green zone, or 'life zone' is the area within which a planet is a candidate to support life within our current understanding of what life is. Even then, this chart does not capture the full possibilities for life, as a gas giant like Jupiter or Saturn, at the correct distance from a Sun-type star, might have rocky satellites that would thus also be in the life zone. In other words, if Saturn were as close to the Sun as Earth is, Titan would also be in the life zone, and thus the green area should extend vertically higher to capture the possibility of such large satellites of gas giants. The chart shows that telescopes commissioned in the near future will enable the detection of planets in the life zone. If this chart were updated, a few would already be recorded here. Some of the missions and telescopes that will soon be sending over a torrent of new discoveries are :

Kepler Mission : Launched in March 2009, the Kepler Mission will continuously monitor a field of 100,000 stars for the transit of planets in front of them. This method has a far higher chance of detecting Earth-sized planets than prior methods, and we will see many discovered by 2010-11.

COROT : This European mission was launched in December 2006, and uses a similar method as the Kepler Mission, but is not as powerful. COROT has discovered a handful of planets thus far.

New Worlds Mission : This 2013 mission will build a large sunflower-shaped occulter in space to block the light of nearby stars to aid the observation of extrasolar planets. A large number of planets close to their stars will become visible through this method.

Square Kilometer Array : Far larger and more powerful than the Allen Telescope Array, the SKA will be in full operation by 2020, and will be the most sensitive radio telescope ever. The continual decline in the price of processing technology will enable the SKA to scour the sky thousands of times faster than existing radio telescopes.

These are merely the missions that are already under development or even under operation. Several others are in the conceptual phase, and could be launched within the next 15 years. So many methods of observation used at once, combined with the cost improvements of Moore's Law, leads us to our third postulate, which few would have agreed with at the time of 'Cosmos' in 1980 :

Thousands of planets in the 'life zone' will be confirmed by 2025.

Now, we will revisit the under-discussed factor of accelerating change. Out of the 4.5 billion years of Earth's existence, it has hosted a civilization capable of radio astronomy for only 71 years. But as our technology advances on a multitude of fronts, through the accelerating rate of change and the Impact of Computing, the power of our telescopes increases each year, and the signals of intelligence (radio and TV) emitted from Earth move out one more light year. Thus, the probability of our detecting someone, and of our being detected by them, however small, is now rising quickly. Our civilization gained far more in both detectability and detection capability in the 30 years between 1980 and 2010 than in the 30 years between 1610 and 1640, when Galileo was persecuted for his discoveries and support of heliocentrism, and certainly more than in the 30 years between 70,000,030 and 70,000,000 BC, when no advanced civilization existed on Earth and the dominant life form was Tyrannosaurus.
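A rough sketch of why this matters: the volume of Earth's expanding 'radio bubble', and hence the number of stars it reaches, grows with the cube of its radius. The stellar density figure below is an outside assumption (a common solar-neighborhood estimate), not a figure from this article:

```python
import math

STAR_DENSITY = 0.004  # stars per cubic light-year; rough solar-neighborhood estimate

def stars_within(radius_ly):
    """Stars enclosed by a signal sphere of the given radius, assuming uniform density."""
    return STAR_DENSITY * (4.0 / 3.0) * math.pi * radius_ly ** 3

print(round(stars_within(71)))  # ~6,000 stars reached by 2009's 71-year-old signals
print(round(stars_within(92)))  # ~13,000 stars reached by 2030
```

Even though the bubble's radius grows only linearly, the number of stars our signals wash over more than doubles between 2009 and 2030.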

Nikolai Kardashev has devised a scale to measure the level of advancement that a technological civilization has achieved, based on their energy technology. This simple scale can be summarized as follows :

Type I : A civilization capable of harnessing all the energy available on their planet.

Type II : A civilization capable of harnessing all the energy available from their star.

Type III : A civilization capable of harnessing all the energy available in their galaxy.

The scale is logarithmic, and our civilization currently would receive a Kardashev score of 0.72. We could potentially achieve full Type I status by the mid-21st century due to a technological singularity. Some have estimated that our exponential growth could elevate us to Type II status by the late 22nd century.
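The 0.72 score can be reproduced with Carl Sagan's continuous interpolation of the Kardashev scale, K = (log10(P) - 6) / 10 with P in watts. The ~16 TW figure for humanity's 2009 power consumption is an outside estimate, not from this article:

```python
import math

def kardashev_score(power_watts):
    """Sagan's continuous Kardashev interpolation: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6.0) / 10.0

def power_for_score(k):
    """Invert the formula: total power in watts implied by a Kardashev score."""
    return 10 ** (6.0 + 10.0 * k)

print(round(kardashev_score(1.6e13), 2))  # ~0.72 for humanity's ~16 TW
print(f"{power_for_score(1.0):.0e}")      # 1e+16 W for full Type I on this scale
```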

This has given rise to another faction in the speculative debate on extra-terrestrial intelligence, a view held by Ray Kurzweil, among others. The theory is that the time it takes for a civilization to go from the earliest mechanical technology to a technological singularity, where artificial intelligence saturates surrounding matter, is so short (a few hundred years) relative to the lifetime of the home planet (a few billion years) that we are likely the first civilization to come this far. Given the rate of advancement, a civilization would have to be just 100 years ahead of us to be so advanced that they would be easy to detect within 100 light years, despite 100 years being such a short fraction of a planet's life. In other words, where a 19th-century Earth would be undetectable to us today, an Earth of the 22nd century would be extremely conspicuous to us from 100 light years away, emitting countless signals across a variety of mediums.

A Type I civilization within 100 light years would be readily detected by our instruments today. A Type II civilization within 1000 light years will be visible to the Allen or the Square Kilometer Array. A Type III would be the only type of civilization that we probably could not detect, as we might have already been within one all along. We do not have a way of knowing if the current structure of the Milky Way galaxy is artificially designed by a Type III civilization. Thus, the fourth and final postulate becomes :

A civilization slightly more advanced than us will soon be easy for us to detect.

The Carl Sagan view of plentiful advanced civilizations is the generally accepted wisdom, and a view that I held for a long time. On the other hand, the Kurzweil view is understood by very few, for even in the SETI community, not that many participants are truly acceleration aware. The accelerating nature of progress, which existed long before humans even evolved, as shown in Carl Sagan's cosmic calendar concept, also from the 1980 'Cosmos' series, simply has to be considered as one of the most critical forces in any estimation of extra-terrestrial life. I have not yet migrated fully to the Kurzweil view, but let us list our four postulates out all at once :

The expected lifespan of an intelligent civilization is rising.

Telescopic power is rising quickly, possibly at 26% a year.

Thousands of planets in the 'life zone' will be confirmed by 2025.

A civilization slightly more advanced than us will soon be easy for us to detect.

The Impact of Computing will ensure that computational power rises 16,000X between 2009 and 2030, by which point our radio astronomy experience will be 92 years old. There are simply too many forces increasing our probability of finding a civilization, if one does indeed exist nearby. It is one thing to know of no extrasolar planets and no civilizations. It is quite another to know of thousands of planets, yet still detect no civilizations after years of searching. That would greatly strengthen the case against the existence of such civilizations, and the case would grow stronger by the year. Thus, these four postulates in combination lead me to conclude that :

Most of the 'realistic' science fiction regarding first contact with another extra-terrestrial civilization portrays that civilization being domiciled relatively nearby. In Carl Sagan's 'Contact', the civilization was from the Vega star system, just 26 light years away. In the film 'Star Trek : First Contact', humans come in contact with Vulcans in 2063, but the Vulcan homeworld is also just 16 light years from Earth. The possibility of any civilization this near to us would be effectively ruled out by 2030 if we do not find any favorable evidence. SETI should still be given the highest priority, of course, as the lack of a discovery is just as important as making a discovery of extra-terrestrial intelligence.

If we do detect evidence of an extra-terrestrial civilization, everything about life on Earth will change. Both 'Contact' and 'Star Trek : First Contact' depicted how an unprecedented wave of human unity swept across the globe upon evidence that humans were, after all, one intelligent species among many. In Star Trek, this led to what essentially became a techno-economic singularity for the human race. As shown in 'Contact', many of the world's religions were turned upside down upon this discovery, and had to revise their doctrines accordingly. Various new cults devoted to the worship of the new civilization formed almost immediately.

If, however, we are alone, then according to many Singularitarians, we will be the ones to determine the destiny of the cosmos. After a technological singularity in the mid-21st century that merges our biology with our technology, we would proceed to convert all matter into artificial intelligence, make use of all the elementary particles in our vicinity, and expand outward at speeds that eventually exceed the speed of light, ultimately saturating the entire universe with our intelligence in just a few centuries. That, however, is a topic for another day.

Anyone who follows technology is familiar with Moore's Law and its many variations, and has come to expect the price of computing power to halve every 18 months. But many people don't see the true long-term impact of this beyond the need to upgrade their computer every three or four years. To not internalize this more deeply is to miss financial opportunities, grossly mispredict the future, and be utterly unprepared for massive, sweeping changes to human society. Hence, it is time to update the first version of this all-important article that was written on February 21, 2006.

Today, we will introduce another layer to the concept of Moore's Law-type exponential improvement. Consider that on top of the 18-month doubling times of both computational power and storage capacity (an annual improvement rate of 59%), both of these industries have grown by an average of approximately 12% a year for the last fifty years. Individual years have ranged between +30% and -12%, but let us say that the trend growth of both industries is 12% a year for the next couple of decades.

So, we can conclude that a dollar buys 59% more power each year, and that 12% more dollars are absorbed by such exponentially improving technology each year. If we combine the two growth rates to estimate the rate of technology diffusion simultaneously with exponential improvement, we get (1.59)(1.12) = 1.78.

The Impact of Computing grows at a scorching pace of 78% a year.
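A minimal sketch of the arithmetic behind this postulate, plus a check of the 16,000X computing figure used elsewhere in this article (both rates are the article's stylized assumptions):

```python
performance_gain = 1.59  # price-performance improves 59%/yr (18-month doubling)
spend_growth = 1.12      # dollars spent on such technology grow 12%/yr

# Combined: each year, a dollar buys more power AND more dollars are absorbed
impact = performance_gain * spend_growth
print(round(impact, 2))  # 1.78 -> the Impact of Computing grows ~78%/yr

# Compounding the per-dollar gain alone over 2009-2030 (21 years):
print(round(performance_gain ** 21))  # ~17,000x, in line with the 16,000X cited
```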

Sure, this is a very imperfect method of measuring technology diffusion, but many visible examples of this surging wave present themselves. Consider the most popular television shows of the 1970s, where the characters had all the household furnishings and electrical appliances that are common today, except for anything with computational capacity. Yet, economic growth has averaged 3.5% a year since that time, nearly doubling the standard of living in the United States since 1970. It is obvious what has changed during this period, to induce the economic gains.

In the 1970s, there was virtually no household product with a semiconductor component. In the 1980s, many people bought basic game consoles like the Atari 2600, had digital calculators, and purchased their first VCR, but only a fraction of the VCR's internals, maybe 20%, was composed of exponentially deflating semiconductors, so VCR prices did not drop that much per year. In the early 1990s, many people began to have home PCs. For the first time, a major, essential home device was pegged to the curve of 18-month halvings in cost per unit of power. In the late 1990s, the PC was joined by the Internet connection and the DVD player.

Now, I want everyone reading this to tally up all the items in their home that qualify as 'Impact of Computing' devices, which is any hardware device where a much more powerful/capacious version will be available for the same price in 2 years. You will be surprised at how many devices you now own that did not exist in the 80s or even the 90s.

Include : Actively used PCs, LCD/Plasma TVs and monitors, DVD players, game consoles, digital cameras, digital picture frames, home networking devices, laser printers, webcams, TiVos, Slingboxes, Kindles, robotic toys, every mobile phone, every iPod, and every USB flash drive. Count each car as 1 node, even though modern cars may have $4000 of electronics in them.

Do not include : Tube TVs, VCRs, film cameras, individual video games or DVDs, or your washer/dryer/oven/clock radio just for having a digital display, as the product is not improving dramatically each year.

If this doesn't persuade people of the exponentially accelerating penetration of information technology, then nothing can.

To summarize, the number of devices in an average home that are on this curve, by decade :

1960s and earlier : 0

1970s : 0-1

1980s : 1-2

1990s : 3-4

2000s : 6-12

2010s : 15-30

2020s : 40-80

The average home of 2020 will have multiple ultrathin TVs hung like paintings, robots for a variety of simple chores, VR-ready goggles and gloves for advanced gaming experiences, sensors and microchips embedded into clothing, $100 netbooks more powerful than $10,000 workstations of today, surface computers, 3-D printers, intelligent LED lightbulbs with motion-detecting sensors, cars with features that even luxury models of today don't have, and at least 15 nodes on a home network that manages the entertainment, security, and energy infrastructure of the home simultaneously.

At the industrial level, the changes are even greater. Just as telephony, photography, video, and audio did before them, the medicine, energy, and manufacturing industries will become information technology industries, and will thus advance at the rate of the Impact of Computing. The economic impact of this is staggering. Refer to the Future Timeline for Economics, particularly the 2014, 2024, and 2034 entries. Deflation has traditionally been a bad thing, but the Impact of Computing has introduced a second form of deflation. A good one.

It is true that from 2001 to 2009, the US economy has actually shrunk in size, if measured in oil, gold, or Euros. To that, I counter that every major economy in the world, including the US, has grown tremendously if measured in Gigabytes of RAM, TeraBytes of storage, or MIPS of processing power, all of which have fallen in price by about 40X during this period. One merely has to select any suitable product, such as a 42-inch plasma TV in the chart, to see how quickly purchasing power has risen. What took 500 hours of median wages to purchase in 2002 now takes just 40 hours of median wages in 2009. Pessimists counter that computing is too small a part of the economy for this to be a significant prosperity elevator. But let's see how much of the global economy is devoted to computing relative to oil (let alone gold).
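The plasma-TV example implies a concrete annualized rate of purchasing-power gain. A quick sketch, using only the article's 500-hour and 40-hour figures:

```python
# 500 hours of median wages in 2002 -> 40 hours in 2009, a 7-year span
hours_2002, hours_2009, years = 500, 40, 7

annual_improvement = (hours_2002 / hours_2009) ** (1 / years) - 1
print(f"{annual_improvement:.0%}")  # ~43% cheaper per year, measured in wage-hours
```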

Oil at $50/barrel amounts to about $1500 Billion per year out of global GDP. When oil rises, demand falls, and we have not seen oil demand sustain itself to the extent of elevating annual consumption to more than $2000 Billion per year.

Semiconductors are a $250 Billion industry and storage is a $200 Billion industry. Software, photonics, and biotechnology are deflationary in the same way as semiconductors and storage, and these three industries combined are another $500 Billion in revenue, but their rate of deflation is less clear, so let's take just half of this number ($250 Billion) as suitable for this calculation.

So $250B + $200B + $250B = $700 Billion that is already deflationary under the Impact of Computing. This is about 1.5% of world GDP, and is a little under half the size of global oil revenues.
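The share-of-GDP arithmetic, using the revenue figures above. The world-GDP denominator is the rough figure implied by the article's 'about 1.5%', and is an assumption on my part:

```python
semiconductors = 250e9  # annual revenue, from the article
storage = 200e9
half_of_software_photonics_biotech = 250e9  # half of the $500B, as above
world_gdp = 47e12  # approximate world GDP implied by the article's 1.5% figure

deflationary = semiconductors + storage + half_of_software_photonics_biotech
print(f"${deflationary / 1e9:.0f} Billion")            # $700 Billion
print(f"{deflationary / world_gdp:.1%} of world GDP")  # ~1.5%
```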

The impact is certainly not small, and since the growth rate of these sectors is higher than that of the broader economy, what about when it becomes 3% of world GDP? 5%? Will this force of good deflation not exert influence on every set of economic data? At the moment, it is all but impossible to get major economics bloggers to even acknowledge this growing force. But over time, it will be accepted as a limitless well of rising prosperity.

12% more dollars spent each year, and each dollar buys 59% more power each year. Combine the two and the impact is 78% more every year.

All of us remember the dot-com bubble, the crippling bust that eventually was a correction of 80% from the peak, and the subsequent moderated recovery. This was easy to notice as there were many publicly traded companies that could be tracked daily.

I believe that nanotechnology underwent a similar bubble, peaking in early 2005, and has been in a bust for the subsequent four years. Allow me to elaborate.

By 2004, major publications were talking about nanotech as if it was about to surge. Lux Capital was publishing a much-anticipated annual 'Nanotech Report'. There was even a company by the name of NanoSys that was preparing for an IPO in 2004. BusinessWeek even had an entire issue devoted to all things nanotech in February 2005. We were supposed to get excited.

But immediately after the BusinessWeek cover, everything seemed to go downhill. Nanosys did not conduct an IPO, nor did any other company. Lux Capital only published a much shorter report by 2006, and stopped altogether in 2007 and 2008. No other major publication devoted an entire issue to the topic of nanotechnology. Venture capital flowing to nanotech ventures dried up. Most importantly, people stopped talking about nanotechnology altogether. Not many people noticed this because they were too giddy about their home prices rising, but to me, this shriveling of nano-activity had uncanny parallels to prior technology slumps.

The recovery out of the four-year nanotech winter could not be happening at a better time. Nanotech is thus set to be one of the four sectors of technology (the others being solar energy, surface computing, and wireless data) that pull the global economy into its next expansion starting in late 2009.

The time has thus come for making specific predictions about the details of future economic advancement. I hereby present a speculative future timeline of economic events and milestones, which is a sibling article to Economic Growth is Exponential and Accelerating, v2.0.

2008-09 : A severe US recession and global slowdown still results in global PPP economic growth staying positive in calendar 2008 and 2009. Negative growth for world GDP, which has not happened since 1973, is not a serious possibility, even though the US and Europe experience GDP contraction in this period. The world GDP growth rate trendline resides at growth of 4.5% a year.

2010 : World GDP growth rebounds strongly to 5% a year. More than 3 billion people now live in emerging economies growing at over 6% a year. More than 80 countries, including China, have achieved a Human Development Index of 0.800 or higher, classifying them as developed countries.

2012 : Over 2 billion people have access to unlimited broadband Internet service at speeds greater than 1 mbps, a majority of them receiving it through their wireless phone/handheld device.

2013 : Many single-family homes in the US, particularly in California, are still priced below the levels they reached at the peak in 2006, as predicted in early 2006 on The Futurist. If one adjusts for cost of capital over this period, many California homes have corrected their valuations by as much as 50%.

2014 : The positive deflationary economic forces introduced by the Impact of Computing are now large and pervasive enough to generate mainstream attention. The semiconductor and storage industries combined exceed $800 Billion in size, up from $450 Billion in 2008. The typical US household is now spending $2500 a year on semiconductors, storage, and other items with rapidly deflating prices per fixed performance. Of course, the items purchased for $2500 in 2014 can be purchased for $1600 in 2015, $1000 in 2016, $600 in 2017, etc.
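The decaying price sequences in these timeline entries follow directly from the 59%/yr price-performance assumption: a fixed basket of capability costs 1/1.59 as much each year. A small sketch reproducing the sequence:

```python
def deflated_prices(start_price, start_year, n_years, annual_gain=0.59):
    """Cost of a fixed basket of performance under Impact-of-Computing deflation."""
    return {start_year + t: round(start_price / (1 + annual_gain) ** t)
            for t in range(n_years)}

# Close to the article's rounded $2500 -> $1600 -> $1000 -> $600 sequence:
print(deflated_prices(2500, 2014, 4))  # {2014: 2500, 2015: 1572, 2016: 989, 2017: 622}
```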

2015 : As predicted in early 2006 on The Futurist, a 4-door sedan with a 240 hp engine, yet costing only 5 cents/mile to operate (the equivalent of 60 mpg of gasoline), is widely available for $35,000 (which is within the middle-class price band by 2015). This is the result of combined advances in energy, lighter nanomaterials, and computerized systems.

2018 : Among new cars sold, gasoline-only vehicles are now a minority. Millions of vehicles are electrically charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $3000 a year in 2008. Some electrical vehicles cost as little as 1 cent/mile to operate.

2019 : The Dow Jones Industrial Average surpasses 25,000. The Nasdaq exceeds 5000, finally surpassing the record set 19 years prior in early 2000.

2020 : World GDP per capita surpasses $15,000 in 2008 dollars (up from $8000 in 2008). Over 100 of the world's nations have achieved a Human Development Index of 0.800 or higher, with the only major concentrations of poverty being in Africa and South Asia. The basic necessities of food, clothing, literacy, electricity, and shelter are available to over 90% of the human race.

Trade between India and the US touches $400 Billion a year, up from only $32 Billion in 2006.

2022 : Several million people worldwide are each earning over $50,000 a year through web-based activities. These activities include blogging, barter trading, video production, web-based retail ventures, and economic activities within virtual worlds. Some of these people are under the age of 16. Headlines will be made when a child known to be perpetually glued to his video game one day surprises his parents by disclosing that he has accumulated a legitimate fortune of more than $1 million.

2024 : The typical US household is now spending over $5000 a year on products and services that are affected by the Impact of Computing, where value received per dollar spent rises dramatically each year. These include electronic, biotechnology, software, and nanotechnology products. Even cars are sometimes 'upgraded' in a PC-like manner in order to receive better technology, long before they experience mechanical failure. Of course, the products and services purchased for this $5000 in 2024 can be obtained for $3200 in 2025, $2000 in 2026, $1300 in 2027, etc.

2025 : The printing of solid objects through 3-D printers is inexpensive enough for such printers to be common in upper-middle-class homes. This disrupts the economics of manufacturing, and revamps most manufacturing business models.

2027 : 90% of humans are now living in nations with a UN Human Development Index greater than 0.800 (the 2008 definition of a 'developed country', approximately that of the US in 1960). Many Asian nations have achieved per capita income parity with Europe. Only Africa contains a major concentration of poverty.

2030 : The United States still has the largest nominal GDP among the world's nations, in excess of $50 Trillion in 2030 dollars. China's economy is a close second to the US in size. No other country surpasses even half the size of either of the two twin giants.

The world GDP growth rate trendline has now surpassed 5% a year. As the per capita gap has reduced from what it was in 2000, the US now grows at 4% a year, while China grows at 6% a year.

10,000 billionaires now exist worldwide, causing the term to lose some exclusivity.

2032 : At least 2 TeraWatts of photovoltaic capacity is in operation worldwide, generating 8% of all energy consumed by society. Vast solar farms covering several square miles are in operation in North Africa, the Middle East, India, and Australia. These farms are visible from space.

2034 : The typical US household is now spending over $10,000 a year on products and services that are affected by the Impact of Computing. These include electronic, biotech, software, and nanotechnology products. Of course, the products and services purchased for this $10,000 in 2034 can be obtained for $6400 in 2035, $4000 in 2036, $2500 in 2037, etc.

2040 : Rapidly accelerating GDP growth is creating astonishing abundance that was unimaginable at the start of the 21st century. Inequality continues to be high, but this is balanced by the fact that many individual fortunes are created in extremely short times. The basic tools to produce wealth are available to at least 80% of all humans.

Tourism into space is affordable for upper middle class people, and is widely undertaken.

________________________________________________________

I believe that this timeline represents a median forecast for economic growth from many major sources, and will be perceived as too optimistic or too pessimistic by an equal number of readers. Let's see how closely reality tracks this timeline.

For 2009, the portfolio is quite simple. I believe that small-cap value and financial stocks are at historically compelling valuations, and have no choice but to rise. A few major technology stocks are also at attractive valuations.

So the portfolio will be :

This captures the following trends from previous articles on The Futurist :

I am of the belief that we will experience a Technological Singularity around 2050 or shortly thereafter. Many top futurists arrive at predicted dates between 2045 and 2075. The bulk of Singularity debate revolves not so much around 'if' or even 'when', but rather around 'what' the Singularity will appear like, and whether it will be positive or negative for humanity.

To be clear, some singularities have already happened. To non-human creatures, a technological singularity that overhauled their ecosystem already happened over the course of the 20th century. Domestic dogs and cats are immersed in a singularity where most of their surroundings surpass their comprehension. Even many humans have experienced a singularity - elderly people in poorer nations make no use of any of the major technologies of the last 20 years, except possibly the cellular phone. However, the Singularity that I am talking about has to be one that affects all humans and the entire global economy, rather than just humans who are marginal participants in the economy. By definition, the real Technological Singularity has to be a 'disruption in the fabric of humanity'.

In the period between 2008 and 2050, there are several milestones one can watch for in order to see if the path to a possible Singularity is still being followed. Each of these signifies a previously scarce resource becoming almost infinitely abundant (much like paper today, which was a rare and precious treasure centuries ago), or a dramatic expansion in human experience (such as the telephone, airplane, and Internet have been) to the extent that it can even be called a transhuman experience. The following is a random selection of milestones with their anticipated dates.

Each of these milestones, while not causing a Singularity by themselves, increase the probability of a true Technological Singularity, with the event horizon pulled in closer to that date. Or, the path taken to each of these milestones may give rise to new questions and metrics altogether. We must watch for each of these events, and update our predictions for the 'when' and 'what' of the Singularity accordingly.

Despite my general optimism, this particular machine does not pass my 'too good to be true' test, at least before 2020. A machine that could construct homes and commercial buildings at such a speed and cost would cause an unprecedented economic disruption across the world. There would be a steep but brief depression, as existing real estate loses 90% or more of its value, followed by a huge boom as home ownership becomes affordable to several times as many people as today. I don't think that we are on the brink of such a revolution.

For me to be convinced, I would have to see :

1) Articles on this device in mainstream publications like The Economist, BusinessWeek, MIT Technology Review, or Popular Mechanics.

2) The ability to at least print simple constructs like concrete perimeter walls or sidewalks at a rate and cost several times superior to current methods. Only then can more complex structures be on the horizon.

I will revisit this technology if either of these two conditions is solidly met.

Computing, once seamlessly synonymous with technological progress, has not grabbed headlines in recent memory. We have not had a 'killer app' in computing in the last few years. Maybe you can count Wi-Fi access for laptops in 2002-03 as the most recent one, but if that is not a sufficiently important innovation, we then have to go all the way back to the graphical World Wide Web browser in 1995. Before that, the killer app was Microsoft Office for Windows in 1990. Clearly, such shifts appear to occur at intervals of 5-8 years.

I can, without hesitation, nominate surface computing as the next great generational augmentation of the computing experience. This is because surface computing transforms human-computer interaction in a manner that is more suitable for the human body than the mouse/keyboard model is. In accordance with the Impact of Computing, rapid drops in the costs of both high-definition displays and tactile sensors are set to bring this experience to consumers by the end of this decade.

As far as early applications of surface computing, a fertile imagination can yield many prospects. For example, a restaurant table may feature a surface that displays the menu, enabling patrons to order simply by touching the picture of the item they choose. The information is sent to the kitchen, and this saves time and reduces the number of waiters needed by the restaurant (as waiters would only be needed to deliver the completed orders). Applications for classroom and video game settings also readily present themselves.

Watch for demonstrations of various surface computers at your local electronics store, and keep an eye on the price drops. After seeing a demonstration, do share at what price point you might purchase one. The next generation of computing beckons.

Most of these will be available to average consumers within the next 7-10 years, and will extend lifespans while dramatically lowering healthcare costs (mostly through enhanced capabilities of early detection and prevention, as well as shorter recovery times for patients). This is consistent with my expectation that bionanotechnology is quietly moving along established trendlines despite escaping the notice of most people. These technologies will also move us closer to Actuarial Escape Velocity, where the rate of lifespan increase exceeds that of real time.

Another angle that these technologies affect is the globalization of healthcare. We have previously noted the success of 'medical tourism' among US and European patients seeking massive discounts on expensive procedures. These technologies, given their potential to lower costs and recovery times, are even more suitable for medical offshoring than their predecessors, and thus could further enhance the competitive position of the countries that are quicker to adopt them. If the US is at the forefront of using the 'bloodstream bot' to unclog arteries, the US once again becomes more attractive than getting a traditional procedure done in India or Thailand. But if the lower-cost destinations also adopt these technologies faster than the heavily regulated US, then even more revenue migrates overseas, and the US healthcare sector would suffer further deserved blows and be under even greater pressure to conform to market forces. As technology once again acts as the great leveler, another spark of hope for reforming the dysfunctional US healthcare sector has emerged.

These technologies are near enough to availability that you may even consider showing this article to your doctor, or writing a letter to your HMO. Plant the seed in their minds...

I would normally not bother to rebut something like this, except that this particular essay is so stunningly wrong and annoyingly pessimistic, despite the seemingly meticulous research the author has conducted, that I am compelled to dissect how insulated groupthink can spiral into a zone where even the most extreme conclusions are accepted.

1) That rising oil prices do not cause a long-term downward adjustment in demand. Oil demand may be inelastic in the short-term, but in the long term, people will buy more efficient cars, carpool, ride bicycles, reduce discretionary trips, conduct more commerce online, etc. To assume otherwise is to ignore the most basic law of economics. This is before even accounting for the indirect benefits of declining oil demand such as a drop in traffic fatalities (which cost $2 million apiece to the economy), less wear and tear on roads and tires, less pollution, less real estate consumed by gas stations, less competition for parking spaces, etc.

2) That rising grain prices will not move consumption away from increasingly expensive meat towards affordable grains, fruits, and vegetables, thereby reducing grain and water demand. This, too, is economic illiteracy. If the price of beef triples while the price of rice and potatoes does not, consumption patterns shift.

3) That there will be very little technological innovation in alternative energy, automobile efficiency, batteries, or information technology from this point on. In fact, there is innovation in all of those areas, so we have multiple layers of protection against the doomsday scenario, as detailed by these articles :

4) That most economic growth is not in knowledge-based industries, which consume far less energy per dollar of output. The US economy today produces twice the economic output per unit of oil consumed as it did in 1975, with information technology rising as a portion of total economic output.

5) That a major economic downturn, featuring skyrocketing food prices for people in poorer countries, will somehow not translate to a lower birth rate that inhibits population growth and hence curbs demand, and that population projections will somehow not change.

6) That there will be no humans living beyond the Earth (whether in orbit or on the Moon) by 2040. The reason this point is relevant is because a society cannot advance in space travel without simultaneous advances in energy technology. I say that advances in photovoltaic efficiency make Lunar colonies closer to viability by that time.

Two of the leading thinkers in the field of life extension, Ray Kurzweil and Aubrey de Grey, believe that by the 2020s, human life expectancy will increase by more than one year every year (in 2002, Kurzweil predicted that this would happen as soon as 2013, but this is just another example of him consistently overestimating the rate of change). David Gobel, founder of the Methuselah Foundation, has termed this Actuarial Escape Velocity (AEV), comparing the rate of lifespan extension to the speed at which a spacecraft surpasses the gravitational pull of the planet it launches from, breaking free of the gravitational force. Once AEV is reached, death approaches the average person at a slower rate than technology-driven lifespan increases. It does not mean that all death suddenly stops, but it does mean that those who are not close to death have a possibility of indefinite lifespan. Thus, life expectancy is currently, as of 2007 data, rising at 20% of Actuarial Escape Velocity.

I remain unconvinced that such improvements will be reached as soon as Ray Kurzweil and Aubrey de Grey predict. I will be convinced after we clearly achieve 50% of AEV in developed countries, where six months are added to life expectancy every year. It is possible that the interval between 50% and 100% of AEV comprises less than a decade, but I'll re-evaluate my assumptions when 50% is achieved.

Serious research efforts are underway. The Methuselah Mouse Prize will award a large grant to researchers that can demonstrate substantial increases in the lifespan of a mouse (more from The Economist). Once credible gains can be demonstrated, funding for the research will increase by orders of magnitude.

The enormous market demand for lifespan extension technologies is not in dispute. There are currently 95,000 individuals in the world with a net worth greater than $30 million, including 1125 billionaires. Accelerating Economic Growth is already growing the ranks of the ultrawealthy at a scorching pace. If only some percentage of these individuals are willing to pay a large portion of their wealth in order to receive a decade or two more of healthy life, particularly since money can be earned back in the new lease on life, then such treatment already has a market opportunity in the hundreds of billions of dollars. The reduction in the economic costs of disease, funerals, etc. are an added bonus. Market demand, however, cannot always supersede the will of nature.

This is only the second article on life extension that I have written on The Futurist, out of 154 total articles written to date. While I certainly think aging will be slowed down to the extent that many of us will surpass the century mark, it will take much more for me to join the ranks of those who believe aging can be truly reversed. To track progress in this field, keep one eye on the rate of decline in cancer and heart disease deaths, and another eye on the Methuselah Mouse Prize. That such metrics are even advancing on a yearly basis is already remarkable, but monitoring anything more than these two measures, at this time, would be premature.

So let's find out what the group prediction is, with a poll. Keep in mind that most people are biased towards believing this date will fall within their own lifetimes (poll closed 7/1/2012) :

This is exciting on multiple levels, because it opens the door to not just mainstream electric vehicles in the next decade, but to a variety of wearable electronic devices, 20-30 hour laptop batteries, household robotics, and other applications that have not yet been imagined.

There is a small but growing body of evidence that the rate of technological change has moderated in this decade. Whether this is a temporary trough that merely precedes a return to the trendline, or whether the trendline itself was greatly overestimated, will not be decisively known for some years. In this article, I will attempt to examine some datapoints to determine whether we are at, or behind, where we would expect to be in 2008.

This brings us to the chart below from Ray Kurzweil (from Wikipedia) :

This chart appears prominently in many of Kurzweil's writings, and brilliantly conveys the concept of how each major consumer technology reached the mainstream (as defined by a 25% US household penetration rate) in successively shorter times. The horizontal axis represents the year in which the technology was invented.

This chart was produced some years ago, and therein lies the problem. If we were to update the chart to the present day, which technology would be the next addition after 'The Web'?

Many technologies can claim to be the ones to occupy the next position on the chart. iPods and other portable mp3 players, various Web 2.0 applications like social networking, and flat-panel TVs all reached the 25% level of mainstream adoption in under 6 years, in accordance with an extrapolation of the chart through 2008. However, it is debatable that any of these are 'revolutionary' technologies like the ones on the chart, rather than merely increments above incumbent predecessors. The iPod merely improved upon the capacity and flexibility of the Walkman, the plasma TV merely consumed less space than the tube TV, etc. The technologies on the chart are all infrastructures of some sort, and it is clear that after 'The Web', we are challenged to find a suitable candidate for the next entry.

Thus, we are either on the brink of some overdue technology emerging to reach 25% penetration of US households in 6 years or less, or the rapid diffusion of the Internet truly was a historical anomaly, and for the period from 2001 to 2008 we were merely correcting back to a trendline of much slower diffusion (where it takes 10-15 years for a technology to reach 25% penetration in the US). One of the two has to be true, at least for an affluent society like the US.

This brings us to the third and final dimension of possibility. This being the decade of globalization, with globalization itself being an expected natural progression of technological change, perhaps a US-centric chart itself was inappropriate to begin with. Landline telephones and television sets still do not have 25% penetration in countries like India, but mobile phones jumped from zero to 10% penetration in under 7 years. The oft-cited 'leapfrogging' of technologies that developing nations can benefit from is a crucial piece of technological diffusion, which would thus show a much smaller interval between 'telephones' and 'mobile phones' than in the US-based chart above. Perhaps '10% Worldwide Household Penetration' is a more suitable measure than '25% US Household Penetration', which would then possibly show that there is no lull in worldwide technological adoption at all.

I may try to put together this new worldwide chart. The horizontal axis would not change, but the placement of datapoints along the vertical axis would. Perhaps Kurzweil merely has to break out of US-centricity in order to strengthen his case and rebut most of his critics.

In scouring the startup universe for the companies and technologies that can reshape human society and create entirely new industries, one has to play the role of a prospective Venture Capitalist, yet not be constrained by the need for a financial exit 3-6 years hence.

Therefore, I have assembled a list of nine small companies, each with technologies that have the potential to create trillion-dollar economic disruptions by 2020, disruptions that most people have scarcely begun to imagine today. Note that the emphasis is on the technologies rather than the companies themselves, as a startup requires much more than a revolutionary technology in order to prosper. Management skills, team synergy, and execution efficiency are all equally important. I predict that out of this list of nine companies, perhaps one or two will become titans, while the others will be acquired by larger companies for modest sums, enabling the technology to reach the market through the acquiring company.

1) Nanosolar : Nanosolar produces low-cost solar cells that are manufactured by a process analogous to 'printing'. The company's technology was selected by Popular Mechanics as the 'Innovation of the Year' for 2007, and Nanosolar's solar cells are significantly ahead of the Solar Energy Cost Curve. The flexible, thin nature of Nanosolar's cells may enable them to be quickly incorporated onto the surfaces of many types of commercial buildings. Nanosolar's first shipments have already occurred, and if we see several large deployments in the near future, this might just be the company that finally makes solar energy a mass-adopted consumer technology. Nanosolar itself calls this the 'third wave' of solar power technology.

2) Tesla Motors : I wrote about Tesla Motors in late 2006. Tesla produces fully electric cars that can consume as little as 1 cent of electricity per mile. They are about to deliver the first few hundred units of the $98,000 Tesla Roadster to customers, and while the Roadster is not a car that can be marketed to average consumers, Tesla intends to release a 4-door $50,000 sedan named 'WhiteStar' in 2010, and a $30,000 sedan by 2013. The press coverage devoted to Tesla Motors has been impressive, but until the WhiteStar sedan successfully sells at least 10,000 units, Tesla will not have silenced critics who say the technology cannot be brought down to mass-market costs.
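The 'one cent of electricity per mile' figure follows from simple arithmetic once a vehicle efficiency and an electricity price are assumed. The numbers below are my own illustrative assumptions, not Tesla's specifications:

```python
# Back-of-envelope electric driving cost. Assumed values (not
# Tesla specs): roughly 0.18 kWh consumed per mile, and off-peak
# electricity purchased near $0.06 per kWh.
kwh_per_mile = 0.18          # assumed vehicle efficiency
dollars_per_kwh = 0.06       # assumed off-peak electricity price

cost_per_mile = kwh_per_mile * dollars_per_kwh
print(f"${cost_per_mile:.3f} per mile")   # about one cent
```

With a less favorable efficiency or a daytime electricity rate, the figure rises to 2-3 cents per mile, still a small fraction of the per-mile cost of gasoline.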

3) Aptera Motors : When I first wrote about Tesla Motors, it was before I had heard about Aptera Motors. While Tesla is aiming to produce a $30,000 sedan for 2013, Aptera already has an all-electric car due for late 2008 that is priced at just $27,000, while delivering the equivalent of between 200 and 330 mpg. The fact that the vehicle has just three wheels may reduce mainstream appeal to some degree, but the futuristic appearance of the car will attract others. Aptera Motors is a top candidate for winning the Automotive X-Prize in 2010.

The simultaneous use of Nanosolar's solar panels with the all-electric cars from Tesla and Aptera may enable automotive driving to be powered by solar-generated electricity for the average single-family household. The combination of these two technologies would be the 'killer app' for getting off of oil and onto fully renewable energy for cars.
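As a rough sketch of what that combination would require (all figures below are my own assumptions, not Nanosolar or Tesla data), a typical household's annual driving could be covered by a surprisingly small rooftop array:

```python
# Rough sizing of a rooftop array to cover a household's driving.
# Assumed figures: 12,000 miles/year of driving, 0.25 kWh per mile
# (including charging losses), and ~1,400 kWh of annual output per
# kW of installed panels (a typical US insolation figure).
miles_per_year = 12_000
kwh_per_mile = 0.25
kwh_per_kw_year = 1_400

annual_kwh = miles_per_year * kwh_per_mile     # energy needed per year
array_kw = annual_kwh / kwh_per_kw_year        # panel capacity required
print(f"{annual_kwh:.0f} kWh/year -> {array_kw:.1f} kW array")
```

Roughly a 2 kW array, well within the size of an ordinary residential rooftop installation, which is why the pairing of cheap thin-film solar with electric cars is so potent.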

4) 23andMe : This company gets some press due to the fact that co-founder Anne Wojcicki is married to Sergey Brin, even as Google has poured $3.9M into 23andMe. Aside from this, what 23andMe offers is an individual's personal genome for just $1000. What a personal genome provides is a profile of which health conditions the customer is more or less susceptible to, and thus enables the customer to provide this information to his physician, and make the preventive lifestyle adjustments well in advance. Proactive consumers will be able to extend their lifespans by systematically reducing their risks of ailments they are genetically predisposed to. As the service is a function of computational power, the price of a personal genome will, of course, drop, and might become an integral part of the average person's medical records, as well as an expense that insurance covers.

5) Desktop Factory : In 2008, Desktop Factory will begin to sell a $5000 device that functions as a 3-D printer, printing solid objects one layer at a time. A user can scan almost any object (including a hand, foot, or head) and reproduce a miniature model of it (up to 5 X 5 X 5 inches). The material used by the 3-D printer costs about $1 per cubic inch.

The $5000 printer is a successor to similar $100,000 devices used in mechanical engineering and manufacturing firms. Due to the Impact of Computing, consumer-targeted devices costing under $1000 will be available no later than 2014. I envision an ecosystem where people invent their own objects (statuettes, toys, tools, etc.) and share the scanned templates of these objects on social networking sites like MySpace and Facebook. People can thus 'share' actual objects over the Internet, through printing a downloaded template. The cost of the printing material will drop over time as well. A lot of fun is to be had, and expect an impressive array of brilliant ideas to come from people below the age of 16.
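The implied rate of price decline is easy to back out. If a $5,000 printer in 2008 falls to under $1,000 by 2014, as the Impact of Computing suggests, the required annual decline is (a quick sketch, treating $1,000 in 2014 as the endpoint):

```python
# Implied annual price decline for a $5,000 printer (2008) to reach
# $1,000 by 2014 -- an assumed endpoint, per the Impact of Computing.
import math

p_start, p_end, years = 5000, 1000, 6
annual_decline = 1 - (p_end / p_start) ** (1 / years)
print(f"{annual_decline:.1%} per year")   # roughly 23.5%
```

A 20-25% annual price decline is in line with what consumer electronics categories like DVD players and flat-panel TVs have historically delivered, so the 2014 target is not an aggressive one.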

6) Zazzle : Welcome to the age of the instapreneur. Zazzle enables anyone to design their own consumer commodities like T-shirts, mugs, calendars, bumper stickers, etc. on demand. If you have an idea, you can produce it on Zazzle with no start-up costs, and no inventory risks. You profit even from the very first unit you sell, with no worries about breakeven thresholds. You can produce an infinite number of products, limited only by your imagination. At this point, those of you reading this are probably in the midst of an avalanche of ideas for products you would like to produce.

While the bulk of Zazzle users today are merely vanity users who sell under ten units of their creations, this new paradigm of low-cost customization will inevitably creep up to major industrial supply chains. Even more interesting is the prospect of #5 on this list, Desktop Factory, combining with Zazzle's model into an amazing transformation of the very economics of manufacturing and mass production.

9) Ugobe : Ugobe sells a robotic dinosaur toy known as the Pleo. A mere toy, especially a $350 toy, would not normally be on a list of technologies that promise to crease the fabric of human society. However, a closer look at the Pleo reveals many impressive increments in the march to make inexpensive robots more lifelike. The skin of the Pleo covers the joints, the Pleo has more advanced 'learning' abilities than $2500 robots from a few years ago, and the Pleo even cries when tortured, to the extent that it is difficult to watch.

The reason Ugobe is on this list is that I am curious to see the next product on their roadmap, so that I can gauge how quickly the technology is advancing. The next logical step would be an artificial mammal of some sort, with greater intelligence and realistic fur. The successful creation of this generation of robot would provide the datapoints to enable us to project the approximate arrival of future humanoid robots, for better or for worse. Another company may leapfrog Ugobe in the meantime, but they are currently at the forefront of the race to create low-priced robotic toys.

This concludes the list of nine companies that each could greatly alter our lives within the next several years. Of these nine, at least three, Nanosolar, Tesla Motors, and 23andMe, have Google or Google's founders as investors. The next 24 months have important milestones for each of these companies to cross (by which time I might have a new list of new companies). For those that clear their respective near-term bars, there might just be a chance of attaining the dizzying heights that Google, Microsoft, or Intel have reached.

On January 23, 2007, I created an investment portfolio to be frozen at that time, and evaluated on December 31, 2007 against the benchmark of the S&P500 index. The portfolio incorporated principles, economic trends, and technologies discussed in other articles here on The Futurist. Dividends were reinvested, and so the price paid reflects dividend-adjusted cost-basis. Yahoo and Google Finance do tend to miss recording some dividends, so one must go to a more reliable site like Morningstar to account for the exact dividends.

So how did the portfolio do? I achieved a return of 13.3%, vs. just 4.3% for the S&P500, from January 23 to December 31. Most fund managers are unable to beat the S&P500 index despite the advanced tools at their disposal. The fraction of those that can beat the index by a margin of 9.0 percentage points is even more exclusive, putting this portfolio in the top 10% of all mutual fund results for this period.

As always, weighting matters just as much as stock-picking. The first two securities, amounting to 50% of my portfolio, were a total disaster. In fact, when I first created the portfolio, I listed FXI as a security that was strongly considered but left out. FXI returned an eye-popping 83% over the same period, so if I had included FXI instead of ICF, the portfolio's total return would have exceeded 25%. But it was not included, so 'what ifs' do not count.

The India Investment Fund (IIF) was a star, more than compensating for the failure of the first two securities. But the real home runs came from the video game stocks. Three of the four outperformed the S&P500, and two of those, Activision and GameStop, surged into the stratosphere. My selection and detailed analysis of this sector way back on April 17, 2006 yielded a spectacular payoff. As a quartet, these 4 gaming stocks returned a combined 49% over this period.

The Year in Nanotechnology : Stanford University research into nanowires that dramatically increase battery capacity is the most promising breakthrough of 2007, in any discipline. Think 30-hour laptop batteries.

Most of the innovations in the articles above are in the laboratory phase, which means that about half will never progress enough to make it to market, and those that do will take 5 to 15 years to directly affect the lives of average people (remember that the laboratory-to-market transition period itself continues to shorten in most fields). But each one of these breakthroughs has world-changing potential, and the fact that so many fields are advancing simultaneously guarantees a massive new wave of improvement to human lives.

This scorching pace of innovation is entirely predictable, however. To internalize the true rate of technological progress, one merely needs to appreciate :

We are fortunate to live in an age when a single calendar year will invariably yield multiple technological breakthroughs, the details of which are easily accessible to laypeople. In the 18th century, entire decades would pass without any observable technological improvements, and people knew that their children would experience a lifestyle identical to their own. Today, we know with certainty that our lives in 2008 will have slight but distinct and numerous improvements in technological usage over 2007, just as 2007 was an improvement over 2006.

I am now going to present my 2008 portfolio, which is to be tracked over the remaining 13+ months between now and the end of 2008, again in relation to the S&P500 index. The hypothetical portfolio of $100,000 will be invested in exchange-traded securities and mutual funds that reflect what I believe to be an optimal portfolio construction for 2008. We will, at the end of the period, see how the portfolio tracks the broader market. Dividends will be re-invested.

So the portfolio is :

This is a simpler portfolio, with less emphasis on gaming, and more on fundamental value-based principles. The selections represent general principles and specific predictions outlined in the previously written articles :

The Lifeboat Foundation has a special report detailing their view of the top ten transhumanist technologies that have some probability of becoming available within 25 to 30 years. Transhumanism is a movement devoted to using technologies to transcend biology and enhance human capabilities.

I am going to list out each of the ten technologies described in the report, provide my own assessment of high, medium, or low probability of mass-market availability by a given time horizon, and link to prior articles written on The Futurist about the subject.

10. Cryonics : 2025 - Low, 2050 - Moderate

I can see the value in someone who is severely maimed or crippled opting to freeze themselves until better technologies become available for full restoration. But outside of that, the problem with cryonics is that very few young people will opt to risk missing their present lives to go into freezing, and elderly people can only benefit after revival when or if age-reversal technologies become available. Since going into cryonic freezing requires someone else to decide when to revive you, and any cryonic 'will' may not anticipate numerous future variables that could complicate execution of your instructions, this is a bit too risky, even if it were possible.

The good news here is that gene sequencing techniques continue to become faster due to the computers used in the process themselves benefiting from Moore's Law. In the late 1980s, it was thought that the human genome would take decades to sequence. It ended up taking only years by the late 1990s, and today, would take only months. Soon, it will be cost-effective for every middle-class person to get their own personal genome sequenced, and get customized medicines made just for them.

While this is a staple premise of most science fiction, I do not think that space colonization may ever take the form that is popularly imagined. Technology #2 on this list, mind uploading, and technology #5, self-replicating robots, will probably appear sooner than any capability to build cities on Mars. Thus, a large spaceship and human crew becomes far less efficient than entire human minds loaded into tiny or even microscopic robots that can self-replicate. A human body may never visit another star system, but copies of human minds could very well do so.

Artificial limbs, ears, and organs are already available, and continue to improve. Artificial and enhanced muscle, skin, and eyes are not far behind.

5. Autonomous Self-Replicating Robots : 2030 - Moderate

This is a technology that is frightening, due to the ease with which humans could be quickly driven to extinction through a malfunction that replicates rogue robots. Assuming a disaster does not occur, this is the most practical means of space exploration and colonization, particularly if the robots contain uploads of human minds, as per #2.

From the Great Wall of China in ancient times to Dubai's Palm Islands today, man-made structures are already visible from space. But to achieve transhumanism, the same must be done in space. Eventually, elevators extending hundreds of miles into space, space stations much larger than the current ISS (240 feet), and vast orbital solar reflectors will be built. But, as stated in item #7, I don't think true megascale projects (over 1000 km in width) will happen before other transhumanist technologies render the need for them obsolete.

2. Mind Uploading : 2050 - Moderate

This is what I believe to be the most important technology on this list. Today, when a person's hardware dies, their software in the form of their thoughts, memories, and humor, necessarily must also die. This is impractical in a world where software files in the form of video, music, spreadsheets, documents, etc. can be copied to an indefinite number of hardware objects.

If human thoughts can reside on a substrate other than human brain matter, then the 'files' can be backed up. That is all there is to it.

1. Artificial General Intelligence : 2050 - Moderate

This is too vast of a subject to discuss here. Some evidence of progress appears in unexpected places, such as when, in 1997, IBM's Deep Blue defeated Garry Kasparov in a chess match. Ray Kurzweil believes that an artificial intelligence will pass the Turing Test (a bellwether test of AI) by 2029. We will have to wait and see, but expect the unexpected, when you least expect it.

A robotic insect, similar in size and weight to a wasp or hornet, has successfully taken flight at Harvard University (article and photo at MIT Technology Review). This is an amazing breakthrough, because just a couple of years ago, such robots were pigeon-sized, and thus far less useful for detailed military and police surveillance.

At the moment, the flight path is still only vertical, and the power source is external. Further advances in the carbon polymer materials used in this robot will reduce weight further, enabling greater flight capabilities. Additional robotics advances will reduce size down to housefly or even mosquito dimensions. Technological improvements in batteries will provide on-board power with enough flight time to be useful. All of this will take 5-8 years to accomplish. After that, it may take another 3 years to achieve the capabilities for mass-production. Even then, the price may be greater than $10,000 per unit.

Needless to say, by 2017-2020, this may be a very important military technology, where thousands of such insects are released across a country or region known to contain terrorists. They could land on branches, light fixtures, and window panes, sending information to one another as well as to military intelligence. Further into the future, if these are ever available for private use, then that could become quite complicated.

If we were to make a list of subjects ranked by the gap between the civilizational importance of the topic and the lack of serious literature devoted to it, historical acceleration of economic growth would be very near the top of the list. I wrote an article on the subject way back on January 29, 2006 (version 1.0), but now it is time for a much more substantial treatise.

In the modern age, we take for granted that the US will grow at 3.5% a year, and that the world economy grows at 4% to 4.5% a year. However, these are numbers that were unheard of in the 19th century, during which World GDP grew at under 2% a year. Prior to the 19th century, annual World GDP growth was so little that changes from one generation to the next were virtually zero. Brad Delong has some data on World GDP from prehistoric times until 2000 AD.

If I put historical per-capita GDP through 2000 in a logarithmic timescale, we see the following :

The theme of acceleration readily presents itself here, and even disruptive events like the Great Depression still do not cause more than a temporary deviation from the long-term trendline. A different representation of the data would be to notice the shrinking intervals that it takes for per-capita World GDP to double.

10000 BC to 1500 : 11500 years without doubling

1500 to 1830 : 330 years

1830 to 1880 : 50 years

1880 to 1915 : 35 years

1915 to 1951 : 36 years (Great Depression and World Wars in this period)

1951 to 1975 : 24 years (recovery to trendline)

1975 to 2003 : 28 years

2003 to 2024-2027? : 21-24 years (on current trends)

This not only further reveals acceleration, but also indicates that massively disruptive world events still result in merely temporary deviations from the long-term trendline.
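The shrinking intervals follow directly from rising growth rates: output growing at a steady rate g doubles every ln(2)/ln(1+g) years. A quick sketch, using growth rates consistent with the figures cited in this article:

```python
# Doubling time implied by a steady annual growth rate g:
# solve (1+g)^t = 2 for t, giving t = ln(2) / ln(1+g).
import math

def doubling_time(growth_rate):
    return math.log(2) / math.log(1 + growth_rate)

# 2% growth (19th century) vs ~2.9% (late 20th) vs 3.5% (current trend):
for g in (0.02, 0.029, 0.035):
    print(f"{g:.1%} growth -> doubling every {doubling_time(g):.0f} years")
```

At 2% a year the doubling time is 35 years, matching the 1880-1915 interval above, while 3.5% a year gives roughly 20 years, matching the projected 2003 to 2024-2027 interval.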

Additionally, we can take the more granular IMF data of recent World GDP growth, and plot a trendline on it. Both nominal and PPP growth rates are available, and are diverging due to the increasing size and growth rates of India and China. Unfortunately, the IMF data only goes back to 1980, and 28 years are not enough to plot an ideal trendline, but nonetheless, the upward slope is distinct, and recessions (which still do not push World GDP growth into negative territory) are invariably followed by steep recoveries.

It is also important to note that the standard deviation of the IMF data for World GDP growth rates is about 1% a year, for both the nominal and PPP series (1.07% and 1.14% respectively, to be exact). The rules of standard deviations dictate that 68% of the time, a data point will be within one standard deviation of the mean, 95% will be within two standard deviations, and 99.7% will be within three.

Thus, in a simple example, if the World GDP growth trendline is currently at 4% a year, there is a 68% chance that the next year will be between 3% and 5%, and there is only a 0.3% chance that the next year will be below 1% or above 7% growth. This means that a worldwide recession with a year of negative growth is extremely improbable, just as improbable as a year with stupendous 8% growth. There is not a single year in the 1980-2007 IMF data with negative GDP growth, and virtually none under 1% growth.
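The 0.3% figure can be checked directly from the normal distribution. This is a sketch under the simplifying assumption that annual growth is normally distributed with the stated mean of 4% and standard deviation of 1%:

```python
# Two-tailed probability of World GDP growth landing more than three
# standard deviations from the mean, assuming growth ~ Normal(4%, 1%).
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """P(X <= x) for a normal distribution, via the error function."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

p_below_1 = normal_cdf(1.0, 4.0, 1.0)          # P(growth < 1%)
p_above_7 = 1 - normal_cdf(7.0, 4.0, 1.0)      # P(growth > 7%)
print(f"{p_below_1 + p_above_7:.2%}")          # about 0.27%
```

The two tails together come to about 0.27%, which rounds to the 0.3% cited above.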

Now, what happens if we project these trendlines through the 21st century? The dotted red line represents the median trend assuming that nominal and PPP growth rates converge at some intermediate level.

I can apply this trendline for World GDP growth, make assumptions of total world population to arrive at per capita World GDP growth, and add it back to the first graph. The assumed growth rates, by decade, in per capita income are :

2007-2020 : 3.5%

2020-2030 : 3.5-4.0%

2030-2040 : 4.0-5.0%

2040-2050 : 5.0-6.0%

This leads to estimates for per-capita GDP at PPP, in 2007 dollars, to be :

2007 : $10,000

2020 : $15,155

2030 : $22,400

2040 : $32,600 - $36,000

2050 : $53,200 - $64,500

Which, when plotted, provides the following :

Or, when a longer view is taken, in terms of logarithmic periods going back from the year 2050, we see :
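As a sanity check on the table above, compounding the assumed decade-by-decade growth rates (using the upper end of each range, and starting from the table's 2020 figure of $15,155) lands within a couple of percent of the table's 2030-2050 values:

```python
# Compound the assumed per-decade growth rates from the 2020 figure.
# Rates are the upper ends of the ranges stated above: 4% for the
# 2020s, 5% for the 2030s, 6% for the 2040s.
value = 15_155
for rate, label in ((0.04, "2030"), (0.05, "2040"), (0.06, "2050")):
    value *= (1 + rate) ** 10
    print(f"{label}: ${value:,.0f}")
```

The output runs roughly $22,400, $36,500, and $65,400, closely tracking the table's upper-band figures; the small differences come from rounding at each decade boundary.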

This article is the inaugural entry into a new category here at The Futurist titled "Core Articles". These are the articles which are designed to form the cornerstone of a comprehensive understanding of the future, and are suggested reading for anyone interested in the subject. Additional articles will be upgraded to "Core" status as augmentations to them accumulate.

On September 28, 2006, I made the case that telescopic power is indeed an accelerating technology, set to improve at an estimated rate of 26% a year for the next 30 years. I believe that increasingly more powerful telescopes will ensure that we discover the first genuinely Earth-like planet in another star system by 2011, and that by 2025, we will have discovered thousands of such planets.
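It is worth pausing on what a 26% annual improvement sustained for 30 years actually compounds to (a quick sketch of the arithmetic, taking the 26% estimate at face value):

```python
# Cumulative gain from 26%/year improvement compounded over 30 years.
factor = 1.26 ** 30
print(f"{factor:,.0f}x")   # roughly a thousand-fold gain
```

A thousand-fold improvement in telescopic power over three decades is what makes the discovery of thousands of Earth-like planets by 2025 a plausible extrapolation rather than a wild guess.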

The mirror is a pool of salt-based liquids that freeze only at very low temperatures, coated with a silver film. While practical usage is at least 20 years away, the details reveal a technology that is brilliantly simple, yet tantalizingly capable of addressing almost all of the problems facing the construction of giant telescopes. Glass mirrors are exceedingly difficult to scale to larger sizes, and even the most minor defect can render a mirror useless. Reflective liquid, by contrast, can be scaled up almost indefinitely, limited only by the perimeter of the enclosure it is placed in. External blows that would crack or scratch a glass mirror would have no effect on a liquid that could quickly return to the original shape.

I don't expect updates on this technology in the near future, but the next logical step would be for a smaller telescope to be demonstrated to use this technology. If that succeeds, the ultimate goal would be, by 2030, a massive telescope more than 200 meters in diameter placed on the Moon, where the sky is free of atmospheric distortions, and the ground is free of tiny seismic shaking. This would enable us to observe Earth-like planets at a distance of up to 100 light years, as well as observe individual stars near the center of the Milky Way galaxy (30,000 light years away).

The World Wide Web, after just 12 years in mainstream use, has become an infrastructure accessed by hundreds of millions of people every day, and the medium through which trillions of dollars a year are transacted. In this short period, the Web has already been through a boom, a crippling bust, and a renewal to full grandeur in the modern era of 'Web 2.0'.

But imagine, if you will, a Web in which web sites are not just readable in human languages, but in which information is understandable by software, to the extent that computers themselves can perform the task of sharing and combining information. In other words, a Web that machines can interpret more readily, in order to make it more useful for humans. This vision for a future Internet is known as the Semantic Web.

Some are already referring to the Semantic Web as 'Web 3.0'. This type of labeling is a reliable litmus test of a technology falling into the clutches of emotional hype, and thus caution is warranted in assessing its true impact. I believe that the true impact of the Semantic Web will not manifest itself until 2012 or later. Nonetheless, the Semantic Web could do for scientific research what email did for postal correspondence and what MapQuest did for finding directions - eliminate almost all of the time wasted in the exchange of information.

BusinessWeek has a slideshow revealing new electronic devices that a consumer could use to enhance (or complicate) certain aspects of daily life. Among these is the very promising Sunlight Direct System, which I discussed back on September 5, 2006. Others, such as the Lawnbott ($2500), cost far more than the low-tech solution of hiring people to mow your lawn for the entire expected life of the device, ensuring that mass-market adoption is at least 4-5 years away.

All of this is a very strong and predictable manifestation of The Impact of Computing, which mandates that entirely new categories of consumer electronics appear at regular intervals, and that they subsequently become cheaper yet more powerful at a consistent rate each year. Let us observe each of these functional categories, and the rate of price declines/feature enhancements that they experience.

Many streams of accelerating technological change, from energy to The Impact of Computing, will find themselves intersecting in one of the largest consumer product industries of all. Over 70 million automobiles were produced worldwide in 2006, with rapid market penetration underway in India and China. Indisputably, cars greatly affect the lives of consumers, the economies of nations, and the market forces of technological change.

I thus present a speculative timeline of technological and economic events that will happen for automobiles. This has numerous points of intersection with the Future Timeline for Energy.

2007 : The Tesla Roadster emerges not only to bring Silicon Valley change agents together to sow the seeds of disruption in the automotive industry, but also to immediately transform the image of electric vehicles from 'punishment cars' into status symbols of dramatic sex appeal. Even at the price of $92,000, demand outstrips supply by an impressive margin.

2009 : The Automotive X-Prize of $25 Million (or more) is successfully claimed by a car designed to meet the 100-mpg, mass-producible goal set by the X Prize Foundation. Numerous companies spring forth out of prototypes tested in the contest.

2011 : Two or more iPod ports, 10-inch flat-screen displays for back seat passengers, parking space detection technology, and embedded Wi-Fi adapters that can wirelessly transfer files to the vehicle's hard drive from up to 500 feet away are standard features for many new cars in the $40,000+ price tier.

2012 : Over 100 million new automobiles are produced in 2012, up from 70 million in 2006. All major auto manufacturers are racing to incorporate new nanomaterials that are lighter than aluminium yet stronger and more malleable than steel. The average weight of cars has dropped by about 5% from what it was for the equivalent style in 2007.

2013 : Tesla Motors releases a fully electric 4-door sedan that is available for under $40,000, which is only 33% more than the $30,000 that the typical fully-loaded gasoline-only V6 Accord or Camry sells for in 2013.

2014 : Self-driving cars are now available in the luxury tier (priced $100,000 or higher). A user simply enters the destination, and the car charts out a path (similar to Google Maps) and proceeds along it, in compliance with traffic laws. However, a software malfunction results in a major traffic pile-up that garners national media attention for a week. Subsequently, self-driving technologies are shunned despite their superior statistical performance relative to human drivers.

2016 : An odd change has occurred in the economics of car depreciation. Between 1980 and 2007, annual car depreciation rates decreased due to higher quality materials and better engine design, reaching as little as 12-16% a year for the first 5 years of ownership. Technology pushed back the forces of depreciation.

However, by 2016, 40% of a car's initial purchase price consists of electronics (up from under 20% in 2007 and just 5% in 1985), which depreciate at a rate of 25-40% a year. The entire value of the car is pulled along by the 40% of it that undergoes rapid price declines, and thus total car depreciation is now occurring at a faster rate of up to 20% a year for the first 5 years. This is a natural progression of The Impact of Computing, and wealthier consumers are increasingly buying new cars as 'upgrades' to replace models with obsolete technologies after 5-7 years, much as they would upgrade a game console, rather than waiting until mechanical failure occurs in their current car. Consumers also conduct their own upgrades of certain easily-replaced components, much as they would upgrade the memory or hard drive of a PC. Technology has thus accelerated the forces of depreciation.
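The blended depreciation claim can be sanity-checked with a toy model. The 40% electronics share and the two depreciation ranges are from the scenario above; the $40,000 sticker price is an illustrative assumption:

```python
# Toy model of blended depreciation: the car as two components
# losing value at different rates. Shares and rates are mid-range
# figures from the scenario; the $40,000 price is an assumption.
PRICE = 40_000
ELECTRONICS_SHARE = 0.40   # electronics as a share of purchase price (2016)
ELEC_RATE = 0.30           # electronics depreciate at ~25-40%/year
MECH_RATE = 0.14           # the rest depreciates at ~12-16%/year

def value_after(years: int) -> float:
    """Remaining value after a given number of years of ownership."""
    elec = PRICE * ELECTRONICS_SHARE * (1 - ELEC_RATE) ** years
    mech = PRICE * (1 - ELECTRONICS_SHARE) * (1 - MECH_RATE) ** years
    return elec + mech

# Implied average annual depreciation over the first 5 years:
avg_rate = 1 - (value_after(5) / PRICE) ** (1 / 5)   # just under 20%/year
```

The blended rate lands just under 20% a year, consistent with the "up to 20%" figure in the scenario.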

2018 : Among new cars sold, gasoline-only vehicles are now a minority. Millions of electricity-only vehicles are charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $2000/year in 2007. Even when sunlight is obscured and the grid is used, some electric vehicles cost as little as 1 cent/mile to operate.

2020 : New safety technologies that began to appear in mainstream cars around 2012, such as night vision, lane departure correction, and collision-avoiding cruise control, have replaced the existing fleet of older cars over the decade, and now US annual traffic fatalities have dropped to 25,000 in 2020 from 43,000 in 2005. Given the larger US population in 2020 (about 350 Million), this is a reduction in traffic deaths by half on a per-capita basis.
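The per-capita arithmetic behind that "reduction by half" can be checked directly; the 2005 US population figure (roughly 296 million) is an added assumption not stated above:

```python
# Per-capita traffic fatality rates implied by the figures above.
# The 2005 US population (~296 million) is an added assumption.
deaths_2005, pop_2005 = 43_000, 296e6
deaths_2020, pop_2020 = 25_000, 350e6

rate_2005 = deaths_2005 / pop_2005 * 1e6   # ~145 deaths per million people
rate_2020 = deaths_2020 / pop_2020 * 1e6   # ~71 deaths per million people
ratio = rate_2020 / rate_2005              # ~0.49, i.e. roughly half
```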

2024 : Self-driving cars have overcome the stigma of a decade prior, and are now widely used. But they still have not fully displaced manual driving, due to user preferences in this regard. Certain highways permit only self-driven cars, with common speed limits of 100 mph or more.

2025-30 : Electricity (indeed, clean electricity) now fuels nearly all passenger car miles driven in the US. There is no longer any significant fuel consumption cost associated with driving a car, although battery maintenance is a new aspect of car ownership. Many car bodies now include solar energy absorbent materials that charge a parked car during periods of sunlight. Leaving such cars out in the sun has supplanted the practice of parking in the shade or in covered parking.

Pervasive use of advanced nanomaterials has ensured that the average car weighs only 60% as much as its 2007 counterpart, yet is over twice as resistant to dents.

______________________________________________________________

I believe that this timeline represents the combination of median forecasts across all technological and economic trends that influence cars, and it will be perceived as too optimistic or too pessimistic by an equal number of readers. Let's see how closely reality matches this timeline.

I stumbled upon something while reading the Asian Development Bank's report on the world economy. No big surprises here, but one tiny chart stood out. The column chart of worldwide and Asian semiconductor sales from 2001 to 2006 indicates that while Asia accounted for just one third of semiconductor sales in 2001, it comprises half of them today.

BusinessWeek has an article and slideshow on the rapidly diversifying applications of advanced VR technology.

This is a subject that has been discussed heavily here on The Futurist, through articles like The Next Big Thing in Entertainment, Parts I, II, and III, as well as Virtual Touch Brings VR Closer. The coverage of this topic by BusinessWeek is a necessary and tantalizing step towards the creation of mass-market products and technologies that will enhance productivity, defense, healthcare, and entertainment.

Technologically, these applications and systems fall squarely within The Impact of Computing, with very few components that are not improving exponentially. Thus, cost-performance improvements of 30-58% a year are guaranteed, and will result in stunningly compelling experiences as soon as 2012.
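To put that 30-58% range in perspective, here is what those rates compound to over five years (the five-year horizon is illustrative):

```python
# Five-year cost-performance multiples implied by the quoted range.
def compound(rate: float, years: int) -> float:
    """Total improvement multiple after `years` at a steady annual rate."""
    return (1 + rate) ** years

low = compound(0.30, 5)    # ~3.7x at the low end (30%/year)
high = compound(0.58, 5)   # ~9.8x at the high end (58%/year)
```

Even the low end of the range implies hardware nearly four times as capable per dollar by 2012.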

To the extent that many people who seek reading material about futurism are primarily driven by the eagerness to experience 'new types of fun', this area, more than any other discussed here, will deliver the majority of new fun that consumers can experience in coming years.

2012 : Cellulosic ethanol technology becomes cost-effective and scalable. Biomass-derived fueling stations finally begin to find their way into most US population centers, but still displace only 15-20% of US gasoline consumption. New oil extraction technologies continue to exert downward pressure on oil prices, resulting in a continual tussle between biomass fuel and oil-derived fuel for cost competitiveness. All of this is bad news for oil-producing dictatorships.

2013 : Tesla Motors releases a fully electric 4-door sedan that is available for just $40,000, which is only 33% more than the $30,000 that the typical fully-loaded gasoline-only V6 Accord or Camry sells for in 2013.

2014 : Solar panels have become inexpensive enough for a typical house in California or Arizona to financially break even in under 5 years after installation, even after accounting for the cost of capital. Over 3 million US single-family homes have solar panels on their rooftops by now, and many of these homes are able to charge up their plug-in hybrids or fully electric vehicles entirely free of cost.
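A break-even claim like this can be sketched with a simple discounted-payback calculation. The system cost, annual electricity savings, and discount rate below are all illustrative assumptions, not figures from the timeline:

```python
# Discounted payback for a rooftop solar installation.
# All three inputs are illustrative assumptions.
SYSTEM_COST = 9_500       # net installed cost after incentives, $
ANNUAL_SAVINGS = 2_800    # avoided electricity purchases per year, $
DISCOUNT_RATE = 0.06      # cost of capital

def payback_years() -> int:
    """First year in which cumulative discounted savings cover the cost."""
    cumulative, year = 0.0, 0
    while cumulative < SYSTEM_COST:
        year += 1
        cumulative += ANNUAL_SAVINGS / (1 + DISCOUNT_RATE) ** year
    return year
```

Under these assumptions the install pays for itself in its fourth year, even after the cost of capital is counted, which is the shape of the scenario envisioned for 2014.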

2015 : As predicted in early 2006 on The Futurist, a 4-door sedan with a 240 hp engine, yet costing only 5 cents/mile to operate (the equivalent of 60 mpg of gasoline), is widely available for $35,000 (which is within the middle-class price band by 2015 under moderate assumptions for economic growth). This is the result of not only energy innovation, but also lighter, stronger nanomaterials being used in some body components, as well as computerized systems that make energy usage more efficient within the car.
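A side note on the mpg equivalence: 5 cents/mile corresponds to 60 mpg only at a particular gasoline price, which the entry leaves implicit. A one-line conversion, assuming $3.00/gallon:

```python
# Converting an operating cost in $/mile into an equivalent mpg.
GAS_PRICE = 3.00  # $/gallon, an illustrative assumption

def equivalent_mpg(cost_per_mile: float) -> float:
    """mpg a gasoline car would need to match this cost per mile."""
    return GAS_PRICE / cost_per_mile

mpg = equivalent_mpg(0.05)   # ~60 mpg at 5 cents/mile
```

At higher gas prices the same 5 cents/mile would be equivalent to an even higher mpg figure.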

2018 : Among new cars sold, gasoline-only vehicles are now a minority. Millions of electricity-only vehicles are charged through solar panels on a daily basis, relieving those consumers of a fuel expenditure that was as high as $2000/year in 2007. Even when sunlight is obscured and the grid is used, some electric vehicles cost as little as 1 cent/mile to operate.

2020 : Gasoline fuels under one third of the passenger car miles driven in the US. Electricity and biomass fuels account for the remaining two-thirds, with electricity being the one crowding the other two out (electricity itself is primarily derived through solar, wind, and nuclear sources by now). US total oil consumption, in barrels, has decreased only somewhat, however, due to commercial airline flights (which still use petroleum-derived fuels). At the same time, oil consumption in relation to total US GDP is actually under half of what it was in 2007.

2025-30 : Electricity (indeed, clean electricity) now fuels nearly all passenger car miles driven in the US. There is no longer any significant fuel consumption cost associated with driving a car, although battery maintenance is a new aspect of car ownership. The average car weighs only 60% as much as its 2007 counterpart, yet is over twice as resistant to dents. Most cars are self-driven by on-board intelligence, so human drivers can literally sleep in the car while being delivered to their destination.