Blog

If, like me, you find President Obama’s already overused “Win the Future” catchphrase catching in your throat, you might also be wondering how he decided on this feel-good, but nonsensical slogan.* It seems incredible that an administration that so readily talks about future technologies doesn’t give better consideration to the strategies behind their promotion. Reducing the dialog to the metaphor of competition diminishes it before it has even gotten started. The future isn’t a prize, a thing to be won, it’s a process, a never-ending unfolding of the possible. As futurist Jamais Cascio recently wrote, in encouraging us to “Win the Future”, President Obama “is not just asking us to do something that simply cannot be done, he’s asking us to accept a meager, ephemeral sense of triumph, when we could do so much more.”

It should also make us wonder about the government’s collective grasp of the concepts and processes essential to futures thinking. As complex as our challenges are and will be in the coming decades, we need to be using all of the tools at our command.

I’ve wondered from time to time about the idea of some sort of federal “Department of the Future” or “Office of Foresight”. Part of me rebels against such an Orwellian-sounding governmental agency, but on the other hand, we need to be making policy decisions with a much longer-term, systems-oriented view than we currently do.

Certainly, there are other departments and agencies that incorporate futures methodology – intelligence agencies and the military, for example. But there can be little doubt these entities have a particular focus and are therefore limited by their own filters. Would we be better served by a nonpartisan futures equivalent to, say, the Congressional Budget Office? Something which could provide an assessment of potential impacts for a particular piece of legislation? Could we reduce wasted tax dollars, not to mention avoid unintended consequences, especially ones that could have been readily foreseen?

Remember the tax credit for ethanol production? Because a significant percentage of corn crops were diverted to energy production, food prices around the world skyrocketed. (Commodity speculation was also a contributing factor, though it can be argued this was exacerbated by the policy.) People in some parts of the developing world suffered considerable hardship and many starved. Was such an outcome so impossible to anticipate?

Despite this, I’m not saying I’m entirely convinced an “Office of Foresight” is the right way to go. But I do think it’s worthy of exploration and dialog. It’s not as if there aren’t already precedents. In the UK, the government’s Foresight Programme was established to help them think systematically about the future and its application to developing policy and strategy.

Of course, there are already futures organizations that inform and advise government, but could we be better served by a more fundamental integration of these disciplines into our policy making process?

Maybe this is a good idea. Maybe it isn’t. What do you think? As for me, I know we can do better than to approach the future with the same mentality we bring to a basketball game.

*(Full disclosure: I was and still am an Obama supporter and contributed to his 2008 presidential campaign.)

On January 13, 2011, IBM’s Watson supercomputer competed in a practice round of Jeopardy, the long-running trivia quiz show. Playing against the show’s two most successful champions, Ken Jennings and Brad Rutter, Watson won the preliminary match. Is this all a big publicity stunt? Of course it is. But it also marks a significant milestone in the development of artificial intelligence.

For decades, AI – artificial intelligence – has been pursued by computer scientists and others with greater and lesser degrees of success. Promises of Turing tests passed and human-level intelligence being achieved have routinely fallen far short. Nonetheless, there has continued to be an inexorable march toward more and ever more capable machine intelligences. In the midst of all this, IBM’s achievement in developing Watson may mark a very important turning point.

Early attempts at strong AI or artificial general intelligence (AGI) brought to light the daunting complexity of trying to emulate human intelligence. However, during the last few decades, work on weak AI – intelligence targeted to very specific domains or tasks – has met with considerably more success. As a result, today AI permeates our lives, playing a role in everything from anti-lock braking systems to warehouse stocking to electronic trading on stock exchanges. Little by little, AI has taken on roles previously performed by people and bested them in ways once unimaginable. Computer phone attendants capable of routing hundreds of calls a minute. Robot-operated warehouses that deliver items to packers in seconds. Pattern matching algorithms that pick out the correct image from among thousands in a matter of moments. But until now, nothing could compete with a human being when it came to general knowledge about the world.

True, these human champions may yet best Watson, a product of IBM’s DeepQA research project. (The three-day match will air February 14–16.) But we only need to think back to 1997 when IBM’s Deep Blue defeated world chess champion Garry Kasparov to understand that it doesn’t really matter. Kasparov had handily beaten Deep Blue only a year earlier, though the 1996 match did mark the first time a computer won a single game in such a match. Today, just as then, the continuing improvements in computer processing speed, memory, storage and algorithms all but ensure that any such triumph would be fleeting. We have turned a page on this once most human of intellectual feats and the world won’t be the same again.

So what can we look ahead to now that we’ve reached this milestone? In the short term, IBM plans to market their technology and profit by their achievement. Initially, the system price will be high, probably in the millions of dollars, but like so much computer technology, the price will plummet over the coming decade. As the technology becomes more widely used, a range of tasks and jobs previously considered safe from AI will no longer be performed by human workers. Protectionist regulations may attempt to save these jobs but these efforts will probably be short-lived. The resulting large-scale unemployment will require a rethinking of government institutions and safety nets, as well as corporate business models.

At the same time, this type of general knowledge AI (it’s far too early to call it AGI) will contribute to greater and more rapid advances in machine intelligence. Such technology could bootstrap the Semantic Web into broad usage. In all likelihood, it will be used to create personal intelligent agents, giving users the virtual equivalent of a staff of assistants. And eventually, it could facilitate the development of a true artificial general intelligence or at least contribute to the education of such an AGI.

Will such an intelligence be conscious? Will it be self-improving, leading to a positive feedback loop that brings about a powerful and hopefully benign superintelligence? Only time will tell. But perhaps one day, on a future holographic version of Jeopardy, we’ll be presented with clues to which the correct response will be, “What was the Singularity?”

January 1, 2011 marked yet another milestone for the Baby Boomers, the massive post-war generation born between 1946 and 1964. As of this New Year’s Day, the first of the boomers turned 65, with an additional ten thousand becoming senior citizens every single day. As many have long observed, this will create pressures and challenges that will ripple throughout our society. Pensions, health care, housing and jobs are only a few of the areas that will be impacted by this outsized demographic shift.

At the same time, many among the youngest of the adult generations – Generation Y, or the Millennials – now find their career opportunities considerably more limited than their parents once did. Currently, adults from 18 to 24 years old are experiencing two to three times the unemployment rate of the rest of the working population. (Bureau of Labor Statistics, Nov. 2010) With the Great Recession and its glacially slow recovery likely to hold unemployment at these levels for at least several more years, the Millennials have a rough road ahead of them. And let us not forget, it’s a road we all travel on together.

On this scale, stagnated careers and lost opportunities impact more than just the lives of individuals and their families. Federal, state and local tax revenues, as well as Social Security and Medicare, would all be affected by the resulting reduction in earnings. The inability of a large fraction of this demographic to participate in home ownership would also depress home values as demand withers relative to supply. But perhaps most concerning is the potential for civil unrest.

Sociologists have often noted a strong correlation between high levels of unemployment among young adults, particularly young men, and the prevalence of war, gang activity and crime in general. Extreme disparities between different segments of society tend to lead to greater levels of discontent, particularly when that segment is disaffected youth. Given the high cost to society of such behavior, stepping up our investment in programs that facilitate education, re-training, job creation and placement would be money wisely spent.

Of course, technology is having a considerable impact on employment as well. Productivity gains due to computerization have been responsible for progressive job loss in some sectors for years. This is probably also one of the reasons employers have been slow to rehire during the current recovery. (Significant job growth has also resulted from the computer revolution. Whether this has led to a total net gain or net loss of jobs is beyond the scope of this post.) Computerized supply chain management, high-speed communications and other technology advances have made global capitalism possible and contributed to outsourcing, another reason for fewer jobs at the local level. Looking ahead, as systems become more intelligent and robotics become more adept, far more jobs are likely to disappear in the coming years.

It’s important, too, to remember that this is not an exclusively American phenomenon. Today, Japan’s young adults experience unemployment or underemployment at twice the national rate. In Europe, the disparity is even worse. A recent New York Times article on the lack of employment opportunities for young adults in southern Europe reported “an epic brain drain of college graduates” as they seek work elsewhere.

The economic balance of the world is shifting. The old powerhouses of the U.S., Europe and Japan are rapidly being outstripped by the BRICs (Brazil, Russia, India and China), with the CIVETS (Colombia, Indonesia, Vietnam, Egypt, Turkey and South Africa) and other acronyms coming up fast. In nature, systems tend to adjust according to differentials in potential and the same is true of nations and economies. If we’re not careful, the confluence of demographic, technological and economic shifts could quickly lead to a future in which the Baby Boomers find themselves in a precarious and underfunded retirement.

What do nuclear technology, embryonic stem cells, synthetic life and molecular nanotechnology have in common? For many people, these are strange and frightening concepts which conjure erroneous, often very dystopic visions of the future. They’re also technologies with enormous potential; they could seriously damage our world or they could be immensely beneficial. But perhaps most importantly, all of them are inevitable.

Change means risk, so through the ages, a part of our brain has evolved to avoid big changes. Because of this, some of us are inclined to want to stop progress altogether or at least to slow it down. Some new technology or knowledge has the potential to be dangerous and so it’s argued that it should be proscribed, banned, halted. But of course, it’s never that simple. The fact is, when the time comes, we can’t stop a technology from coming into existence any more than we can stop a freight train with our bare hands.

In his new book, “What Technology Wants”, Kevin Kelly makes the argument that technology is autonomous and has its own distinct direction and momentum. He details (what many have long known or suspected) that most inventions are made not because of someone’s singular genius, but because the time is right.

Logarithms. Calculus. Oxygen. Evolution. Photography. Steamboats. Telegraphs. Telephones. Incandescent bulbs. Typewriters. Transistors. Nuclear bombs. All of these, and so very many more, were independently discovered or invented at nearly the same time in history. The prevalence of these “simultaneous inventions” strongly suggests that when the time is right, a particular technology will be thrust upon us, whether we want it or not.

This isn’t to say that any of this is predetermined; only that once a particular set of conditions, capabilities and knowledge is in place, the next technological step is probably going to occur. While we can’t say the flux capacitor will be invented on August 23, 2029, we can make a reasonable estimate of when certain technologies are likely to be feasible. This can aid us in preparing for their arrival and in our endeavors to ensure their impact is as beneficial as possible.

Efforts to ban knowledge and the technologies it makes possible are doomed to failure. Stop research in one country and it will almost certainly continue somewhere else. Drive it underground and it will still go on, only without adequate regulation and oversight. Prohibiting emerging technologies will ensure you fall behind the competition. It will probably also mean not having a say in how that technology is developed or what direction it ultimately takes.

New technology is inevitable. Each new addition is just waiting its turn on the timeline of possibility.

That China is barreling ahead in its development of supercomputers should give the U.S. considerable cause for concern. China has devoted significant resources to their supercomputer program in recent years, resulting in their ranking earlier this year at the number two spot on the TOP500 list. TOP500.org ranks the world’s 500 fastest supercomputers according to their performance on a dense system of linear equations. These tests yield a score based on the computer’s speed measured in double precision floating point operations per second (flops).

To give a little perspective: China didn’t have a single supercomputer ranked in the TOP500 until the mid-1990s. By June 2004, they had their first ranking ever in the top ten. In May 2010, their Nebulae system became the second fastest in the world with a performance of 1.271 petaflops. (A petaflop is 10¹⁵ floating point operations per second.) While the Chinese still only have one tenth the number of TOP500 supercomputers the U.S. has, they’ve been quickly catching up based on this metric as well. (Note: TOP500.org ranks the world’s most powerful, commercially available, non-distributed computer systems. There are numerous military and intelligence agency supercomputers in many countries not included in this list.)
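For a rough sense of how a flops score like this is derived, here’s a back-of-the-envelope sketch in Python: solve a dense linear system – the same class of problem the TOP500 benchmark uses – and divide the nominal operation count for LU factorization, about 2n³/3, by the wall-clock time. The matrix size, timing method, and operation count are simplifying assumptions on my part; the actual HPL benchmark is far more rigorous.

```python
import time
import numpy as np

# Build a random dense n-by-n linear system Ax = b.
n = 1000
rng = np.random.default_rng(42)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Time the solve and estimate achieved flops from the
# nominal LU factorization operation count (~2/3 * n^3).
start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 / elapsed
print(f"~{flops / 1e9:.1f} gigaflops on this machine")
```

On typical desktop hardware this reports somewhere in the gigaflop range – a useful reminder of just how many orders of magnitude separate an ordinary computer from a petaflop machine.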

China’s Nebulae system operates from the newly built National Supercomputing Centre in Shenzhen. This is also the site of some very recent and very extensive construction which will presumably house some very serious supercomputing power in the near future. “There clearly seems to be a strategic and strong commitment to supercomputing at the very highest level in China,” stated Erich Strohmaier, head of the Future Technology Group of the Computational Research Division at Lawrence Berkeley National Laboratory.

The next major goal for supercomputers is the building of an exascale system sometime between 2018 and 2020. Such a system would be almost a thousand times faster than the Jaguar supercomputer at Oak Ridge National Laboratory, currently the world’s fastest. The U.S. Exascale Initiative is committed to developing this technology which brings with it many different challenges of scale. At the same time, Europe and China have accelerated their investment in high-performance systems, with Europeans on a faster development track than the U.S. There are concerns the U.S. could be bypassed if it doesn’t sustain the investment to stay ahead.

This isn’t just about who has the highest ranking on a coveted list – it’s not a sporting event with a big fanfare for the winner. These computers are crucial for modeling, simulation, and large-scale analysis – everything from modeling complex weather systems to simulating biological processes. As our understanding of highly complex systems grows, the only way we’re going to be able to keep moving forward is with more and ever more computing power. At the same time, exascale computing is anticipated to be a highly disruptive technology, not only because of what it will be able to do, but because of the technologies that will be created in the course of developing it. Ultimately, these technologies will end up in all kinds of new products, not unlike what happened with the Apollo space program. Falling behind at this stage of the game would put the U.S. at a big disadvantage in almost every aspect of science and product development.

Just as concerning, I believe, is what this would mean for developing an AGI or artificial general intelligence. There’s been a lot of speculation by experts in the field of AI as to when (if ever) we might develop a human-level artificial intelligence. A recent survey of AI experts indicates we could realize human-level AI or greater in the next couple of decades. More than half of the experts surveyed thought this milestone would occur by mid-century. While there are many different avenues which may ultimately lead to an AGI, it’s a good bet that most of these will require some pretty serious computing power both for research and potentially for the substrate of the AGI itself.

It’s been speculated that there are considerable risks in developing a computer with human-level or greater intelligence, but there are a number of risks in not doing so as well. Whoever builds the first AGI will very probably realize an enormous competitive advantage, both economically and politically. Additionally, the world faces a growing number of existential threats which AGIs could play a critical role in helping us to avoid.

During this time of budget deficits and spending cuts, it would be very easy to decide that Big Science programs, such as the Exascale Initiative, aren’t as crucial to the nation’s well-being as they really are. This would be a grave mistake. The question isn’t how we can afford to commit ourselves to this research, but how we can afford not to.

(NOTE: Beginning with this entry, I’ll be cross-posting my blog at the World Future Society – www.wfs.org.)

WorldFuture has come to a close, but the ideas and inspirations it generated will carry on well into the future. Held last week in Boston, the annual futurist conference was often profound, consistently thought-provoking, and even occasionally unsettling. With nearly a hundred presentations, workshops, tours, seminars and keynote speeches, over 900 attendees from around the world had plenty to think and talk about. This year’s conference theme was “Sustainable Futures, Strategies and Technologies”, made all the more relevant given the economic and environmental challenges the world has recently had to face.

The sustainability theme ran through a broad range of fields and topics. A small sampling of these presentations included “Global Efforts to Develop Sustainable Public Health Initiatives”, “Achieving Low-Carbon Economic Growth”, and “Sustainability and Future Human Evolution.”

While sustainability was the official conference theme, accelerated growth could easily have been designated the unofficial one. Technology ethicist Wendell Wallach addressed it in his opening speech, “Navigating the Future: Moral Machines, Techno Sapiens, and the Singularity”. Inventor and author Ray Kurzweil revisited the concept repeatedly in his keynote presentation, “Building the Human Mind.” (Kurzweil mentioned exponential growth enough times that some attendees later joked about turning it into a drinking game.) Many of the other presenters also talked about how the nature of technological progress, especially the convergence of previously unrelated fields, is driving this acceleration. For me, it was truly exciting to be among so many people who readily accept and incorporate this important concept.

Given my own inclinations, my favorite sessions tended toward the more technical. Among these were “Technology Futures and Their Massive Potential Societal Impacts”, “Humans in 2020: The Next 10 Years of Personal Biotechnology”, “Challenges and Opportunities in Space Medicine” and “The Human-Computer Interface.” Unfortunately, I couldn’t attend every presentation I wanted to see. That’s the downside of a conference of this scale: there’s no way to do it all. But then on the plus side, there’s definitely something for everyone.

For me, the best thing about WorldFuture is that while the conference themes and presentations may change from year to year, there’s always a strong belief in the need to look ahead. The world faces many serious environmental, technical and social challenges in the coming decades. We’re going to need serious foresight and planning if we want to make it a positive, sustainable future that’s supportive of our citizens, our economies and our planet.