This is a personal blog updated regularly by Dr. Daniel Reed, Vice President for Research and Economic Development at the University of Iowa.
These musings on the current and future state of technology, scientific research and innovation are my own and don't necessarily represent the University of Iowa's positions, strategies or opinions.

Technology Policy

July 18, 2013

In addition to this blog, I also write for the Communications of the ACM (CACM). I recently posted an essay on the future of exascale computing there. As I noted in the CACM blog, we need a catastrophe – in the mathematical sense – a discontinuity triggered by a sustained research and development program that combines academic, industry and government expertise to leap the chasm of technical challenges we now face.

You can read the hearing charter and my extended, written testimony on the hearing web site and watch an archived video of the hearing. In my written and oral testimony, I made four points, along with a specific set of recommendations. Many of these points and recommendations are echoes of my previous testimony, along with recommendations from many previous high-performance computing studies.

Oral Testimony

First, high-performance computing (HPC) is unique among scientific instruments, distinguished by its universality as an intellectual amplifier.

New, more powerful supercomputers and computational models yield insights across all scientific and engineering disciplines. Advanced computing is also essential to analyzing the torrent of experimental data produced by scientific instruments and sensors. However, it is about more than just science. With advanced computing, real-time data fusion and powerful numerical models, we have the potential to predict the tracks of devastating tornadoes such as the recent one in Oklahoma, saving lives and ensuring our children's futures.

Second, the future of U.S. computing and HPC leadership is uncertain.

Today, HPC systems from DOE's Oak Ridge, Lawrence Livermore and Argonne National Laboratories occupy the first, second and fourth places on the list of the world's fastest computers. One might surmise that all is well. Yet U.S. leadership in both deployed HPC capability and in the technologies needed to create future HPC systems is under challenge.

Other nations are investing strategically in HPC to advance national priorities. The U.S. research community has repeatedly warned of the eroding U.S. leadership in computing and HPC and the need for sustained, strategic investment. I have chaired many of those studies as a member of PITAC, PCAST, and National Academies boards. Yet these warnings have largely gone unheeded.

Third, there is a deep interdependence among basic research, a vibrant U.S. computing industry and HPC capability.

It has long been axiomatic that the U.S. is the world's leader in computing and HPC. However, global leadership is not a U.S. birthright. As Andrew Grove, the former CEO of Intel, noted in his famous aphorism, "only the paranoid survive." U.S. leadership has been repeatedly earned and hard fought, based on a continued Federal government commitment to basic research, translation of research into technological innovations, and the creation of new products.

Fourth, computing is in deep transition to a new era, with profound implications for the future of U.S. industry and HPC.

U.S. consumers and businesses are an increasingly small minority of the global market for mobile devices and cloud services. We live in a "post-PC" world where U.S. companies compete in a global device ecosystem. Unless we are vigilant, these economic and technical changes could further shift the center of enabling technology R&D away from the U.S.

Recommendations for the Future

First, and most importantly, we must change our model for HPC research and deployment if the U.S. is to sustain its leadership. This must include much deeper and sustained interagency collaborations, defined by a regularly updated strategic R&D plan and associated verifiable metrics, and commensurate budget allocations and accountability to realize the plan's goals. DOE, NSF, DOD, NIST and NIH must be active and engaged partners in complementary roles, along with long-term industry engagement.

Second, advanced HPC system deployments are crucial, but the computing R&D journey is more important than any single system deployment by a pre-determined date. A vibrant U.S. ecosystem of talented and trained people and technical innovation is the true lifeblood of sustainable exascale computing.

Finally, we must embrace balanced, "dual use" technology R&D, supporting HPC while also ensuring the competitiveness of the U.S. computing industry. Neither HPC nor big data R&D can be sacrificed to advance the other, nor can hardware R&D dominate investments in algorithms, software and applications.

April 24, 2013

N.B. I also write for the Communications of the ACM (CACM). The following essay recently appeared on the CACM blog.

How often have you picked up a scholarly journal in a discipline far removed from your expertise, only to be stymied and mystified by the disciplinary jargon? It can be humbling and intimidating when one fails to understand the meaning of all the words in an article's title or the abstract. When coupled with the contextual knowledge often implicitly assumed by the authors, the gulf of understanding yawns wide and deep.

This epistemological and linguistic chasm separates and isolates us even within the broad tent of our own discipline, which spans everything from the fundamental theory of computability to the professional practice of informatics. If you have any doubt, open the ACM Digital Library and scan a few articles in a specialty far removed from your own.

In a world where discoveries increasingly lie at the boundaries of traditional research disciplines, simplifying communication and encouraging multidisciplinary dialog and partnership have never been more necessary. In almost every case, computing is an essential element of disciplinary and multidisciplinary research. Thus, it is time for us to embrace writing as a collaborative enabler, rather than a research burden.

All too often, we academics write in a strange argot of disciplinary esoterica that can obscure the very ideas we seek to communicate. If you have ever encountered an article like the following, you know what I mean.

However linguistically facile and intellectually adept, the authors and putative ruminant experts failed to say what they really meant ("wait for the cows to come home") and why that might matter.

In a similar spirit, the late Richard Hamming once famously noted, "The purpose of computing is insight, not numbers." The academic publishing cognate is best summarized as, "The purpose of writing is communication, not obscuration." There is also an important corollary, "Write to communicate, not to impress or intimidate." Yes, subtlety and nuance are important, but they are mere handmaidens to clarity and lucidity.

Even when we avoid these linguistic traps, another, equally deadly one waits to ensnare – turgid and passive prose that invites only slumber. As anyone knows who has either served as a journal editor or reviewed a seemingly endless stack of conference paper submissions, passive, wordy and meandering prose makes identifying the key ideas and assessing their importance even more difficult.

Technical papers are not page-turning spy novels, nor should they be, but they can still be interesting, clear and engaging as they convey the essential facts. As a writer, your job is to make the reading easy; you want your papers to be read and appreciated.

The Message is the Message

It is always dangerous to write an essay about writing, lest one be lampooned for the very deficiencies one seeks to highlight. Such is life. My goal is to focus attention on an important issue.

While continuing to pursue core research in our own discipline of computing, I believe we must also communicate effectively with our peers in the arts and humanities, science and engineering, medicine and public policy. We cannot all be polymaths, but as writers, we can do more to lower the disciplinary drawbridges and invite readers into our intellectual castles.

January 22, 2013

Warning: This is a long post, reflecting the complexity and nuance of Internet governance.

Just before the 2012 holidays, the World Conference on International Telecommunications (WCIT), pronounced as “wicket” by the cognoscenti, concluded in Dubai. In the buildup to the WCIT, there was much handwaving and fearmongering, political position jockeying, and backroom negotiations by multinational companies, governments, non-governmental organizations (NGOs), technology policy wonks and political pundits. There were frequent stories in the trade and policy rags and the mainstream press, including the New York Times. How, you might ask, could such a seemingly obscure conference engender such an international frenzy?

In the spirit of full disclosure, I should reveal that I spent a good portion of the last three years working on this issue while heading Microsoft's Technology Policy Group. I traveled the world, meeting telecommunications policy leaders, trade associations, companies and senior government representatives, including officials from the United States Departments of State and Commerce and the Federal Communications Commission (FCC). With that disclosure, as well as noting that what follows are my personal opinions, a bit of background and history is in order.

Telecommunications History

The International Telecommunication Union (ITU), a United Nations organization, organizes the WCIT. The ITU began as the International Telegraph Union, which suggests some of its origins and history. Originally responsible for coordinating global use of the radio spectrum and, more recently, assisting in the international assignment of satellite orbits, the ITU now operates through three units: the ITU-R (radiocommunications), ITU-T (standardization) and ITU-D (development).

As a UN organization, the ITU's members are countries, rather than individuals, companies or NGOs. Despite the one country, one vote governance model, there is multistakeholder participation in some informal aspects of the ITU. I was a frequent visitor to the ITU, and I participated in ITU retreats and CTO roundtables. I know the ITU leadership well.

Because major ITU policies and approaches, including tariffs, are codified as international treaties – the International Telecommunication Regulations (ITRs) – policy changes are fraught with all the complexities that accompany any international treaty. Indeed, before the 2012 WCIT, the ITRs were last updated in 1988. The languid pace of treaty change is but one of many challenges posed by the ITRs. It has also brought into question the relevance of the ITU and its policies and approaches. The country voting model and UN culture do not match the freewheeling Internet world.

Internet Time

Looking back twenty-four years to the previous WCIT takes one almost to prehistory in Internet time. The TCP/IP protocols were in place, but international networking was largely a research or private network curiosity, and dialup modems defined the consumer network experience. The World Wide Web was still a gleam in the eye of Tim Berners-Lee; the Mosaic web browser had not burst on the scene; the dot-com frenzy was yet to come; mobile phones were rare, expensive and bulky; and plummeting international telephony charges due to IP-based audio and video calling were still over the horizon. In short, the telecommunications world of 2012-2013 is radically different from the staid landscape of 1988.

The technological and economic changes wrought by the Internet in those twenty-four years are equaled by the social and cultural changes. Geographically and politically separated regions are now digitally interconnected in ways unthinkable just a few years ago. The global flow of information has challenged governments, companies, NGOs and individuals to adapt legal frameworks, regulations, technical operations, social expectations and national security operations. (See Globally Connected, Globally Worried and Information Privacy: Changing Norms and Expectations for a bit of perspective.)

Multifaceted Internet Governance

One can parse Internet governance challenges in several ways: technical; legal and economic; social and ethical; and national security. Let's begin with the technical issues, which are perhaps the simplest. Operating the global Internet effectively requires de facto adherence to an evolving set of technical standards. The Internet Corporation for Assigned Names and Numbers (ICANN) manages the domain name system (DNS) and IP address blocks on behalf of the international community.

Shifting to economics, companies, countries, NGOs and privacy advocates all worry about the transnational flow of data, differing laws and intellectual property protections, jurisdictional constraints and safe harbors. If you are a Kenyan national working for a German company with operations in Brazil, and you travel to India, just whose laws govern you and your company? The answer is murky, but generally all of the above, plus some others.

Nevertheless, Internet service operators must respond to civil and criminal legal requests for data every day. For those subject to multiple legal jurisdictions, it is sometimes impossible to satisfy the laws of all the countries involved. For those interested, and suffering from insomnia, I highly recommend reading the Bank of Nova Scotia case (United States v. Bank of Nova Scotia, 740 F.2d 817 (11th Cir. 1984)).

Mixed with all of these legal and economic issues are elements of international trade, protectionism and economic development. The rise of cloud computing, whose economics have driven massive scale and data consolidation, has exacerbated these concerns about extraterritorial jurisdiction and control worldwide, particularly when U.S. companies dominate cloud computing and many fear the U.S. PATRIOT Act.

Then there are the crucial issues of human rights and free speech. How does one reconcile global communication, which brings differing international norms and expectations for freedom of speech into direct and day-to-day conflict? It is more than an abstract question for millions of Internet users, and one with no simple answers.

One can debate the ethics, human rights records and legitimacy of certain governments, but the sovereignty of nations and their right to establish laws within their territory is an unquestioned aspect of international law. I have seen senior representatives of governments with widely divergent views on freedom of expression all agree that their governments have a critical role to play in Internet governance. Those same representatives differed markedly in their delineation of appropriate governmental roles and in their definitions of free speech and its appropriate limitations.

Finally, there are issues of national security, information security and cyberwarfare that are beyond the scope of this essay. In a knowledge economy, information is advantage, and economic or technical disruption via the Internet itself can be a form of low-grade warfare. Likewise, with military capabilities increasingly dependent on smart, network-connected weapons, those capabilities are themselves objects of defense and attack.

Back to the WCIT

This interplay of technology, international law, economics and trade, social norms and human rights, and national security is a witch's brew of complexity, with diverse stakeholders and perspectives. Many of them encamped in Dubai at last December's WCIT.

Thus, it is not surprising there was acrimony and controversy, with claims and counterclaims; one could hardly expect otherwise. Central to these debates were concerns – some say unfounded, though others disagree vehemently – that the U.N., via the ITU, might assume greater control of Internet governance, shifting from the multistakeholder model toward greater centralization and government control. Some of this was also tied up in the ITU's own search for new relevance.

What emerged from two weeks of painful negotiations was what can best be described as an uneasy peace. After much debate, 89 of 152 countries signed an amended version of the ITRs. The U.S., Japan, Canada, Germany, India, and the U.K. were not among the signatories, objecting to attempts to curtail multistakeholder governance. In short, it was an ugly mess.

Futures

What's next? The global community is strongly divided along ideological lines. Thus, we are likely to see even greater dichotomy in government intervention, more uncertainty in international law, and limits on global information flow. Despite this, I believe the Internet will continue to grow and evolve organically. Too many stakeholders want that to happen, and their voices must be heard.

Thus, I am a strong proponent of the multistakeholder model. I do recognize, though, that governments have an important role to play, just as they do in other domains. This is a messy process, and it will undoubtedly continue that way. Such is the nature of debate.

After the United States Congress Joint Select Committee on Deficit Reduction, otherwise known as the Supercommittee, failed to create an acceptable bipartisan proposal for addressing the U.S. Federal budget deficit, both parties decided to defer further discussions until after the November 2012 elections. As the January 2013 deadline for automatic, across-the-board cuts draws ever nearer, the discussions have begun again, albeit with accusations and finger-pointing on both sides of the political aisle.

Research Interests

Against this backdrop, all of us in the research community have been sounding the alarm regarding the consequences of the research cuts that sequestration would necessitate. There is no doubt that substantial cuts to basic research would adversely affect the long-term future of U.S. innovation and global competitiveness, upset already strained university budgets, damage current research projects in a wide range of disciplines, and disrupt the lives of thousands of faculty, post-doctoral associates and students.

That said, it is important to acknowledge that we in research are a special interest group, though one whose interests are vital to the future. I realize that some would take umbrage at my description of research as a special interest group, but in the political lexicon, we are, just as are health care and environmental protection. Unless we accept the realpolitik of budget exigencies and the conflicting goals and objectives of large, disparate multiparty negotiations, the research community will be neither effective in making its case nor realistic in managing the process and likely outcomes.

One cannot simply cry, "This is good, or this is bad"; one must make cogent arguments about why certain choices yield differential benefits relative to the budget negotiators' positions and policies and why those choices are better than others. (See Being Bilingual: Speaking Technology and Policy.) Remember, there are far more good ideas than there are resources, and this is equally true in government and science.

Creating Opportunity

Despite the political polarization in Washington, I still believe a budget compromise will emerge. It will not be perfect – such is the very nature of compromise – but I suspect it will include some acceptable combination of revenue increases and budget reductions. Despite its politicization, there is still broad recognition of the importance of basic research; I believe it will fare relatively well when the Sturm und Drang is done.

However, with research proposal success rates plummeting and Hobbesian choices between research infrastructure and investigator support now necessary, we face major challenges. In the apocryphal phrasing of Ernest Rutherford, "We have no money. We must think."

Thinking will undoubtedly mean questioning some perceived verities and deeply held beliefs. NIH R01 awards will no longer be de facto expectations for promotion and tenure. Research infrastructure sharing across institutions may well become the norm, and not just for large-scale instrumentation. Cross-disciplinary tradeoffs about relative investment will become even more pressing. Industry-academic partnerships will rise in importance, as we develop more effective and mutually beneficial industrial collaboration frameworks. However, these industry partnerships will not be a surrogate for federal funding of basic research.

Whatever the outcomes, by revisiting some of our assumptions, we can create more free energy in the research system and dedicate precious resources to new and emerging opportunities. I am confident that many of these new opportunities lie at our disciplinary interstices, and hybridization and cross-fertilization can yield new insights and outcomes. More broadly, the coupling of the arts and humanities with public policy, science and engineering, and biomedicine can be transformative. This is consilience in its highest form. However, we must think.

Take heart and keep the faith. The future can be and will be bright – if we make it so.

November 06, 2012

N.B. I also write for the Communications of the ACM (CACM). The following essay recently appeared on the CACM blog.

To appropriate a line from The Music Man, there is trouble here in River City, and anyone who cares about the future of scientific discovery and innovation should be worried. What is that trouble, you ask? It is the divide separating the land of high-performance computing (HPC) and big data, and the political and funding implications created by this divide.

Whether in the U.S., Europe or Japan, the competition for research and infrastructure funding is intense. In the midst of our lingering economic malaise, budgets are being stretched to the breaking point. Should we invest in a new telescope or a new accelerator, a new polar station or a new biology initiative? These are legitimate, though painful questions, borne of budget exigencies and fiscal realities.

Many of us are concerned that in this time of limited resources, we could face a similar funding competition that pits HPC, particularly exascale plans, against big data. This competition would be disastrous for science, for computing and computational science research, for infrastructure deployment and for global innovation and economic growth. Both HPC and big data are essential elements of the research portfolio. We must make sure this rumble in the policy halls does not take place.

Understanding Cultures, Preventing Conflict

Even those who have never heard the name of the philosopher George Santayana can parrot his famous dictum, "Those who cannot remember the past are condemned to repeat it." Thus, it is worth drawing a few lessons from the Peloponnesian War, which pitted two great Greek city-states, Athens and Sparta, against one another. Although Sparta was ultimately the military victor, the adverse social, economic and political effects devastated all of Greece, and the war marked the end of the Greek golden age. We cannot afford to reprise the Peloponnesian War in the guise of big data versus HPC.

The root of this potential conflict is the differing norms of computer science and computational science. Big data and machine learning are the creations of the academic computing and business cultures, while computational science is more the offspring of science and engineering. Both share historical roots, and each is cross-fertilized by the other. They need not and should not be inimical, especially since many of the technical challenges are common to both.

Some see big data as an egalitarian opportunity, one that could readily benefit both big and small science and yield scientific and economic benefits very quickly. Let me be clear, this statement is unquestionably true. The unprecedented richness of scientific and engineering data being produced by large-scale instruments and ubiquitous sensors is ripe for harvest and correlation via advanced data mining. Implicit in this is the need for investment in both new data analysis tools and techniques and in large-scale data repositories. (See My Scientific Big Data Are Lonely.)

Others see exascale computing system proposals as a quixotic quest for national bragging rights that will benefit only a few. Let me be equally clear, this statement is unquestionably false. Yes, there are elements of national and regional competition in the Top500 rankings. However, the underlying technology challenges and scientific opportunities are profound. Low-power memory and processor designs, post-Dennard device scaling and the software and reliability challenges of large, complex systems are all at the forefront of 21st century computing system design, with deep implications for the future of the information economy. Similarly, some of our most pressing scientific and social problems in climate change, energy and biomedicine depend on powerful, advanced computing capabilities. Implicit in this is the need for continued, balanced investment in technology research and high-end system deployment for HPC.

A Grand Concord

We need a concord and strategic research investment plan that recognizes the shared importance of HPC and big data. Both warrant investments in basic research, and both need investments in large-scale infrastructure deployments. (Make no mistake, though, research and infrastructure are different, as I noted in Research {preposition} Infrastructure.) In a time of straitened budgets, this will not be easy, and it will undoubtedly require political compromise.

Neither the proponents of big data nor those of HPC may get all they want on the time scale they desire, but neither can be sacrificed for the other. If the proponents on each side adopt strategies that treat the other community as the enemy, the relevant lesson of the Peloponnesian War is unmistakable – in such a battle, there are only losers. That would be disastrous for us all, particularly when there is a win-win so tantalizingly close.

October 31, 2012

I am a member of the U.S. National Academies Board on Global Science and Technology (BGST). As the name suggests, the role of BGST is to examine the shifting nature of the global science and technology enterprise and its implications. These include the global flow of intellectual talent and capital, the interplay of government policies, research and development priorities, innovation and technology transfer, and global competitiveness and security. This is a wide-ranging mandate, which is both exciting and challenging.

Lest this seem mere academic punditry, remember that 30-40 percent of U.S. net economic growth in recent decades has been due to advances in information technology. This is not just a tale of Silicon Valley, but also one for the entire country's economy.

Beyond Dennard Scaling

Moore's Law, the notion that the number of transistors in a given silicon area doubles roughly every two years, is not a law or even a theorem. Rather, it was an empirical observation, originally made by Gordon Moore in 1965. For over forty years, it has continued to hold true by virtue of enormous intellectual effort, ongoing architectural and software innovation, and billions of dollars of investments in process technology and silicon foundries. In turn, consumers, businesses and governments have been the happy beneficiaries of faster microprocessors, more powerful, inexpensive and ubiquitous computing devices and a rich and varied suite of software applications.
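
To make the doubling arithmetic concrete, here is a minimal Python sketch of the "doubling every two years" formulation. The 1971 anchor point (roughly the 2,300-transistor Intel 4004) is an illustrative assumption of mine, chosen only to make the numbers vivid, not part of Moore's observation itself.

```python
# A minimal sketch of Moore's Law as "doubling every two years."
# The anchor point (~2,300 transistors in 1971, roughly the Intel 4004)
# is an illustrative assumption, not a calibrated industry figure.

def projected_transistors(base_count: float, base_year: int, year: int) -> float:
    """Project a transistor count, doubling every two years from the anchor."""
    return base_count * 2.0 ** ((year - base_year) / 2.0)

if __name__ == "__main__":
    for year in (1971, 1991, 2011):
        count = projected_transistors(2_300, 1971, year)
        print(f"{year}: ~{count:,.0f} transistors")
```

Run it and the counts climb from thousands to billions over four decades, which is exactly why even small deviations from the doubling cadence matter so much.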

However, there is bad news. The continued and seemingly miraculous improvement in general-purpose processor performance – itself a happy consequence of Dennard scaling – is now over, halted by physical limits on transistor shrinkage. To be clear, Moore's Law continues, with the number of transistors on a chip continuing to double, but the transistors no longer become more energy efficient as they shrink. The result has been bounds on microprocessor clock rates due to energy dissipation constraints and the concomitant rise of multicore chips and function-specific accelerators such as GPUs. (See Battling Evil: Dark Silicon and Nothing Is Forever for a few reflections and details.)
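
For readers who want the arithmetic behind that claim, here is a back-of-the-envelope version of the classic scaling argument, with κ the idealized linear scale factor per process generation (a textbook simplification, not a statement about any particular foundry):

```latex
% Dynamic switching power of a transistor: P = C V^2 f.
% Classic Dennard scaling shrinks dimensions, capacitance and voltage by
% 1/\kappa and raises frequency by \kappa, for scale factor \kappa > 1:
P' = \frac{C}{\kappa}\left(\frac{V}{\kappa}\right)^{2}(\kappa f)
   = \frac{C V^{2} f}{\kappa^{2}} = \frac{P}{\kappa^{2}}
% Transistor density rises by \kappa^2, so power density stays constant.
% Once leakage prevents further voltage scaling (V' = V), we get instead
P'' = \frac{C}{\kappa}\,V^{2}\,(\kappa f) = C V^{2} f = P,
% so power density would grow as \kappa^2 -- hence capped clock rates,
% multicore designs and "dark silicon."
```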

This radical shift breaks a virtuous cycle of mutually reinforcing benefits, one where software developers created feature-filled applications and stimulated demand for faster general-purpose processors, which then drove the creation of even more advanced applications. Simply put, we are the reluctant, wide-eyed denizens of a brave new world, one where the cherished and popular expectation that applications would run faster, without change, each time a new backward-compatible processor appeared no longer holds.

There is a technical way forward, but it means embracing application parallelism and retargeting software to each new generation of non-compatible, heterogeneous multicore processors. As over fifty years of research in parallel computing has shown, this is a path fraught with pain and difficulty. In turn, this has profound implications for the future of the silicon ecosystem and the nature and locus of continued innovation. It is the trillion-dollar inflection point, pivoting on chip performance, business models and global markets.
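
To illustrate what that retargeting means in practice, consider a minimal Python sketch; the workload and function names are hypothetical stand-ins for real scientific kernels. The serial loop once sped up automatically with each faster processor, while the parallel version must be explicitly restructured around independent units of work – and real applications rarely decompose this neatly.

```python
# A minimal sketch of the serial-to-parallel shift described above.
# The "simulation" is a hypothetical, embarrassingly parallel stand-in.
from concurrent.futures import ProcessPoolExecutor

def simulate_cell(cell_id: int) -> float:
    """Stand-in for an expensive, independent unit of scientific work."""
    total = 0.0
    for i in range(1, 200_000):
        total += (cell_id % 7 + 1) / (i * i)
    return total

if __name__ == "__main__":
    cells = range(64)
    # Serial version: benefits only from single-core speed, which has stalled.
    serial = [simulate_cell(c) for c in cells]
    # Parallel version: the same work spread across cores. This only works
    # because the cells are independent; coupling them reintroduces the
    # synchronization and communication pain alluded to above.
    with ProcessPoolExecutor() as pool:
        parallel = list(pool.map(simulate_cell, cells))
    assert serial == parallel
```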

Ecosystem Implications

The end of Dennard scaling and the emergence of heterogeneous multicore processors have coincided with another shift, the transition from an era dominated by PCs to one defined by smartphones and tablets. For much of the world, the smartphone is now the primary computing system, and in developing economies, the aspirational feature phone is the only computing device. More to the point, the majority of PC and smartphone users are not in the U.S., nor will they ever be again. Not surprisingly, these two phenomena – the end of Dennard scaling and the rise of smartphones and tablets – are deeply interrelated.

The PC ecosystem has long been driven by the phenomenal success of the x86 microprocessor family and successive generations of processors from Intel and AMD, both U.S. companies. Conversely, the smartphone and tablet ecosystem is largely based on the ARM microprocessor family and low power systems-on-a-chip (SoCs) developed by ARM licensees around the world. Beyond the ongoing competition between PC and smartphone vendors, this is a battle of business models, pitting a closed x86 ecosystem with captive silicon foundries against fabless semiconductor design firms that mix and match function-specific accelerators with ARM cores and use Taiwan Semiconductor Manufacturing Company (TSMC) as a foundry.

Enormous resources are being invested in both silicon ecosystems, with x86 designers seeking to "grow down" by reducing power and integrating functionality on chip to compete in the smartphone and tablet market. Conversely, ARM designers are seeking to "grow up" by increasing performance and adding features to compete with x86 designs in the server market. This battle royal is not winner take all. Rather, it is a competition to define the global nexus of innovation, with profound implications for global IT dominance in the next decade.

Global Competitiveness and BGST

It is against this backdrop that the BGST committee examined the technical consequences of the end of Dennard scaling, the cultural and economic challenges of parallelism, the possible shifts of capital and talent, and national and regional investments in IT research. The report's broad conclusions include the following: (a) the U.S. still leads in basic IT research, but the global gap continues to shrink, (b) IT investment strategies and challenges differ markedly across countries and regions, (c) single-chip performance is unlikely to continue as the predominant focus of innovation, (d) there are serious risks that growing international markets will diminish U.S. influence and (e) U.S. national security and defense readiness depend on continued rapid uptake and deployment of advanced IT.

I encourage you to download and read the complete BGST report for additional details and insights.

In IT and innovation circles, Andrew Grove, Gordon Moore's longtime colleague at Intel, is famous for another dictum, "only the paranoid survive." What he really said is more nuanced and relevant to the global innovation competition, "Success breeds complacency. Complacency breeds failure. Only the paranoid survive." It is worth pondering as one considers the interplay of science and technology, economics, government policy, and business models.

Acknowledgments

I would be remiss if I did not express my sincere thanks to the members of the BGST report committee: Cong Gao (Nottingham), Tai Ming Cheung (UCSD), John Crawford (Intel), Dieter Ernst (East-West Center), Mark Hill (Wisconsin), Steve Keckler (NVIDIA), David Liddle (U.S. Venture Partners), and Kathryn McKinley (Microsoft). They were thoughtful, dedicated and indefatigable. Finally, all of the committee members are deeply indebted to Bill Berry, Ethan Chiang and Lynette Millett from the National Academies.

September 11, 2012

Over the past year, I have been ruminating on the seismic shifts rocking public higher education in the United States. The compact between our society and its public research universities is being renegotiated in ways that are as deep as anything seen in the past forty years. State support continues to decline, accelerated by the economic downturn. In turn, a public backlash is building against rising tuition. There is an increasing need for lifelong learning and skills refresh, and new technologies are challenging historical modes of content delivery.

There are also new expectations for research discoveries to stimulate innovation, coupled with often-unrealistic hopes for short-term economic payoffs from basic research. Amidst all of this, globalization and rapid technology shifts are forcing us to address complex societal issues in new ways. Finally, the nature of scientific discovery itself is in flux, with high-performance computing and big data reshaping research in the physical, biological and social sciences, and even in the arts and humanities.

Late last year I decided it was time to return to academia, taking what I have learned at Microsoft back to the university and laboratory world to help address these challenges. Since that time, I have been working quietly to ensure a smooth transition within Microsoft and working with the leadership of several universities and institutions to define the roles I would take on this fall.

Reflecting on Change

Our personal and professional lives are defined by a series of inflection points – graduation, marriage, career choices – shaped by shifting technology and societal norms. Each of us also faces the age-old questions: How and where can we most make a difference in addressing the big issues and the complex problems surrounding them? How do we build on our experiences while also challenging ourselves to learn new things?

Before I came to Microsoft in late 2007, I spent nearly twenty-five years in academic roles at the University of Illinois and the University of North Carolina, first as a computer science professor, then as department head, supercomputing center director (NCSA), founder of a multidisciplinary research center (RENCI), and finally as a vice-chancellor. Where and how could I best build on that experience, plus the insights gained at Microsoft? Was the answer technology or policy centric, or some combination of both?

Iowa: Research and Education

After weighing several university offers, spanning big data, HPC and policy, I am delighted to be returning to the Midwest. In October, I will be joining the University of Iowa as Vice President for Research and Economic Development and holder of Iowa's inaugural University Computational Science and Bioinformatics Chair, with joint appointments in Computer Science, Electrical and Computer Engineering and Medicine. For details on this, see the University of Iowa announcement.

Many things attracted me to Iowa. First, it is one of this country's great public universities, spanning, as all great universities do, the arts and humanities, science and engineering and associated professional schools. The university is also home to the famed Iowa Writers' Workshop, something the aspiring writer in me prizes greatly. It also has a large and highly ranked health care system and a great medical school. (More on that research opportunity in a bit.)

Finally, the University of Iowa is anchored in the Big Ten, where I spent most of my academic career (Illinois) and time in graduate school (Purdue). Yes, it is football season in the U.S., but the Big Ten is more than an athletic conference. The Committee on Institutional Cooperation (CIC), which consists of the Big Ten schools plus the University of Chicago, is a collaborative vehicle for shaping national higher education policy and helping define the future on research issues ranging from institutional data repositories to intellectual property management.

In addition to my role in the university leadership team, as my new title suggests, I will also be delving into computational science and big data, helping the campus address research opportunities and health care futures. It is no secret that the rising cost of health care in the United States, coupled with an aging population and the not yet fully realized potential of personalized medicine, presents both challenges and major opportunities. In that spirit, I am delighted that the chair of the University of Washington Department of Anesthesiology, Dr. Debra Schwinn, is joining Iowa as the new Dean of the Carver College of Medicine. I am looking forward to working with her and the rest of the campus.

Whether identifying predictive patterns from clinical records (e.g., predictors of hospital readmission), correlating and extracting insights from massive amounts of new bioinformatics data, or leveraging new sensors and devices for disease and lifestyle monitoring, large-scale data analysis and machine learning are crucial. Likewise, multidisciplinary computational models of biological processes with predictive power are now realizable. Simply put, these are big data and technical computing problems par excellence.
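
As a concrete, if deliberately toy, illustration of that kind of analysis, here is a minimal Python sketch using scikit-learn. Everything in it – the features, the coefficients, the data – is synthetic and hypothetical; it shows the shape of a readmission-risk model, not any real clinical pipeline.

```python
# A minimal, entirely synthetic sketch of predictive modeling on
# clinical-style records. All features, coefficients and data below are
# hypothetical illustrations, not a description of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, number of prior admissions, length of stay.
X = np.column_stack([
    rng.normal(65, 12, n),   # age in years
    rng.poisson(1.5, n),     # prior admissions
    rng.gamma(2.0, 2.5, n),  # length of stay in days
])
# Synthetic "readmitted" label, loosely tied to the features.
logits = 0.03 * (X[:, 0] - 65) + 0.6 * X[:, 1] + 0.1 * X[:, 2] - 1.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

The real work, of course, lies in the data wrangling, privacy protections and clinical validation that such a sketch entirely elides.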

Finally, I will also be keeping my hand in high-performance computing and national policy. I will be spending time in Washington, DC, as a consultant, focusing on issues related to big data and exascale computing.

For me, all of this is very exciting. It is a new adventure and an opportunity to help define higher education in the 21st century. As Theodore Roosevelt said, it is an opportunity to "dare mighty things," working together.

Finally, Thanks to Microsoft

I want to express my deep thanks to Craig Mundie, Rick Rashid and a host of friends at Microsoft for a great five years. When I first joined Microsoft Research, it was to work on new technical approaches to cloud computing hardware, software and applications, drawing on ideas from technical computing. Seeing the scale and scope of truly large-scale infrastructure, far larger than our research high-performance computing systems, was amazing. That eXtreme Computing Group (XCG) activity later morphed into an equally exciting technology policy agenda, spanning topics as diverse as the application of clouds to scientific and engineering research, telecommunications, Internet governance and digital privacy.

My time at Microsoft has been a truly wonderful experience, working on important problems with talented and passionate people. I have made new friends, built new relationships and learned an incredible amount.

June 04, 2012

N.B. I also write for the Communications of the ACM (CACM). The following essay recently appeared on the CACM blog.

"Big data" is the meme of the day. Like all such phrases, it is a tabula rasa on which everyone writes their own version of the tale. What then, is big data? Superficially, it is data so large that it challenges one's standard methods for storage, processing and analysis. Like all adjectives, big likes in the eye of the beholder. If your traditional approach to data management has been based on spreadsheets, you may view gigabytes as big data. Conversely, if you are operating a social network site or a major search engine, big has an entirely different meaning, where a petabyte is often the smallest unit of measure worth discussing.

Although much of the Sturm und Drang surrounding big data has focused on the deluge of data from online consumer behavior – web site visits and cookies, social network interactions, search engine queries and online retailing – there are equally daunting, though different, big data problems in science and engineering. The scale and scope of data produced by a new generation of instruments, from domains as diverse as astronomy and high-energy physics through geosciences and engineering to biology and medicine, are challenging both our technical approaches and our social and economic structures. One need look no further than high-throughput genetic sequencers, the Large Hadron Collider, and whole sky astronomy surveys to see the challenges and the opportunities.

The challenges to technical approaches are self-evident; any shift by orders of magnitude inevitably brings change, and we need new tools and techniques to extract insights from the data tsunami. As the late Herb Simon once remarked, "…a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it."

The social and economic challenges are just as difficult, though much less discussed. Two issues merit special attention: the culture of sharing, within and across disciplines, and the economics of research data sustainability.

An old joke defines data mining as (insert possessive gesture here) data are mine. Sadly, this hoary saw is more often truthful than humorous. Historically, competitive research advantage accrued to those individuals and groups who first conducted the experiments and captured new data, for they could ask and then answer questions before others. The rise of large-scale, shared instrumentation is necessitating new models of sharing and collaboration across disciplines and research cultures. When many groups have access to the same data, advantage shifts to those who can ask and answer better questions.

The rising importance of data fusion across disciplines brings a deeper issue than simple sharing. Often, data proves to be most valuable in disciplines and groups other than the ones where it was first captured. Social network data illuminates the spread of disease; geosciences data guide urban planning; and atmospheric measurements reveal the health effects of effluents. All of these sometimes unexpected uses have timelines and utility extending far beyond the specific research projects and groups that produced the data. The question then becomes how we maintain this data and cross the cultural boundaries needed to make it accessible to others, particularly when the timescales for initial research data creation and later use by other disciplines may differ by decades.

Data Sustainability

The default reaction to the question of data sustainability is often to propose retaining everything. After all, device storage capacities continue to grow rapidly. However, like an iceberg, the raw cost of storage is simply the small and most obviously visible portion of total cost of data ownership. The majority of the cost lurks beneath – metadata management and creation, access systems and security, curation and coordination – and some entity must bear these costs for sustainability. More pointedly, rarely do the creators of the data have either the technical skills or the incentives to maintain data for long periods. At a higher level, research agencies and universities now face fiscal exigencies that further exacerbate the financial strain of research data sustainability.

Even in the most financially opportune times, not everything can or should be saved. The challenge is in creating economic and social models that extract a larger measure of research and economic value from the data, providing subsidies for data sustainability and further research. Equally importantly, such models could provide the backdrop for choosing which data to retain and which to discard. Lest this seem a Luddite perspective, remember that librarians and archivists have been triaging materials for thousands of years.

Simply put, we must find a new way forward that defines the principles and processes for protecting intellectual property while also creating appropriate cultural and economic rewards for data sharing and sustainability. This is a challenge facing not just individual disciplines, but society at large. We must work together to find a solution.

April 25, 2012

Today, in Cambridge, UK, we celebrated ten months of experiences and successes from the Cambridge white spaces trial, which was organized by a consortium of companies, including Microsoft. The event brought together telecom regulators from around the world, hardware, software and content providers, and interested parties to discuss the potential and the reality of dynamic spectrum management. The "white spaces" – the unused channels in the TV bands at each location – can be used for wireless communication as long as there is no interference with the primary (television broadcast) usage.

This white spaces spectrum is particularly valuable due to its signal propagation characteristics. Sometimes called "Super Wi-Fi," it can provide broad wireless coverage with relatively few access points: for rural areas that are often digitally disenfranchised, for machine-to-machine communication that can enable smart grids, smart cities and intelligent transportation systems, and for mobile data traffic offload in urban areas.

White spaces use has been approved by the U.S. Federal Communications Commission (FCC), and approval by the U.K. regulator (Ofcom) is near. Other regional and national bodies are also moving forward. In this spirit, today, we also announced formation of a new consortium to launch a new trial in Singapore. Details on all of this can be found on the Microsoft on the Issues blog, where I wrote about Broadband White Spaces – Ready to Go Global. For more details and news coverage, type the words, "white spaces" and "Cambridge" into your favorite search engine.