A few weeks ago I had an interesting conversation on the state of service science with analysts from an IT research organization who were preparing a report on the subject for their clients. Our discussion led me to reflect on the evolution of service science over the past several years. I think that we are hearing a bit less about it these days. But is that because we’ve become tired of the subject and moved on, or because the application of science and technology to services is now so well accepted that it’s no longer a topic of debate? I very much think it’s the latter.

March 24, 2015

I recently read a very interesting NY Times column, The Reality of Quantum Weirdness, by UC Berkeley professor Edward Frenkel. In the column, professor Frenkel discusses a very deep and important question: Is there such a thing as a true reality, or “is our belief in a definite, objective, observer-independent reality an illusion?” The article is about the strange world of quantum mechanics, a world that’s very different from our everyday life experiences. But part of my fascination with the subject is that I often ask myself similar questions when thinking about the equally mysterious world of highly complex emergent systems, that is, systems where the whole can at times be quite different from the sum of its parts.

Frenkel is an author, - Love and Math is his most recent book, - and a filmmaker in addition to being a mathematician. He uses the so-called Rashomon effect to illustrate his points, an effect named after Rashomon, a classic 1950 film by Japanese director Akira Kurosawa, one of the most prominent and influential directors of all time.

The movie is famous for its novel plot device. Near Kyoto, a samurai has been killed, but it’s not clear why or by whom. Four different characters tell widely different versions of the same event: the samurai’s wife, who says she was raped by a bandit, fainted, and then awoke to find her husband dead; a bandit who says he seduced the wife and then killed the samurai in an honorable duel; a woodcutter who says he witnessed the rape and murder but did not want to get involved; and the dead samurai, who, speaking through a medium, said that the shame of the events he witnessed drove him to kill himself.

The film is an exploration of multiple realities, where it’s not at all clear if there is a real truth, let alone what it might be. The Rashomon effect has thus come to stand for the contradictory interpretations of the same event by different people.

March 03, 2015

People have long argued about the future impact of technology. But, as AI is now seemingly everywhere, the concerns surrounding its long-term impact may well be in a class by themselves. Like no other technology, AI forces us to explore the boundaries between machines and humans. What will life be like in such an AI future?

Not surprisingly, considerable speculation surrounds this question. At one end we find books and articles exploring AI’s impact on jobs and the economy. Will AI turn out like other major innovations, e.g., steam power, electricity, cars, - highly disruptive in the near term, but ultimately beneficial to society? Or, as our smart machines are being increasingly applied to cognitive activities, will we see more radical economic and societal transformations? We don’t really know.

These concerns are not new. In a 1930 essay, for example, English economist John Maynard Keynes warned about the coming technological unemployment, a new societal disease whereby automation would outrun our ability to create new jobs.

September 24, 2014

A couple of weeks ago I attended MIT’s Second Machine Age Conference, an event inspired by the best-selling book of the same title published earlier this year by MIT’s Erik Brynjolfsson and Andy McAfee. The conference presented some of the leading-edge research that’s ushering in the emerging second machine age, and explored its impact on the economy and society. It was quite an interesting event. Let me discuss a few of the presentations as well as my overall impressions.

In his opening keynote, Brynjolfsson explained what the second machine age is all about. “Like steam power and electricity before it, the explosion of digitally enabled technologies is radically transforming the landscape of human endeavor. Astonishing progress in robotics, automation, and access to information presents major challenges for institutions from small businesses and communities to large corporations and governments, but it also creates opportunities to rethink how we live and work in profoundly positive ways.”

The machines of the industrial economy, - the first age, - made up for our physical limitations, - steam engines enhanced our physical power, railroads and cars helped us go faster, and airplanes gave us the ability to fly. For the most part, they complemented, rather than replaced, humans. The second age machines are now enhancing our cognitive powers, giving us the ability to process vast amounts of information and make ever more complex decisions. They’re being increasingly applied to activities requiring intelligence and cognitive capabilities that not long ago were viewed as the exclusive domain of humans. Will these second age machines complement or replace humans?

November 18, 2013

Advances in technology, big data and analytics hold the promise to significantly augment our judgement and expertise and help us make smarter, more effective decisions. But, as we contemplate these exciting innovations, it’s good to take a step back and ask ourselves a few basic questions: How do we make decisions in the first place? What goes on in our minds when we are making decisions, from the simplest to the most complex?

I heard a fascinating talk by Daniel Kahneman that directly addressed these questions at the IBM Cognitive Systems Colloquium which I attended last month. Kahneman is Professor of Psychology Emeritus at Princeton University and Senior Scholar at the Woodrow Wilson School. In 2002, he was awarded the Nobel Prize in Economics “for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty.”

The talk was based on his 2011 bestseller Thinking, Fast and Slow. The book explains the major discoveries by psychologists and cognitive scientists over the past several decades that have led to our current understanding of judgement and decision-making. In particular, it describes the pioneering work of Kahneman and his long time collaborator Amos Tversky, who died in 1996.

August 05, 2013

From the early days of the industry, supercomputers have been pushing the boundaries of IT, identifying the key barriers to overcome and experimenting with technologies and architectures that are then incorporated into the overall IT market a few years later. While we generally focus on their computational capabilities as measured in FLOPS, - Floating-point Operations Per Second, - supercomputers have been at the leading edge in a number of additional dimensions, including the storage and analysis of massive amounts of data; very high bandwidth networks; and highly realistic visualizations.

Through the 1960s, 1970s and 1980s, the fastest supercomputers were based on highly specialized, powerful technologies. But, by the late 1980s, these complex and expensive technologies ran out of gas and parallel computing became the only realistic alternative to scaling up performance.

Instead of building machines with a small number of very fast and expensive processors, the early parallel supercomputers ganged together 10s, 100s, and over time 1000s of much less powerful but far less expensive CMOS microprocessors, similar to the micros used in the rapidly growing personal computer and workstation industry. A similar evolution to microprocessor components and parallel architectures took place a few years later in the mainframes used in commercial applications.
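To make the architectural shift concrete, here is a minimal sketch, entirely illustrative and not drawn from any particular machine: a large computation is split into chunks, each chunk is handled by one of many modest workers, and the partial results are then combined, which is the essence of replacing one very fast processor with many slower ones. The worker count and the toy workload are assumptions for the example.

```python
# Illustrative sketch of the parallel idea: split one big computation across
# many modest "processors" (here, OS processes) and combine the partial results.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n = 10_000_000
    workers = 8  # stand-in for the 10s to 1000s of microprocessors in those machines
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

The hard part, as the next paragraph notes, was never the mechanics of splitting up work like this, but rethinking architectures, operating systems, tools and algorithms around it.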

The transition to parallel supercomputing was seismic in nature. Everything changed, from the underlying computer architecture, to the operating systems, programming tools, mathematical methods and applications. It took considerable research and experimentation to learn to effectively use these new kinds of machines. Moreover, there were widely different parallel architecture designs, some coming from universities and others from industry. It wasn’t clear at all which designs worked well for different kinds of applications and would thus be commercially viable.

The Department of Energy (DOE) national labs have long been among the world’s leading users of advanced supercomputers and played a leading role in the transition to parallel architectures. In 1983, the DOE’s Argonne National Lab established the Advanced Computing Research Facility (ACRF), an experimental parallel computing lab which brought together computer scientists, applied mathematicians and supercomputer users and vendors to learn how to best use this new generation of parallel machines.

This past May, Argonne convened a Symposium to mark the 30th anniversary of the ACRF. The Symposium looked both at the progress made in parallel computing over the past 30 years and the major trends for the future. I attended the Symposium and led a panel on The Impact of Parallel Computing on the World.

The Industrial Revolution led to dramatic improvements in productivity and standard of living over the past two hundred years. This is due largely to the machines we invented to make up for our physical limitations - the steam engines that enhanced our physical power, the railroads and cars that made up for our slow speed, and the airplanes that gave us the ability to fly.

Similarly, for the past several decades computers have been augmenting our intelligence and problem solving capabilities. And, according to IBM’s John Kelly and Steve Hamm, there is much more to come. In Smart Machines: IBM’s Watson and the Era of Cognitive Computing, IBM Research director John Kelly and writer and strategist Steve Hamm note that “We are at the dawn of a major shift in the evolution of technology. The changes that are coming over the next two decades will transform the way people live and work, just as the computing revolution has transformed the human landscape over the past half century. We call this the era of cognitive computing.”

John Kelly, senior VP and director of IBM Research, gave an overview of these four areas. Here is a similar version of his talk given a week earlier at the University of Melbourne. John explained that these four areas have the potential to transform the IT industry because of their exponential growth, a result of both continual improvements and disruptive innovations. “Exponential curves,” he said, “will either put you ahead of the competition or kill you. It’s one or the other.”

Over the next decade, nano-devices are expected to advance by three orders of magnitude, from a billion to a trillion transistors in a chip. We will be able to design sophisticated, powerful nano systems-on-a-chip that will be totally contained within such a trillion transistor nano device. To do so, we will have to shift from silicon to other materials, such as carbon-based ones. This requires disruptive technologies and innovations at all levels, including new materials, fabrication processes and design tools. The work is underway.

September 05, 2011

The IBM Personal Computer was announced on August 12, 1981. Having sold its PC business to Lenovo in 2005, IBM itself did not mark the occasion, and as far as I know, neither did Lenovo or the other companies still selling IBM PCs. But, the 30th anniversary of this important event in the history of computing was duly noted by IBM Fellow Mark Dean, - a member of the small team that designed the original machine, - in a nostalgic and thoughtful blog. At the beginning, Mark writes:

“It’s amazing to me to think that August 12 marks the 30th anniversary of the IBM Personal Computer. The announcement helped launch a phenomenon that changed the way we work, play and communicate. Little did we expect to create an industry that ultimately peaked at more than 300 million unit sales per year. I’m proud that I was one of a dozen IBM engineers who designed the first machine and was fortunate to have led subsequent IBM PC designs through the 1980s. It may be odd for me to say this, but I’m also proud IBM decided to leave the personal computer business in 2005, selling our PC division to Lenovo. While many in the tech industry questioned IBM’s decision to exit the business at the time, it’s now clear that our company was in the vanguard of the post-PC era.”

“I, personally, have moved beyond the PC as well. My primary computer now is a tablet. When I helped design the PC, I didn’t think I’d live long enough to witness its decline. But, while PCs will continue to be much-used devices, they’re no longer at the leading edge of computing. They’re going the way of the vacuum tube, typewriter, vinyl records, CRT and incandescent light bulbs.”

March 07, 2011

Tools have played a critical role in human evolution for a very, very long time. As the Tools entry in Wikipedia observes:

“Tools are the most important items that the ancient humans used to climb to the top of the food chain; by inventing tools, they were able to accomplish tasks that human bodies could not, such as using a spear or bow and arrow to kill prey, since their teeth were not sharp enough to pierce many animals' skins . . . The transition from stone to metal tools roughly coincided with the development of agriculture around the 4th millennium BC. Mechanical devices experienced a major expansion in their use in the Middle Ages with the systematic employment of new energy sources: water (waterwheels) and wind (windmills). . . Machine tools occasioned a surge in producing new tools in the industrial revolution. Advocates of nanotechnology expect a similar surge as tools become microscopic in size.”

Beyond extending our physical capabilities, tools have played a major role in the evolution of our brain and mental powers: “Using tools has been interpreted as a sign of intelligence, and it has been theorized that tool use may have stimulated certain aspects of human evolution - most notably the continued expansion of the human brain.”

“The news of the day often includes an item about some development in artificial intelligence: a machine that smiles, a program that can predict human tastes in mates or music, a robot that teaches foreign languages to children. This constant stream of stories suggests that machines are becoming smart and autonomous, a new form of life, and that we should think of them as fellow creatures instead of as tools.”

“. . . What bothers me most about this trend, however, is that by allowing artificial intelligence to reshape our concept of personhood, we are leaving ourselves open to the flipside: we think of people more and more as computers, just as we think of computers as people. . . When we think of computers as inert, passive tools instead of people, we are rewarded with a clearer, less ideological view of what is going on - with the machines and with ourselves.”

Jaron Lanier’s OpEd inspired me to reflect on the successes and failures of artificial intelligence (AI) over the past several decades, as well as on some of the essential differences between machine and human intelligence.

The term artificial intelligence is often used in quite different ways. At one end is the more applied kind of AI, which is essentially the application of advanced engineering to machines and systems in particularly clever ways that are inspired by and remind us of human intelligence. At the other end is what is sometimes called strong AI, which aims to develop machines that match or exceed human intelligence and cognitive abilities like reasoning, planning, learning, vision and natural language understanding.

August 23, 2010

Recently, some colleagues were talking about the upcoming LinuxCon 2010 in Boston - “The Linux Foundation's annual technical conference that provides an unmatched collaboration and education space for all matters Linux.” Hearing about this conference brought me back to 1999, when we started a number of studies that culminated in the announcement of the new IBM Linux initiative in January of 2000.

By the summer of 1999, Linux was picking up steam in the marketplace, especially in areas where IBM was very involved, including Internet infrastructure and supercomputing. At the time, a number of research institutions and leading edge companies were already using clusters of Intel processors running Linux as a way of building relatively inexpensive and increasingly powerful supercomputers, as well as highly scalable web servers, distributed file and print servers, network firewalls, and other Internet infrastructure applications.

We commissioned a couple of studies, one focused on the use of Linux in supercomputing, and the other on Linux as a high-volume platform for Internet applications. Both studies strongly recommended that IBM embrace Linux across its product lines, work closely with the open Linux community as a partner in its development, and establish an organization to coordinate Linux activities across the company.

I was given responsibility for our new Linux strategy and organization. A few weeks after we announced the initiative in January of 2000, I gave a keynote presentation at the Linux World conference in New York City. I used this opportunity to explain IBM’s decision to embrace Linux across all its products and services. Around the same time, I also discussed IBM's Linux strategy in an interview with Charlie Rose - A Conversation about Linux.

April 05, 2010

Isaac Newton laid down the foundations for what we now call classical mechanics with the publication of his Principia Mathematica in 1687, where his Laws of Motion were first articulated. Ever since, our scientific understanding of the world around us has been based on classical mechanics, - “a set of physical laws governing and mathematically describing the motion of bodies and aggregates of bodies geometrically distributed within a certain boundary under the action of a system of forces.”

Classical mechanics works exceptionally well for describing the behavior of objects that are more or less observable to the naked eye. It accurately predicts the motion of planets as well as the flight of a baseball. It formed the scientific basis for the technology and engineering underlying the Industrial Revolution.

The elegant mathematical models used in classical mechanics depict a world in which objects exhibit deterministic behaviors. The same objects, subject to the same forces, will always yield the same results. These models make perfect predictions within the accuracy of human-scale measurements.
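As a toy illustration of that determinism, and my own example rather than anything from the original post, the short sketch below integrates the motion of a projectile, the baseball mentioned above, under gravity alone; run with the same initial conditions and the same forces, it returns exactly the same trajectory every time. The launch speed and angle are arbitrary illustrative values.

```python
# Deterministic projectile motion under gravity: identical inputs always
# produce identical outputs. Air resistance ignored; values are illustrative.
import math

G = 9.81  # gravitational acceleration, m/s^2

def trajectory(v0, angle_deg, dt=0.01):
    """Return (range in m, flight time in s) for a projectile launched from the ground."""
    vx = v0 * math.cos(math.radians(angle_deg))
    vy = v0 * math.sin(math.radians(angle_deg))
    x = y = t = 0.0
    while True:
        t += dt
        x += vx * dt
        vy -= G * dt
        y += vy * dt
        if y <= 0:
            return x, t

# Same object, same forces, same result - run it twice and compare.
print(trajectory(40.0, 35.0))
print(trajectory(40.0, 35.0))
```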

But, this stable world that could be perfectly described given enough information and scientific knowledge began to fall apart in the early 20th century. Classical mechanics could not explain the counter-intuitive and seemingly absurd behavior of energy and matter at atomic and subatomic scales. Neither could it explain the behavior of bodies traveling near the speed of light or the vast scales of the universe.

February 15, 2010

Supercomputing has been a major part of my education and career, from the late 1960s when I was doing atomic and molecular calculations as a physics doctoral student at the University of Chicago, to the early 1990s when I was general manager of IBM's SP family of parallel supercomputers.

The performance advances of supercomputers in these past decades have been remarkable. The machines I used as a student in the 1960s probably had a peak performance of a few million calculations per second, or megaflops. Gigaflops (billions) peak speeds were achieved in 1985, teraflops (trillions) in 1997, and petaflops (a 1 followed by fifteen zeros) in 2008.

The supercomputing community is now aiming for exascale computing, - 1,000,000,000,000,000,000 calculations per second. The pursuit of exascale-class systems was a hot topic at the recent SC09 supercomputing conference.
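These milestones are easier to compare as powers of ten. The short sketch below is just my own back-of-the-envelope arithmetic, with the dates taken from the paragraphs above; it prints each tier and the overall speedup from the megaflops machines of the 1960s to an exaflops system.

```python
# FLOPS milestones as powers of ten; dates are those mentioned in the text above.
milestones = [
    ("megaflops", 1e6,  "late 1960s"),
    ("gigaflops", 1e9,  "1985"),
    ("teraflops", 1e12, "1997"),
    ("petaflops", 1e15, "2008"),
    ("exaflops",  1e18, "the current goal"),
]

for name, flops, when in milestones:
    print(f"{name:10s} {flops:.0e} calculations per second  ({when})")

# Overall factor from a 1960s megaflops machine to an exascale system:
print(f"speedup: {1e18 / 1e6:.0e}x")  # a factor of one trillion
```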

In the quest for the fastest machines, supercomputers have always been at the leading edge of advances in IT, identifying the key barriers to overcome and experimenting with technologies and architectures that generally then appear in more commercial products a few years later.

The next day, there were break-out discussions in six key areas relevant to building a smarter city: transportation, education, public safety, energy and utilities, government services and healthcare. I was co-moderator of the healthcare session, along with Dan Pelino, IBM’s General Manager of Healthcare and Life Sciences. Our panel of experts included Dr. Cortese; Chris Coburn - Executive Director of Innovations at the Cleveland Clinic; Dr. Ronald Paulus - Chief Technology and Innovation Officer at the Geisinger Health System; and Dr. Armando Ahued Ortega - Health Secretary of Mexico City. Let me summarize their practical and concrete remarks.

A petaflop is a million billion calculations per second, that is, a 1 followed by fifteen zeros. That is how many calculations per second Roadrunner can perform. When talking about petaflops, the numbers are so large that it is hard to comprehend what they mean. We are almost into numbers of astronomical dimensions.

The IBM press release used a few analogies to describe the power of Roadrunner, such as "The combined computing power of 100,000 of today's fastest laptop computers"; and, "It would take the entire population of the earth, - about six billion - each of us working a handheld calculator at the rate of one second per calculation, more than 46 years to do what Roadrunner can do in one day."

The previous major milestone for supercomputers was the teraflop - which is a 1 followed by twelve zeros. Crossing the teraflops barrier was a huge deal for the supercomputing community when we did it in the late 90s. And here we are - only ten years later - with a machine which is 1000 times more powerful than a teraflop machine.

February 22, 2006

One of the major forces for innovation in computing has been the acceleration of application run-times from minutes or hours to seconds or less, thus enabling them to become interactive. The whole nature of an application changes qualitatively and its value improves significantly when it is able to quickly respond to our every action and request.

I have observed this transition from long-running to interactive applications several times in my career. The advent of time-sharing and other technologies in the '60s and '70s ushered in transaction processing applications like airline reservation systems and ATMs. Then personal computers introduced us to spreadsheets, word processing and many other personal productivity applications. More recently the Internet, coupled with broadband networks, has brought us access to the Web, as well as e-business, search, blogging and new applications of all sorts.

June 22, 2005

The Top500 list of the world's fastest supercomputers was just released. This list is compiled and published twice a year by a group of independent scientists in the US and Europe. IBM was very prominent in the Top500 list, with 6 of the top 10 systems and 259 of the 500 systems on the list. We are really proud of this achievement.

Why is supercomputing so important despite its being a relatively small portion of the IT industry?

As it happens, this question was dealt with directly in a report to the President of the United States released last week by PITAC, the President's Information Technology Advisory Committee, titled Computational Science: Ensuring America's Competitiveness. To quote from the PITAC report, "Computational science is now indispensable to the solution of complex problems in every sector, from traditional science and engineering domains to such key areas as national security, public health, and economic innovation." The PITAC report concludes that while the potential benefits of supercomputing and its applications are enormous, the challenges are huge, and it makes specific recommendations to Federal government R&D agencies and universities.

I talked (in Spanish) about some of the major forces driving innovation, including the Internet and standards in general, as well as new applications and products based on embedded IT in everything from consumer electronics to medical equipment to automobiles; and concluded with a discussion of the major societal changes brought about by collaborative innovation. Professor Valero centered his talk on advances in supercomputing technology and architecture, and he talked at length about the Barcelona Supercomputing Center, which he heads, and MareNostrum, the new supercomputer being installed at the Center.