How Bank of America Turned Branches into Service-Development Laboratories

In this Harvard Business Review excerpt, HBS professor Stefan Thomke describes how Bank of America applies a systematic R&D process to create services.

by Stefan Thomke

Editor's Note: Companies often use rigorous R&D processes to guide new product development, but are much less scientific when it comes to creating services. Not Bank of America, which has turned Atlanta-area branches into consumer laboratories. Here's how BofA set up its pathbreaking experiments.

Bank of America's Innovation & Development Team quickly realized that it would be very difficult to conduct a diverse array of experiments within the confines of a traditionally designed bank branch. Experiments require frequent changes in practices and processes, which neither the branch employees nor the physical facilities were prepared for. So the team decided to reconfigure the twenty Atlanta branches into three alternative models: Five branches were redesigned as "express centers," efficient, modernistic buildings where consumers could quickly perform routine transactions such as deposits and withdrawals. Five were turned into "financial centers," spacious, relaxed outlets where customers would have access to the trained staff and advanced technologies required for sophisticated services such as stock trading and portfolio management. The remaining ten branches were configured as "traditional centers," familiar-looking branches that provided conventional banking services, though often supported by new technologies and redesigned processes.

The group unveiled its first redesigned branch—a financial center—in the posh Buckhead section of Atlanta in the fall of 2000. A customer entering the new center was immediately greeted at the door by a host—an idea borrowed from Wal-Mart and other retail stores. At freestanding kiosks, associates stood ready to help the customer open accounts, set up loans, retrieve copies of old checks, or even buy and sell stocks and mutual funds. An "investment bar" offered personal computers where the customer could do her banking, check her investment portfolio, or just surf the Internet. There were comfortable couches, where she could relax, sip free coffee, and read financial magazines and other investment literature. And if she had to wait for a teller, she could pass the few minutes in line watching television news monitors or electronic stock tickers. What that customer probably wouldn't have realized was that all of these new services were actually discrete experiments, and her reactions to them were being carefully monitored and measured.


To select and execute the experiments in the test branches, the I&D Team followed a detailed five-step process. The critical first step was coming up with ideas for possible experiments and then assessing and prioritizing them. Ideas were submitted by team members and by branch staff and were often inspired by reviews of past customer-satisfaction studies and other market research. Every potential experiment was entered into an "idea portfolio," a spreadsheet that described the experiment, the process or problem it addressed, the customer segments it targeted, and its status. The team categorized each experiment as high, medium, or low priority, based primarily on its projected impact on customers but also taking into account its fit with the bank's strategy and goals and its funding requirements. In some cases, focus groups were conducted to provide a rough sense of an idea's likely effect on customers. By May 2002, more than 200 new ideas had been generated, and forty of them had been launched as formal experiments.
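The idea portfolio described above is essentially a prioritized record store. The sketch below is purely illustrative: the field names, the 1-to-5 scoring scale, and the weighting that makes customer impact dominate are all assumptions, not the bank's actual spreadsheet format.

```python
from dataclasses import dataclass

@dataclass
class ExperimentIdea:
    """One row of a hypothetical idea portfolio (fields assumed, not BofA's)."""
    description: str
    problem_addressed: str
    customer_segments: list
    status: str = "proposed"
    customer_impact: int = 0   # assumed 1 (low) to 5 (high) scale
    strategic_fit: int = 0     # same assumed scale
    funding_needed: int = 0    # dollars

    @property
    def priority(self) -> str:
        # Assumed weighting: customer impact counts double,
        # reflecting the article's "based primarily on its
        # projected impact on customers."
        score = 2 * self.customer_impact + self.strategic_fit
        if score >= 12:
            return "high"
        if score >= 7:
            return "medium"
        return "low"

idea = ExperimentIdea(
    description="TV monitors above tellers",
    problem_addressed="perceived wait times",
    customer_segments=["retail"],
    customer_impact=5,
    strategic_fit=3,
)
print(idea.priority)  # high (score = 2*5 + 3 = 13)
```

A spreadsheet with a formula column would serve the same purpose; the point is only that each candidate experiment carries enough structured data to be ranked consistently.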

Once an idea was given a green light, the actual experiment had to be designed. The I&D Team wanted to perform as many tests as possible, so it strove to plan each experiment quickly. To aid in this effort, the group created a prototype branch in the bank's Charlotte headquarters where team members could rehearse the steps involved in an experiment and work out any process problems before going live with customers. The team would, for example, time each activity required in processing a particular transaction. When an experiment required the involvement of a specialist—a mortgage underwriter, say—the team would enlist an actual specialist from the bank's staff and have him or her perform the required task. By the time an experiment was rolled out in one of the Atlanta branches, most of the kinks had been worked out. The use of the prototype center reflects an important tenet of service experiments: design and production problems should be worked out off-line, in a lab setting without customers, before the service delivery is tested in a live environment.

An experiment is only as good as the learning it produces. Through hundreds of years of experience in the sciences, and decades in commercial product development, researchers have discovered a lot about how to design experiments to maximize learning. We know, for example, that an effective experiment has to isolate the particular factors being investigated; that it must faithfully replicate the real-world situation it's testing; that it has to be conducted efficiently, at a reasonable cost; and that its results have to be accurately measured and used, in turn, to refine its design. These are always complex challenges, and, as Bank of America found out, many of them become further complicated when experiments are moved out of a laboratory and into a bank branch filled with real employees serving real customers in real time. To its credit, the I&D Team thought carefully about ways to increase the learning produced by its experiments, with a particular focus on enhancing the reliability of the tests' results and the accuracy of their measurement. As Milton Jones, one of the bank's group presidents, constantly reminded the team: "At the end of the day, the most critical aspect of experimentation and learning is measurement. Measurements will defend you if done right; otherwise they will inhibit you."

Reducing Perceived Wait Times


The transaction zone media (TZM) experiment provides a useful example of how Bank of America's innovation process works. The experiment had its origins in an earlier study in which market researchers "intercepted" some 1,000 customers standing in bank lines and asked them a series of questions. The study revealed that after a person stands in line for about three minutes, a wide gap opens between actual and perceived wait times. A two-minute wait, for example, usually feels like a two-minute wait, but a five-minute wait may feel like a ten-minute wait. Two subsequent focus groups with sales associates and a formal analysis by the Gallup Organization provided further corroboration of this effect. When the I&D Team reviewed the data, they realized there might be opportunities to reduce perceived wait times without reducing actual wait times. Psychological studies have revealed, after all, that if you distract a person from a boring chore, time seems to pass much faster. So the team came up with a hypothesis to test: If you entertain people in line by putting television monitors in the "transaction zone"—above the row of tellers in a branch lobby—you will reduce perceived wait times by at least 15 percent.

Because long waits have a direct impact on customer satisfaction, the team gave the transaction zone media experiment a high priority. In the summer of 2001, the bank installed monitors set to the Atlanta-based news station CNN over the teller booths in one traditional center. Another traditional center serving a similar clientele was used as a control branch. After a week's washout period, the team began to carefully measure actual and perceived wait times at the two branches. The results were significant. The degree of overestimation of wait times dropped from 32 percent to 15 percent at the test branch. During the same period, the control branch actually saw an increase in overestimated wait times, from 15 percent to 26 percent.
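The overestimation figures above fall out of a simple comparison between measured and reported waits. A minimal sketch of that computation follows; the sample wait times are invented for illustration, not taken from the bank's data.

```python
def overestimation_pct(actual_minutes, perceived_minutes):
    """Average percentage by which customers overestimate their wait."""
    ratios = [(p - a) / a for a, p in zip(actual_minutes, perceived_minutes)]
    return 100 * sum(ratios) / len(ratios)

# Invented sample: each customer's measured wait vs. their reported estimate.
actual = [4.0, 5.0, 6.0]
perceived = [4.6, 5.75, 6.9]
print(round(overestimation_pct(actual, perceived)))  # 15
```

Measuring the same statistic at a control branch over the same period, as the team did, is what lets the drop at the test branch be attributed to the monitors rather than to seasonal or traffic effects.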

Although these were encouraging results, the team still had to prove to senior management that the installation of television monitors would ultimately boost the bank's bottom line. The team knew, from prior studies, that improvements in the bank's customer-satisfaction index (based on a standard thirty-question survey) correlated with increases in future sales. Every one-point improvement in the index added $1.40 in annual revenue per household, mainly through increased customer purchases and retention. So a branch with a customer base of 10,000 households would increase its annual revenues by $28,000 if the index increased by just two points. The team carried out a statistical analysis of the test branch's results and projected that the reductions in perceived wait times would translate into a 5.9-point increase in overall banking-center customer satisfaction.
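The revenue arithmetic in this paragraph is easy to verify. A quick sketch using the article's own figures ($1.40 per household per index point per year):

```python
REVENUE_PER_POINT_PER_HOUSEHOLD = 1.40  # dollars per year, per the article

def annual_revenue_lift(households, satisfaction_point_gain):
    """Projected added annual revenue for a branch."""
    return REVENUE_PER_POINT_PER_HOUSEHOLD * satisfaction_point_gain * households

# The article's example: 10,000 households, 2-point gain.
print(round(annual_revenue_lift(10_000, 2)))    # 28000

# The TZM projection: a 5.9-point gain for the same customer base.
print(round(annual_revenue_lift(10_000, 5.9)))  # 82600
```

So the projected 5.9-point lift is worth roughly $82,600 a year to a 10,000-household branch, which frames the cost analysis that follows.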

While the benefits were substantial, the team had to consider whether they outweighed the costs of buying and installing the monitors. The team determined that it would cost some $22,000 to upgrade a branch in the Atlanta innovation market but that, for a national rollout, economies of scale would bring the cost down to about $10,000 per site. Any branch with more than a few thousand households in its customer base would therefore be able to recoup the up-front cost in less than a year. Encouraged by the program's apparent economic viability, the team recently launched a second phase of the TZM experiment, in which it is measuring the impact of more varied television programming, different sound levels, and even advertising.
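The payback claim can be checked with the same figures: the $1.40-per-point revenue number and the projected 5.9-point satisfaction gain carry over from the preceding paragraphs, and the $10,000 rollout cost is the article's national-scale estimate.

```python
REVENUE_PER_POINT_PER_HOUSEHOLD = 1.40   # dollars per household per year
SATISFACTION_GAIN = 5.9                  # projected points from the TZM rollout
ROLLOUT_COST = 10_000                    # per-site cost at national scale

def payback_years(households):
    """Years to recoup the monitor installation from added revenue."""
    annual_lift = REVENUE_PER_POINT_PER_HOUSEHOLD * SATISFACTION_GAIN * households
    return ROLLOUT_COST / annual_lift

# A branch with 3,000 households recoups the cost well inside a year.
print(round(payback_years(3_000), 2))  # 0.4

# Break-even customer base for a one-year payback.
print(round(ROLLOUT_COST / (REVENUE_PER_POINT_PER_HOUSEHOLD * SATISFACTION_GAIN)))  # 1211
```

At roughly $8.26 of added annual revenue per household, any branch above about 1,200 households breaks even within a year, which is consistent with the article's "more than a few thousand households" threshold.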