Dr. Esko Kurvinen, Service Design Lead at Elisa Oyj, a large IT service and telecommunications operator in Finland, gave a talk today on the evolution of user-centered design practice for our class, Research Project in Human-Computer Interaction.

The point of departure for the talk was a tension between “classic” user-centered design and modern agile and lean practices used in start-ups, such as “build-measure-learn”. The former assumes that thorough user research precedes any design attempt; the latter emphasizes that one cannot understand the problem without first trying to solve it.

One of the first discussion points was the observation that the two have significant similarities. Both the traditional and the agile approach are built around a very similar iterative cycle with similar steps, so after a few iterations the two processes converge. The main difference lies in where the design process is entered: through user research or through prototyping.

Our discussion concluded that neither approach is universally better or worse; the choice depends on multiple factors. For instance, if an idea is radical and no relevant data exists, researching the users first may be more reasonable. On the other hand, if such data is already available, extensive user research may not be necessary.

The second main point was that significant changes in the availability of data and of experimentation platforms have lowered the risks in user-centered design. In the past, user data was often less available and harder to collect. Launching a new product therefore meant either accepting a bigger risk or lowering it through costly user research. This kept the threshold to market fairly high, and it generally made more sense to do thorough research before publicly releasing a product. Today, however, data is readily available and easily collectable, which lowers the threshold to market and opens the door to many competitors. This favors the fast-paced build-measure-learn cycle: if a company puts a lot of effort and time into research while competitors are doing faster releases, it can lose its competitive advantage. The faster cycles let competitors gain popularity and visibility, which can lead to success more often than a technically superior product does.

However, because agile methods leverage particular technologies and platforms, they may “lock” the company into a particular niche that can become over-competed. Knowing when to explore with user research and when to exploit with agile methods is therefore a key strategic decision.

Organizational practices have also changed to support flexible risk-taking. The two main approaches can be understood through the “matrix” versus “silos” paradigms. In a matrix organization, every project (a column) shares resources (rows) such as designers, developers, and researchers with other projects; resources are borrowed for a short time, and the project is then handed to the next resource. In a “silos” organization (columns only), each project has its own, though perhaps more limited, set of resources. This makes projects less dependent on outside factors and resources and enables quick, lean and agile approaches to design, as in the build-measure-learn loop.

A bridge, or collaboration, between “understanding” and “action” is further supported by new and more efficient ways of working: more self-sufficient and autonomous teams; communication tools that better support collaboration between developers, designers, and business owners; new UI design and prototyping tools that make design far more efficient; user analytics solutions and research tools that yield deeper and more reliable customer insights; and platform standards that help teams realize their full potential.

What has also changed is that user research is no longer expected to yield direct design implications. The audiences and customers of user research have become design collaborators, likely to come up with design implications themselves if the data are real and captivating. With evidence and data in more actionable formats, and with lower experimentation thresholds, meaningful adjustments can be made more quickly.

All these factors contribute to why the “build-measure-learn” approach has become more popular among startups and even large companies. Larger companies, for their part, generally have bigger resource pools available, which means that negative results do not necessarily have a big impact. In their case, a “bias for action” serves mostly to keep up with the competition and, ideally, to stay ahead. As Esko pointed out, “the idea is not to minimize the risks, but to find a balance between risks and potential”. The gains from getting ahead of competitors and understanding the shortcomings of the developed solution early can offset the financial losses and be treated as an investment.