
Maintaining relevance

Large companies often find it difficult to innovate, but not for lack of trying. Most major corporations have regular processes by which new product or product-line ideas are vetted. Unfortunately, such processes are designed to select incremental improvements to existing products and services rather than to introduce radically new offerings. One significant reason is that when senior management considers a new idea, they look at its projected revenue stream and compare it, implicitly, with the revenue streams of existing (successful) products. Proposed products whose revenue streams are not immediately comparable to those of existing product lines are often not approved.

This is the trap of success. A company learns how to do something well, and then gets stuck in that rut. When the market changes, few companies are able to systematically get out of that rut and regain their past levels of success in new areas. IBM's reliance on mainframe computing, Xerox's on its xerography patents, and Microsoft's long neglect of the Internet are examples; these companies actually managed to survive the painful transitions, but many others did not.

This analogy applies to research as well.

In particular, I am thinking of a comment made by Abdur Chowdhury about SIGIR 2010:

Interesting, no research on real-time search, the largest new form of search in the last decade. #sigir

This prompted a discussion about the relative rigor of evaluation at SIGIR and at other conferences, about the lack of test collections, and so on.

My sense, however, is that the problem is deeper than that. The SIGIR community is trapped by a very successful paradigm: people can do complex work, the quality of that work can be measured, and progress can be made. (Of course, sometimes progress isn't cumulative, but that's a different tangent.)

The bigger problem is that a successful paradigm stifles innovation as much as a large revenue stream. The system works; why change it?

One reason to change is to adapt to changes in the real world before unintentionally losing touch with it. The Hypertext community, for example, ignored the web (for a host of good reasons) and was nearly killed by it.

To be successful, corporations need to incubate new ideas to give them time to grow and mature before deciding whether they are truly useful. The opportunity costs of nurturing a few “startup” level ideas are much lower than the opportunity cost of not being able to react to the market in a timely manner.

The opportunity costs for an academic discipline can be significant as well. Communities such as SIGIR need to intentionally broaden their scope to avoid being trapped in a local maximum, no matter how successful it seems at the time. We need to balance innovation (with its attendant imperfections) with more methodologically mature work.

While orthodoxy has its place, if left unchecked it can limit the long-term viability of the endeavor. We need to establish processes that foster alternative approaches and alternative methodologies, and that recognize promising (if uncertain) steps in interesting directions. We should intentionally try things we haven't tried before, and not be afraid that the initial attempts are incomplete, seem inconsistent with established practices, or are even plain wrong. The signal (in Chen and Konstan's sense) we send to the community should make it clear that while we value rigor and sound methodology, we also value novelty and innovation in their less well-formed manifestations.

If SIGIR is concerned about its long-term relevance, it should look to start-ups, rather than to established corporations, for inspiration about process.

Your post is an example of an issue that is occurring in several fields, as witnessed by publications in multiple conferences.

Practice is pushing research!

Real-time search is one example. Another is sponsored search (aka keyword advertising), which was ignored for years by academia, long after it was established in practice. Now there is a steady stream of sponsored search papers. One could make similar comments about collaborative and social media systems.

So, we see implementations of new systems or paradigms. They become established. Then, we see academic papers on them.

Naturally, there are some exceptions (e.g., Twitter research caught on pretty fast). But even with Twitter, it was implemented first, and now academics are studying it.

Jim, you're right: practice can push research when practice produces data that research can't produce on its own. That seems appropriate to me. What I don't want to see is research rejecting ideas — whether born of research or of practice — that don't conform to the dominant paradigm. That sort of organizational methodological orthodoxy doesn't help the organization in the long run.

[…] between research and reality. The flip side of the coin: SIGIR research has fallen into a methodological rut; the conference is “trapped by a very successful paradigm […where] people can do complex […]