This blog entry focuses on the need for more and better software to reap the benefits of the legal information treasures available. As you’ll see, this turns out to be more complex than one may think.

For commercial software developers, it is surprisingly hard to stay radically innovative, especially when they are successful. To start with, software development itself is a risky undertaking. Despite five decades of research into managing the development process, projects are frequently late, over budget, and much less impressive than originally envisioned. IBM once famously bet the company on a new computer platform, but development of the associated operating system fell so far behind schedule that it threatened IBM's existence. Management was tempted to throw ever more human resources at the problem, only to discover that this in itself causes further delays – leaving us with the useful term "mythical man-month".

But the difficulty of envisioning hurdles in a complex software engineering project is not the only source of risk for innovative software developers. Successful developers may pride themselves on a large and growing user base. Such success, however, creates its own unintended constraints.

Customers dislike rapid change in the software they use: they have to relearn how to operate it, may have to expend effort converting data to new formats, and may need to adjust the preferences and customization options they have set. This gets worse if the successful software is the platform for a thriving ecosystem of other developers and service providers. Any severe change in the underlying platform means that those living in it have to adapt their code. And each time a customer has to invest time in relearning a software product, competing providers get a chance to lure that customer away. This prompts software developers, especially very successful ones, to be relatively conservative in their plans for updates and upgrades. They don't want to undermine their market success, and thus will be tempted to opt for gradual rather than radical innovation when designing the next version of their successful wares.

We have seen it over and over again: Microsoft's Word, PowerPoint and Excel have gone through numerous iterations over the past decades, but the basic elements of the user experience have changed relatively little. Similarly, concern for the legacy code of third-party developers has been a key constraint on Microsoft's Windows product team. Don't break anything – even if it is utterly ancient, inefficient and brittle – as long as it works for the customers. That's the understandable, but frustrating, mantra.

Or think of Google: the search engine's user interface hasn't seen any major changes since its inception more than a decade ago. Only Apple, it seems, has been getting away with radical innovation that breaks things and forces users to relearn, to convert data, and to expend time. That is the advantage of a small but fervently loyal user base. But even Apple has recently seen the need to take a breather from radical change with Snow Leopard.

And in the legal information context, think of Westlaw and Lexis/Nexis. Despite direct competition with one another, when was the last time we saw a truly radical innovation coming from either of these two companies?

Radical innovation requires the will to risk alienating users. As companies grow and pay attention to shareholder expectations, that will-to-risk often wanes. With radical innovation in the marketplace, the challenge lies along the time axis: if one is very successful with a radically new product at time T, it is hard to throw that product away and risk radically reinventing it at time T+1.

On a macro level, we combat this conservative tendency against radical change by providing incentives for innovative entrepreneurs to develop and market competing offerings. If enough customers are unhappy with Excel, perhaps entrepreneurs with radically new and improved concepts of how to crunch and manage numbers in a structured way will seize the opportunity and develop a new killer app that they'll pit against Excel. That's enormously risky, but also offers the potential of very steep rewards. Angel investors and venture capitalists thrive on providing the lubricant (in the form of financial resources) for such high-risk, high-reward propositions. They flourish on the improbable. What they don't like are "small ideas." (It happened to me, too, when I pitched innovative ideas to VCs; they thought my ideas had a very high likelihood of success, but not enough leverage to reap massive returns. Obviously I was dismayed, but they were right: that is what we need if we want to incentivize radical innovation.)

This also implies, however, that for venture capital to work, markets need to be large enough to offer high rewards for risky ventures. If the market is not large enough, venture capital may not be available to a sufficient number of radical innovators to keep pushing the limit, and existing providers may survive for a long time on incremental innovation alone. Perhaps that is why Westlaw and Lexis are still around, even though they could fight the tendency toward piecemeal development if they wanted to.

Other large corporations, realizing the bias towards incremental innovation, have repeatedly resorted to radical steps to remedy the problem. They have established skunk works, departments that are largely disconnected from the rest of the company, freeing the members to try revolutionary rather than evolutionary solutions. Sometimes companies acquire a group of radically innovative engineers from the outside, to inject some fresh thinking into internal development processes that may have become too stale.

Peer production models, almost always based on an open source foundation, are not dependent on market success. (On the drivers of peer production see Yochai Benkler’s “The Wealth of Networks”). They are not profit driven, and thus may put less pressure on the developers to abstain from radical change. Because Firefox does not have to win in the marketplace, its developers can, at least in theory, be bolder than their commercial counterparts.

Unfortunately, open-source, peer-produced software may also lose its appetite for radical innovation over time – not because of monetary incentives, but because of the collaborative structures used in the design process. If a large number of volunteer bug reporters, testers, and coders with vastly differing values and preferences work on a joint project, development will likely converge on a common denominator of what needs to be done, and thus be inherently gradual and evolutionary rather than radical. Of course, a majority of participants may at rare moments get together and agree on a revolution – much like those in what was then a British colony in 1776. But that is the brilliant exception to a rather boring rule.

Indecisiveness stemming from too small a common ground, however, is not the only danger. At the other end of the spectrum, communities and groups with too many ties among their members develop a mental alignment, or "groupthink," that equally stifles radical innovation. Northwestern University professor Brian Uzzi has written eloquently about this problem. Finding the sweet spot between the two extremes is what's necessary, but in the absence of an outside mechanism that balance is difficult for open source peer-producing groups to achieve.

If we would like to remedy this situation, how could we offer incentives to peer producing communities to more often give radical rather than incremental innovation a try? What could be the mechanism that takes on the role of venture capitalists and skunk works in the peer production context?

It surely isn’t telling dissenters with a radically new idea to “fork out” of a project. That’s like asking a commercial design group to leave the company and try on their own, but without providing them with enough resources or incentives. Not a good idea if we want to make radical innovation – the experimentation with revolutionary rather than incremental ideas – easier, not harder.

But what is the venture capital/skunk works equivalent in the peer-producing world?

A few thoughts come to mind, but I invite you to add your ideas, because I may not be thinking radically enough.

(1) Users: Users, from large to small, could volunteer, perhaps through a website, to dedicate some modicum of their time to advancing an open source project not by contributing to its design, but by committing to be early adopters of more radical design solutions. One may imagine a website that links users (including law firms) willing to dedicate some "risk" to such riskier open source peer-produced projects, perhaps on a sectoral basis (Could this be yet another mission for the LII?).

(2) Designers: Quite a number of corporations and organizations explicitly support open source peer producing projects, mostly by dedicating some of their human resources to improving the code base. These organizations could, if they wanted to improve the capability of such projects to push for more radical innovation, set up incentives for employees to select riskier projects.

(3) Tools: The very tools used to organize peer production of software code already offer many techniques for managing a diverse array of contributors. These tools could be extended to evaluate a group's level of diversity and willingness to take risks, based on the findings of social network theory. Such an approach would at least give the community a sense of its potential and propensity for radical innovation, and could help group organizers influence group composition and dynamics. (Yes, this is "data.gov" and the government IT dashboards applied to this context.)
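To make the third idea a little more concrete, here is a minimal sketch of one kind of diversity signal such a tooling extension might compute. Everything in it is hypothetical: the metric shown (Shannon entropy over contributors' declared affiliations, a standard measure from information theory) is just one of many candidate indicators social network research suggests, and real tools would draw on richer data such as collaboration ties and review patterns.

```python
import math
from collections import Counter

def affiliation_entropy(affiliations):
    """Shannon entropy (in bits) of a contributor group's affiliation mix.

    A hypothetical proxy for group diversity: 0.0 means every contributor
    shares one affiliation; the maximum, log2(n), means n affiliations
    are represented in equal proportion.
    """
    counts = Counter(affiliations)
    total = sum(counts.values())
    h = 0.0
    for count in counts.values():
        p = count / total  # share of contributors with this affiliation
        h -= p * math.log2(p)
    return h

# A homogeneous group scores 0; four equally represented affiliations score 2.
homogeneous = ["CorpA"] * 8
mixed = ["CorpA", "CorpB", "University", "Independent"] * 2

print(affiliation_entropy(homogeneous))  # 0.0
print(affiliation_entropy(mixed))        # 2.0
```

A dashboard built into a project-hosting platform could track such a score over time and flag when a community is drifting toward the homogeneity that, as argued above, tends to produce groupthink.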

These are nothing more than a few ideas. Many more are necessary to identify the best ones to implement. But given the rise and importance of peer production, and the constraints inherent in how it is organizing itself, the conversation about how to best provide incentives for radical innovation in the legal information context – and beyond – is one we must have.

[NB: What do you all think? How does this apply to the world of legal information, and to specialized software applications that support it — things like point-in-time legislative systems, specialized processing tools, and so on? Comments please…. (the ed.)]

Viktor Mayer-Schönberger is Associate Professor of Public Policy and Director of the Information + Innovation Policy Research Centre at the LKY School of Public Policy / National University of Singapore. He is also a faculty affiliate of the Belfer Center of Science and International Affairs at Harvard University. He has published many books, most recently "Delete – The Virtue of Forgetting in the Digital Age." He is a frequent public speaker, and a sought-after expert for print and broadcast media worldwide. He is also on the boards of numerous foundations, think tanks and organizations focused on studying the foundations of the new economy, and advises governments, businesses and NGOs on new economy and information society issues. In his spare time, he likes to travel, go to the movies, and learn about architecture.

I think it's hard to know whether or not the issues that Viktor identifies are causing a slowdown in innovation in legal information tools. The reason is that regardless of whether there is sufficient venture capital or developer capacity, the raw materials remain largely inaccessible. So long as case law remains tied up behind paywalls, exclusive contracts, or highly decentralized court recordkeeping systems, it is hard for new entrants to emerge. The commercial venture capital approach would require a tremendous commitment of funds before the core software development work could start. Noncommercial collaborative peer production of these tools is essentially impossible.

In short, the lack of innovation and competition might well be a problem of access to knowledge rather than a dearth of viable development models.

[…] An interesting white paper published earlier this year by Tim O’Reilly & John Battelle may shed light on Google’s next steps. That paper notes Google’s aptitude for developing sophisticated automatic metadata creation systems, such as the one underlying Google Mobile App, that incorporate improvements drawn from the study of large numbers of user searches. This aptitude suggests that Google probably won’t need very long to build a good quality automatic legal citator and subject indexing system, if it has a mind to. If Google takes those further steps, then I think it could take a big share of the high-end CALR market. At the very least, its efforts, coupled with Bloomberg’s, should result in increased competition, lower prices, and more innovation yielding better retrieval tools for users in the U.S. CALR sector. (On the obstacles to innovation in the U.S. CALR sector, see Professor Viktor Mayer-Schönberger’s recent post on VoxPopuLII.) […]