We report the findings of a month-long online competition in which participants developed algorithms for augmenting the digital version of patent documents published by the United States Patent and Trademark Office (USPTO). The goal was to detect figures and part labels in U.S. patent drawing pages. The challenge drew 232 teams of two, of which 70 teams (30%) submitted solutions. Collectively, teams submitted 1,797 solutions that were compiled on the competition servers. Participants reported spending an average of 63 hours developing their solutions, resulting in a total of 5,591 hours of development time. A manually labeled dataset of 306 patents was used for training, online system tests, and evaluation. The design and performance of the top-5 systems are presented, along with a system developed after the competition which illustrates that winning teams produced near state-of-the-art results under strict time and computation constraints. For the 1st place system, the harmonic mean of recall and precision (f-measure) was 88.57% for figure region detection, 78.81% for figure regions with correctly recognized figure titles, and 70.98% for part label detection and character recognition. Data and software from the competition are available through the online UCI Machine Learning repository to inspire follow-on work by the image processing community.
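The f-measure scores above are harmonic means of precision and recall. A minimal sketch of how such a score is computed, in Python; the function name and the example numbers are illustrative, not taken from the competition software or results:

```python
def f_measure(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (the balanced F1 score)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical detector: 90 of its 96 detections are correct (precision
# = 0.9375) and it finds 85 of 100 true part labels (recall = 0.85).
score = f_measure(0.9375, 0.85)
print(f"{score:.4f}")  # → 0.8916
```

Because the harmonic mean is dominated by the smaller of the two values, a system cannot reach a high f-measure by trading away either precision or recall.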

Scientists typically self-organize into teams, matching with others to collaborate in the production of new knowledge. We present the results of a field experiment conducted at Harvard Medical School to understand the extent to which search costs affect matching among scientific collaborators. We generated exogenous variation in search costs for pairs of potential collaborators by randomly assigning individuals to 90-minute structured information-sharing sessions as part of a grant funding opportunity for biomedical researchers. We estimate that the treatment increases the baseline probability of grant co-application for a given pair of researchers by 75% (raising the likelihood of a pair collaborating from 0.16 percent to 0.28 percent), with larger effects among pairs in the same specialization. The findings indicate that matching between scientists is subject to considerable frictions, even in the case of geographically proximate scientists working in the same institutional context with ample access to common information and funding opportunities.

This paper discusses several challenges in designing field experiments to better understand how organizational and institutional design shapes innovation outcomes and the production of knowledge. We proceed to describe the field experimental research program carried out by our Crowd Innovation Laboratory at Harvard University to clarify how we have attempted to address these research design challenges. This program has simultaneously solved important practical innovation problems for partner organizations, like NASA and Harvard Medical School, while contributing research advances, particularly in relation to innovation contests and tournaments.

Tournaments are widely used in the economy to organize production and innovation. We study individual contestant-level data on 2,775 contestants in 755 software algorithm design contests with random assignment. The performance response to added contestants varies non-monotonically across contestants of different abilities, precisely conforming to theory predictions. Most participants respond negatively to competition, while the highest-skilled contestants respond positively. In counterfactual simulations, we interpret a number of tournament design policies (number of competitors, prize allocation and structure, number of divisions, open entry) and assess their effectiveness in shaping optimal tournament outcomes for a designer.

Selecting among alternative innovative projects is a core management task in all innovating organizations. In this paper, we focus on the evaluation of frontier scientific research projects. We argue that the “intellectual distance” between the knowledge embodied in research proposals and an evaluator’s own expertise systematically relates to the evaluations given (and consequent resource allocation). We empirically evaluate effects in data collected from a grant proposal process at a leading research university in which we randomized the assignment of evaluators and proposals to generate 2,130 evaluator-proposal pairs. We find evaluators systematically give lower scores to research proposals closer to their own areas of expertise, and to highly novel research proposals. We interpret the empirical patterns in relation to a range of theoretical mechanisms and discuss implications for policy, managerial intervention and allocation of resources in the ongoing accumulation of scientific knowledge.

Most of society's innovation systems -- academic science, the patent system, open source, etc. -- are “open” in the sense that they are designed to facilitate knowledge disclosure among innovators. An essential difference across innovation systems is whether disclosure is of intermediate progress and solutions or of completed innovations. We present experimental evidence that links intermediate versus final disclosure not just with quantitative tradeoffs that shape the rate of innovation, but with transformation of the very nature of the innovation search process. We find intermediate disclosure has the advantage of efficiently steering development towards improving existing solution approaches, but also the effect of limiting experimentation and narrowing technological search. We discuss the comparative advantages of intermediate versus final disclosure policies in fostering innovation.

Platforms have evolved beyond just being organized as multi-sided markets with complementors selling to users. Complementors are often unpaid, working outside of a price system and driven by heterogeneous sources of motivation—which should affect how they respond to platform growth. Does reliance on network effects and strategies to attract large numbers of complementors remain advisable in such contexts? We test hypotheses related to these issues using data from 85 online multi-player game platforms with unpaid complementors. We find that complementor development responds to platform growth even without sales incentives, but that attracting complementors has a net zero effect on ongoing development and fails to stimulate network effects. We discuss conditions under which a strategy of using unpaid crowd complementors remains advantageous.

Jeff Davis, director of Space Life Sciences Directorate at NASA, has been working for several years to raise awareness amongst scientists and researchers in his organizations of the benefits of open innovation as a successful and efficient way to collaborate on difficult research problems regarding health and space travel. Despite a number of initiatives, SLSD members have been skeptical about incorporating the approach into their day-to-day research and work, and have resisted Davis's and his strategy team's efforts. The (A) case outlines these efforts and the organization members' reactions. The (B) case details what Davis and the SLSD strategy team learned, and how they adapted their efforts to successfully incorporate open innovation as one of many tools used in collaborative research at NASA.

This chapter contrasts traditional, organization-centered models of innovation with more recent work on open innovation. These fundamentally different and inconsistent innovation logics are associated with contrasting organizational boundaries and organizational designs. We suggest that when critical tasks can be modularized and when problem-solving knowledge is widely distributed and available, open innovation complements traditional innovation logics. We induce these ideas from the literature and with extended examples from Apple, the National Aeronautics and Space Administration (NASA), and LEGO. We suggest that task decomposition and problem-solving knowledge distribution are not deterministic but are strategic choices. If dynamic capabilities are associated with innovation streams, and if different innovation types are rooted in contrasting innovation logics, there are important implications for firm boundaries, design, and identity.

As innovation becomes more democratic, many of the best ideas for new products and services no longer originate in well-financed corporate and government laboratories. Instead, they come from almost anywhere and anyone. How can companies tap into this distributed knowledge and these diverse skills? Increasingly, organizations are considering an open-innovation process, but many are finding that making open innovation work can be more complicated than it looks. PepsiCo, the food and beverage giant, for example, created controversy in 2011 when a crowdsourced entry in its Super Bowl ad contest, posted online, featured Doritos tortilla chips being used in place of sacramental wafers during Holy Communion. Similarly, Kraft Foods Australia ran into challenges when it launched a new Vegemite-based cheese snack in conjunction with a public naming contest. The name Kraft initially chose from the submissions, iSnack 2.0, encountered widespread ridicule, and Kraft abandoned it. (The company instead asked consumers to choose among six other names and ultimately picked the most popular choice among those six, Vegemite Cheesybite.)

Reports of such problems have fed uncertainty among managers about how and when to open their innovation processes. Managers tell us that they need a means of categorizing different types of open innovation, along with the key success factors and common problems for each type. Over the last decade, we have worked to create such a guide by studying the emergence of open-innovation systems in numerous sectors of the economy, by working closely with many organizations that have launched open-innovation programs, and by running our own experiments. This research has given us a unique perspective on the opportunities and problems of implementing open-innovation programs. (See “About the Research.”) In every organization and industry, executives faced the same decisions. Specifically, they had to determine (1) whether to open the idea-generation process; (2) whether to open the idea-selection process; or (3) whether to open both. These choices led to a number of managerial challenges, and the practices the companies implemented were a major factor in whether their innovation efforts succeeded or failed.

From Apple to Merck to Wikipedia, more and more organizations are turning to crowds for help in solving their most vexing innovation and research questions, but managers remain understandably cautious. It seems risky and even unnatural to push problems out to vast groups of strangers distributed around the world, particularly for companies built on a history of internal innovation. How can intellectual property be protected? How can a crowdsourced solution be integrated into corporate operations? What about the costs?

These concerns are all reasonable, the authors write, but excluding crowdsourcing from the corporate innovation tool kit means losing an opportunity. After a decade of study, they have identified when crowds tend to outperform internal organizations (or not). They outline four ways to tap into crowd-powered problem solving—contests, collaborative communities, complementors, and labor markets—and offer a system for picking the best one in a given situation. Contests, for example, are suited to highly challenging technical, analytical, and scientific problems; design problems; and creative or aesthetic projects. They are akin to running a series of independent experiments that generate multiple solutions—and if those solutions cluster at some extreme, a company can gain insight into where a problem’s “technical frontier” lies. (Internal R&D may generate far less information.)

Prizes can be effective tools for finding innovative solutions to the most difficult problems. While prizes are often associated with scientific and technological innovation, prizes can also be used to foster novel solutions and approaches in much broader contexts, such as reducing poverty or finding new ways to educate people.

Now that the America COMPETES Reauthorization Act has given all government departments and agencies broad authority to conduct prize competitions, agencies may find themselves looking for resources to learn about prizes and challenges. This paper describes how government agencies can design, build, and execute effective prizes – though these models can easily be adapted to meet the needs of foundations, public interest groups, private companies, and a host of other entities with an interest in spurring innovation.

As an informational guide to promote the use of prizes within government agencies, with an emphasis on opportunities to form different types of private-public partnerships, this paper:

- Provides an overview of the prize lifecycle to help agencies better understand when to use prizes and the various elements involved in developing a prize;
- Presents a framework outlining the various roles agencies can fill in the prize process and the importance of using partnerships to maximize the effectiveness of a prize; and
- Highlights important steps and considerations regarding partnerships with other organizations.

Drawing on interviews and secondary research on existing prizes that rely on multi-sector partnerships, it explores every aspect of forming partnerships and implementing prizes across the broad range of activities that occur within various stages of the prize lifecycle.

While prizes may not be suited to solve every type of problem, they offer a powerful complement to government agencies’ traditional channels of innovation. As the use of prizes in the government sector increases, new practices and novel ways of structuring contests and partnerships will undoubtedly emerge. To share best practices, agencies are encouraged to collaborate by offering lessons learned from previous competitions and seeking opportunities to assist other agencies in conducting prizes when objectives overlap.

In this paper, I study the effect of adding large numbers of producers of application software programs (“apps”) to leading handheld computer platforms, from 1999 to 2004. To isolate causal effects, I exploit changes in the software labor market. Consistent with past theory, I find a tight link between the number of producers on platform and the number of software varieties that were generated. The patterns indicate the link is closely related to the diversity and distinct specializations of producers. Also highlighting the role of heterogeneity and nonrandom entry and sorting, later cohorts generated less compelling software than earlier cohorts. Adding producers to a platform also shaped investment incentives in ways that were consistent with a tension between network effects and competitive crowding, alternately increasing or decreasing innovation incentives depending on whether apps were differentiated or close substitutes. The crowding of similar apps dominated in this case; the average effect of adding producers on innovation incentives was negative. Overall, adding large numbers of producers led innovation to become more dependent on population-level diversity, variation, and experimentation—while drawing less on the heroic efforts of any one individual innovator.

This chapter reports on an actual field experiment that tests for the influence of “sorting” on innovator effort. The focus is on the potential heterogeneity among innovators and whether they prefer a more cooperative versus competitive research environment. The focus of the field experiment is a real-world multiday software coding exercise in which participants are able to express a preference for being sorted into a cooperative or competitive environment—that is, incentives in the cooperative environment are team based, while those in the competitive environment are individualized and depend on relative performance. Half of the participants are indeed sorted on the basis of their preferences, while the other half are assigned to the two modes on a random basis.

Contests are a historically important and increasingly popular mechanism for encouraging innovation. A central concern in designing innovation contests is how many competitors to admit. Using a unique data set of 9,661 software contests, we provide evidence of two coexisting and opposing forces that operate when the number of competitors increases. Greater rivalry reduces the incentives of all competitors in a contest to exert effort and make investments. At the same time, adding competitors increases the likelihood that at least one competitor will find an extreme-value solution. We show that the effort-reducing effect of greater rivalry dominates for less uncertain problems, whereas the effect on the extreme value prevails for more uncertain problems. Adding competitors thus systematically increases overall contest performance for high-uncertainty problems. We also find that higher uncertainty reduces the negative effect of added competitors on incentives. Thus, uncertainty and the nature of the problem should be explicitly considered in the design of innovation tournaments. We explore the implications of our findings for the theory and practice of innovation contests.

This paper studies two fundamentally distinct approaches to opening a technology platform and their different impacts on innovation. One approach is to grant access to a platform and thereby open up markets for complementary components around the platform. Another approach is to give up control over the platform itself. Using data on 21 handheld computing systems (1990–2004), I find that granting greater levels of access to independent hardware developer firms produces up to a fivefold acceleration in the rate of new handheld device development, depending on the precise degree of access and how this policy was implemented. Where operating system platform owners went further to give up control (beyond just granting access to their platforms) the incremental effect on new device development was still positive but an order of magnitude smaller. The evidence from the industry and theoretical arguments both suggest that distinct economic mechanisms were set in motion by these two approaches to opening.

This paper provides a basic conceptual framework for interpreting non-price instruments used by multi-sided platforms (MSPs) by analogizing MSPs to "private regulators" who regulate access to and interactions around the platform. We present evidence on Facebook, TopCoder, Roppongi Hills and Harvard Business School to document the "regulatory" role played by MSPs. We find MSPs use nuanced combinations of legal, technological, informational and other instruments (including price-setting) to implement desired outcomes. Non-price instruments were very much at the core of MSP strategies.