Dr. Lichtenthaler has been a prolific researcher: Web of Science (the people who compile the SSCI) lists 45 published articles, 42 of them from 2007-2012. Three were in A journals (AMJ, Org Sci, SMJ) and nine in the top specialty journals (ETP, ICC, JBV, JPIM and RP), although five of these 12 articles were later retracted.

Some of his work is highly cited, leaving many researchers in a quandary: if there is a relevant article, how would I know if it will be retracted in the future? As someone actively publishing, reviewing and editing OI research, for me this is not a hypothetical question. So I downloaded 43 of the 45 articles (two were not available and inter-library loan is closed for Christmas) as well as two other articles listed by Google Scholar but not Web of Science.

Six of the eight retracted articles were about the licensing practices of European firms, drawing on data gathered with the assistance of the Licensing Executives Society (LES). As the earlier retracted Research Policy article states, “We directly contacted all LES industry members in Germany, Switzerland and Austria. … 155 firms participated in the study, corresponding to a response rate of 37.6%.” (Lichtenthaler, 2009: 562). Other retracted articles mention 152 responses, and four unretracted articles mention 154.

Twelve other Lichtenthaler articles (11 from WoS, a 12th from Google Scholar) mentioned the LES:

The retracted Strategic Management Journal article seems to use data that overlaps the LES sample. An additional three articles also seem to use data that overlaps the LES sample but do not mention it by name:

Three other papers said they used a similar sampling frame, but selected smaller companies than those used in the other LES studies. As Lichtenthaler, Ulrich & Miriam Muethel (2012: 197) said in their JET-M article, “To avoid overlaps with earlier empirical studies (e.g., Lichtenthaler et al., 2010), we selected companies that are ranked on ranks 201–500 of the largest firms in terms of revenues in each of the following three industries: automotive, chemicals, and electronics.” These papers are

I have no reason to expect that any specific article will be retracted, but if a ninth article were to be retracted, I would assume it would come from one of these 18. Together with the eight retracted articles, that accounts for 26 of the 47 published articles.

The reasons announced for the retractions seem to fit three general categories:

Re-use of material from an earlier article

Empirical results that contradict an earlier result (e.g. by omitting a variable that was significant in an earlier paper)

Other statistical analysis problems, such as exaggerated or inconsistent statistical significance or R2 values (as in the Strategic Organization and Research Policy retractions)

All but the last problem are associated with a second or subsequent publication from the same text or data. As such, the first (or first few) articles from a given series would not have such problems and are unlikely to ever be retracted.

However, if there are general statistical problems, those could apply to one or more of the other 21 articles. Six of the articles seem unlikely to be retracted for any reason, because they are literature reviews without statistical analysis and because they seem significantly different from each other:

How does this impact what papers to cite? Some remain oblivious to the whole scandal. This seems unlikely for board members of one of the five impacted journals, but might be possible at other journals or by scholars who are not active in innovation research (and don’t read the Retraction Watch or Open Innovation weblogs.)

In the end, individual scholars will make their own decisions. Some will refuse to cite any of Lichtenthaler’s work, assuming it all to be tainted. Others will presume him innocent until proven guilty, citing any unretracted paper. Others seem to be taking an intermediate position: avoiding citing most of his work, but giving credit for unique ideas not yet published by any other scholars.

November 21, 2012

Retraction #8 for Ulrich Lichtenthaler is for a July 2008 (sole-authored) paper in the Journal of Business Venturing:

This article has been retracted at the request of the Editor-in-Chief and Author.

The author contacted the Editor-in-Chief about statistical irregularities in this article in July 2012. The Editor-in-Chief thoroughly investigated this article and other preceding papers from the same database. On this basis, the Editor-in-Chief made the decision to retract the paper. The grounds for retraction are an error in statistical analyses, an omitted variable bias, and a “new” measure that was not “new” because it was already used in Lichtenthaler, U. and Ernst, H., Res. Policy, 36 (2007) 37–55, http://dx.doi.org/10.1016/j.respol.2006.08.005. These errors undermined the review process and are too substantial for a corrigendum.

It appears that, like retractions #1, #5 and #6, this was by mutual agreement. However, unlike at SMJ, the editor thoroughly investigated the problems rather than just letting the author withdraw his paper. As with the RP retraction, Elsevier has placed a big red “Retracted” on every page (the other journals didn't seem to use red).

For paper #7 — the recent Organization Science retraction — Retraction Watch reported (as I meant to) the number of citations to the retracted paper (15 in Google Scholar). The JBV article is the most cited of the eight retracted papers, with 34 citations.

But five Lichtenthaler papers — in AMJ (2009), IEEE Transactions on Engineering Management (2008), Journal of Management Studies (2009), R&D Management (2006) and the International Journal of Management Reviews (2006) — all have more than 100 citations. Both the AMJ and JMS papers are about absorptive capacity, but otherwise I have no reason to expect that any of Lichtenthaler’s most cited work will be retracted.

While the scandal has tarnished his reputation, every month Prof. Lichtenthaler continues to get new citations to his research, with at least 56 citations in 2012 so far to the AMJ paper. The retractions are well known in the open innovation community — causing many OI-focused researchers to stop relying on his work — but the larger innovation management community seems to be unaware of the issues.

Below is the complete list of retracted articles to date. For consistency with my discussion of the retractions, I will now list them in the order announced.

The article “Not-Sold-Here: How Attitudes Influence External Knowledge Exploitation” (Organization Science (2010) 21(5):1054–1071, DOI: 10.1287/orsc.1090.0499) is being retracted after an assessment that the work violates INFORMS publication standards in two important respects. First, the citation to highly related prior work by the first two authors is quite incomplete. As a result, it was not possible to assess the novelty of the work. In addition, there is reason to believe that key results in the paper would not hold if variables included in this related work had been incorporated into the analysis.

According to the acknowledgements, the 2010 article by Lichtenthaler, Ernst and Hoegl was presented at the 2006 Academy of Management conference and accepted by OS editor Linda Argote.

Isabella (leafing through some documents): Let’s see, let’s see - oh, here it is. ‘Finding a New Route to the Indies by Sailing West.’ (Looking up at him) You’re serious, right? I mean, this isn’t some sort of joke...

Columbus: Of course it’s no joke! I propose to test the hypothesis that the world is both small and round. If the hypothesis is true, I should be able to reach the Orient much faster than the current route around Africa.

Ferdinand: And if the hypothesis is wrong, you’ll fall off the edge of the earth.

Columbus: Possibly. But even if it’s wrong, by going where no one has gone yet, I might bump into something really interesting.

Ferdinand: What you’ll bump into is the edge of the earth, and you’ll fall off.

Columbus: I agree that there is risk involved, Your Majesty, but consider the impact if I’m right. In the guidelines for obtaining funding, you specify that impact is a major factor in determining if a proposal is funded.

Isabella: I know we say that, Captain, but we don’t mean it. Why, if we actually judged proposals that way, many of them would fail.

As you might imagine, he continues to riff on the risk-aversion of these 15th-century royal patrons. Fortunately (at least for US citizens if not Native Americans) the actual funding request worked out better in the real world.

Dr. Petsko is a retired biochemistry professor at Brandeis and a member of the National Academy of Sciences. So this is a personal inspiration as to what sarcastic academics can do in retirement.

As open innovation is rapidly gaining importance in research and practice, new questions and challenges arise that require a deeper understanding of these phenomena from an IS perspective. There is much that has yet to be understood about the role of IS in enabling open innovation, and the implications of the open innovation movement for the various aspects of the IS discipline. The aim of this special issue is to expand and advance the state of open innovation research within the IS field, highlighting work that makes significant theoretical and empirical advances to our understanding of IT-enabled open innovation.
…

Important dates
Initial submissions of full papers: 5th August 2013
Reviews sent to authors: 2nd December 2013
Workshop: 16th December 2013 at ICIS (for papers through to 2nd round)
Revised papers from authors due: 31st March 2014
Decision notification: 30th June 2014
Final papers due: 25th August 2014
Publication (anticipated): November 2014

September 14, 2012

ESADE is bringing to Barcelona the latest iteration of its Ph.D. seminar in open innovation.

The seminar will be held January 7-9, 2013. As always, it is being organized by the father of open innovation, Henry Chesbrough, and another leading open innovation scholar, Wim Vanhaverbeke (both of whom are part-time faculty at ESADE).

Last year’s seminar generated quite a lively discussion on the OIC Facebook page, so it sounds like a unique experience for nascent innovation scholars.

August 25, 2012

The following article from the Journal of Management Studies, The Impact of Accumulating and Reactivating Technological Experience on R&D Alliance Performance by Holger Ernst, Ulrich Lichtenthaler and Carsten Vogt, published online 17 March 2011 on Wiley Online Library (http://www.wileyonlinelibrary.com), has been retracted by agreement between the authors, the journal's General Editors, Andrew Corbett, Andrew Delios and Bill Harley and Blackwell Publishing Ltd. The article is retracted due to errors in the reported empirical results, which form part of the basis for the conclusions drawn by the authors in the study. While the second author did not collect the data, he takes the responsibility for these technical errors.

The second author (Lichtenthaler) was listed as corresponding author for this article.

Update, Sept. 4: People are asking for a complete list of retracted papers so I will publish one at the bottom of any article about a new retraction. The list is chronological by author and original publication date, not retraction date.

The SMJ website didn’t state a reason, but the managing editor told me “the authors asked the SMJ to remove their paper and we followed their wishes.” The JWB stated:

This article has been retracted at the request of the Editor-in-Chief and Author.

After discussions between the Editors-in-Chief and the authors, it has been decided that this paper should be retracted due to errors in the results tables, specifically the regression coefficients and standard errors do not fit with the significance levels of some variables in a few models in the paper. The author notified the Editors-in-Chief of these errors on the 5th of June 2012.

This is similar to the Lichtenthaler and Ernst (2009) retraction, which was “retracted at the author’s and editors’ request due to errors in reporting, for which the first author has claimed responsibility” — although SO co-editor Russ Coff elaborated:

The author approached us and asked that we retract the paper. Further investigation confirmed specific irregularities as well as a broader pattern. For example, in some cases where the coefficients and standard errors are about the same size, variables are reported as highly significant. This problem is more evident for independent variables than control variables. It is clear that the findings should not be cited in subsequent research. This is only one of the issues raised and it appears to be part of a pattern across a number of articles published in a variety of well-respected journals. The first author wants to make it clear that he approached us proactively and that he claims responsibility…

However, after discovering both parallel publication and omitted variables problems, Research Policy refused to allow the author to retract his papers:

After the Research Policy Editors had made their decision to retract the two papers (but before he had been notified of the outcome), the author wrote to acknowledge a third problem with the Research Policy 2009 paper, namely that the statistical significance of several of the findings had been misreported. In the light of this new problem, the author asked to withdraw the Research Policy 2009 paper. However, by then the editorial decision to retract that paper on the original two grounds listed above had already been taken.

So the authors were allowed to pre-empt any retraction at SMJ but not at RP — while at SO and JWB the official retraction was announced as a joint decision of the authors and editors.

I have heard that articles are being investigated for possible retraction at two other journals, but nothing has been posted to these websites.

August 23, 2012

We’ve decided to extend the deadline for the special issue of Research Policy on open innovation. We had requests for additional time (being the end of summer vacation for most) and also looked at how the submissions will be processed, and decided we could provide authors additional time without slowing our review of the papers.

The old deadline was 11:59pm PDT on Friday Aug. 31. The new deadline is 11:59pm GMT on Sunday Sept 2.

We cannot grant any extensions beyond that time, because the review process begins Monday morning (London time) and we’re on a very tight schedule to complete multiple rounds of reviews and publish a special issue in 2013.

The 30 papers in sessions that mention either “firms” or “open innovation” are a rough proxy for OI papers, although I think at least half of the innovation contest/crowdsourcing papers are more OI than UI. So perhaps one-fourth of the papers are about OI, perhaps half have a UI theoretical (or philosophical) framing, and the rest are either both or neither.

With growth comes growing pains. Some of the crowdsourcing, open innovation and lead user authors are in competing sessions, which will make it more difficult to follow all the relevant research. Also, with a variable number of papers per session, it will be harder to jump between competing sessions (as was encouraged in 2008). As with any conference, some of the groupings are approximate: my paper (being presented Tuesday by co-author Jonathan Sims) on firms working with online communities is not grouped with either one. At least the paper I’m presenting Wednesday (comparing open source in software and biology) is grouped with OSS if not the other biomedical papers.

A number of my academic friends are surprised that (since 2008) I make a point to attend OUI every year but often skip the Academy (including this year). Even at its larger size, OUI is more focused on open and distributed models of innovation whereas even the TIM track of Academy is a hodgepodge of just about everything. The sessions of OUI offer a body of researchers on OSS, community, cumulative and related innovation processes that are only rarely found at even the best Academy session.

So I’m looking forward to returning to Boston, hearing interesting work and meeting old (and new) friends. Of course I’m also hoping to use the knowledgeable feedback to improve my own work, but that will at most be only 45 minutes of nearly three days of sessions.

July 27, 2012

At the end of last month’s #oi2012 workshop, the special issue editors gave some recommendations to those interested in publishing in the special issue. I summarized a few key points for those at the conference. Below are my notes on the process (delayed until after we got caught up with our promised feedback for conference authors).

The editors are looking for strong, RP-quality papers to fill the special issue. The added considerations are that the paper fits in the special issue — both by fitting the theme and complementing the portfolio of articles that we’ve selected. (The 2003 open source and 2006 Teece special issues are good examples of such portfolios.)

I offered these four recommendations to prospective authors:

Write about an important issue in open innovation. Some authors start with an issue they find interesting and then add a few cites to “fit” the paper to open innovation; given the large body of OI papers, this is often less successful than starting from what’s already been said (or not said).

Start with the most relevant literature, which will be both inside and outside open innovation. As Ammon Salter reminded conference presenters, don’t write a history of open innovation — in a conference (or special issue) you are writing for people who already know about open innovation.

Make it interesting — for the reviewers, editors and eventual readers.

Finally, what’s new? What can we do now that we couldn’t do earlier?

Many of these would apply to publishing in any context.

The deadline remains August 31. Each paper will be assigned to one of the four guest editors — based on subject matter, workload or avoiding conflicts of interest. If we get more papers than available reviewers (we got 78 papers on April 15), then the editors will have to decide which ones go for review and which ones don’t (desk reject).

One thing I didn’t think to mention at the conference: The editors have the option of considering a “Research Note,” which RP defines as

Research Notes - typically of 3-5,000 words, this category is a vehicle for specific types of material that merit publication, but do not require all the 'normal' components of a full research article. This might cover, for example, specific aspects of methodology that have broad relevance for RP readers, or short reports about specific sets or types of data (and their access and use) that merit publication without the full set of requirements for a normal article. It might also be relevant, for example, for updating an earlier RP paper, where it is not necessary to repeat the literature review, methodology etc.

The advantage for RP (and the special issue) is that a research note can make a narrower point more succinctly and not use up as many pages; I personally think this could work well to support our portfolio strategy. The disadvantage is that the editorial review process for a note — soliciting and monitoring reviewers — is as much work for the editors as for a full article, even if it’s slightly easier for the reviewers; thus, in the first round the note is competing with full-length articles for a limited supply of reviewers, even if the final acceptance bar could be lower.

Finally, on to unpleasant subjects: unethical behavior. At the closing session, Ammon Salter reminded us of the earlier presentation by Ben Martin. In his talk on “20 challenges for innovation studies,” Martin mentioned that Research Policy has been dealing with a variety of cases of academic dishonesty, including plagiarizing others and self-plagiarism (which Ben called “salami slicing.”) [This was three weeks before Martin published the retraction of two RP papers on patent licensing.]

Research Policy and Elsevier adhere to the highest standards with regard to research integrity and in particular the avoidance of plagiarism, including self-plagiarism. It is therefore essential that authors, before they submit a paper, carefully read the Ethical guidelines for journal publication - see http://www.elsevier.com/wps/find/intro.cws_home/ethical_guidelines#Duties of Authors. Particular attention should be paid to the sections under 'Duties of Authors' on 'Originality and Plagiarism' and 'Multiple, Redundant or Concurrent Publication'.

RP expects that authors will fully disclose any multiple, redundant or concurrent publication. Ammon reminded authors to include any such disclosure in their submission letters.

July 19, 2012

Among open innovation researchers, the past 24 hours seem to have been spent discussing Ulrich Lichtenthaler and his three retracted articles, including two in Research Policy.

I’ve only met Dr. Lichtenthaler once. He certainly is a prominent open innovation researcher: Google Scholar says he has published 15 articles with “open innovation” in the title. When I review an OI paper, usually I find 1 or 2 cites to his work, and sometimes as many as four. But the rate at which the news has spread — via Twitter, Facebook, even Skype — hints at the number of people who’ve been waiting for confirmation of what they long suspected (or perhaps hoped).

Top-flight German business prof faces severe accusations of academic misconduct

One of the most successful German business professors is currently facing awkward questions about his scientific conduct. Ulrich Lichtenthaler, who is affiliated with the University of Mannheim, has come under suspicion of inflating his publication record using unethical methods.

Additionally, a number of his published papers apparently contain severe mathematical errors and methodological inconsistencies.

In the last couple of weeks, two academic journals – “Research Policy” and “Strategic Organization” – officially retracted three papers by Lichtenthaler. “This is only the tip of the iceberg”, asserted a researcher familiar with the investigations speaking to me on the condition of anonymity. “There is much more to come.”

Several people who looked into the matter are convinced that the whole affair has the potential to turn into a major academic scandal. One academic told me that “Industrial and Corporate Change” and the “Strategic Management Journal” are also preparing retractions.

Storbeck’s blog posting also included a written response from Dr. Lichtenthaler, as well as oblique references to how “a small group of academics privately started to question this academic flight of fancy,” analyzed his work and sent their findings to various journals. Given the rumors I’ve heard over the past 18-24 months, I suspect I either know some of these academics or know people who know them.

I would disagree that this “has the potential to turn into a major academic scandal,” because (barring some major exculpatory revelation) it seems like we’re already there. How did we get here?

In his paper from the Lundvall festschrift (also presented at Imperial last month) Ben Martin talks about how the field of innovation studies was lucky for a long time:

As a field, we were truly fortunate in our ‘founding fathers’ – individuals such as Chris Freeman, Richard Nelson and Nathan Rosenberg, who, besides making immense intellectual contributions, also shaped the culture and norms under which we operate. In particular, these individuals personified a spirit of openness, integrity and intellectual generosity.

I think when innovation studies was a small community where everyone knew everyone else, there was an element of social norms and sanctions that would deter authors from thinking they could blatantly game the system. (Conversely, more benign forms of gaming the system still seem to be OK: we all know senior respected scholars who’ve been very clever at squeezing one more paper out of a given dataset.)

Certainly the system of letter writing for tenure should catch most abuses. I can think of about 5-10 people senior to me who know my work well, and if they had my papers in front of them, they’d quickly spot any violations. However, not all schools require outside letters for tenure, and this does nothing for hiring decisions, where generalists review the work of a prospective colleague, supported by cherry-picked friendly letters and an occasional phone call to a former coworker.

Storbeck notes the irony of some of the previous praise of Lichtenthaler by Handelsblatt:

The thirty-three-year old used to be the undisputed shooting star of business science in Germany and has an incredible publication record. Since 2004, Lichtenthaler has published more papers in international renowned journals than almost any other German business professor. The database used for the Handelsblatt research ranking lists a total of 50 publications. 21 of them were published in 2007 and 2008. …

Given his amazing productivity, in 2009 the German association of business professors (Verband der Hochschullehrer für Betriebswirtschaftslehre) awarded a prestigious prize for young researchers to him. In the same year, he topped the Handelsblatt list of the most productive business researchers below 40 albeit he was one of the youngest academics in the list. …

In 2009, we published a portrait about Lichtenthaler’s amazing career in Handelsblatt. With the benefit of hindsight, the headline seems to be ironic: “The boy who gets everything right”. He told my colleague Anja Müller that despite his striking research output, he wasn’t considering himself a workaholic and that “academic enthusiasm” keeps him going. …

Alas, what Storbeck doesn’t discuss is the role that Handelsblatt has played in encouraging (if not creating) such behavior. My German-speaking academic friends can quote to me where they stand in the Handelsblatt rankings of business faculty, the rankings of well-known researchers and the date when the next rankings will be published. German (and Swiss and Austrian) deans hire, promote and reward faculty based on this one article that comes out every year.

In the US, strong individual incentives like this gave us Enron, Worldcom, Fannie Mae and numerous other ethical scandals. However, while the MBA rankings (by Business Week, FT, US News and the WSJ) distort the behavior of business school faculty and administrators, they don’t seem to have produced the pressure (yet) that encouraged individuals or an organizational conspiracy to out-and-out cheat.

If Handelsblatt really wanted to get to the bottom of this scandal, it would look in the mirror. Or, as cartoonist Walt Kelly said in Pogo 41 years ago: “we have met the enemy and he is us.”

PS: The reader comments at the bottom of the blog post point out Handelsblatt’s role in creating this problem, but Storbeck vehemently denies any responsibility therein.

We are aware of the retractions. When the underlying problems of the publications of Ulrich Lichtenthaler were brought to our attention WHU decided to establish an investigation committee with external experts to look into these matters. As WHU condemns all forms of academic misconduct, we are very interested in complete transparency on the issues and, depending on the findings of the committee, we will then take appropriate actions.

In his talk last month at our London conference for the OI special issue of Research Policy, on “20 challenges for innovation studies,” editor Ben Martin alluded to this. Now we know what he was talking about. Here are excerpts from Martin’s written paper:

20. Maintaining our research integrity, sense of morality and collegiality
…
There are many in the academic community who like to think that ‘the Republic of Science’ remains one last shining bastion where misconduct is rare, generally low-level and self-correcting, where any serious misconduct is quickly detected by peer review and stopped, and where the risk of being caught and the severe repercussions that follow are such that few researchers are tempted to err (Martin, 2012b). However, the growing incidence of plagiarism (Martin et al., 2007) as well other forms of research misconduct throws all this into question.
…
Occasionally, perhaps because of the pressure of a deadline to produce a conference presentation or to publish a journal article, individuals may succumb to the temptation to engage in outright plagiarism. Fortunately such cases appear to be rare, although there are some indications that serial plagiarisers such as Hans Werner Gottinger† are becoming more common (Martin et al., 2007, p.910, footnote 32). Moreover, by definition we only know about the incidence of detected plagiarism – how much more remains undetected is what the noted American philosopher, Donald Rumsfeld, would term “a known unknown” (quoted in Boardman, 2005, p. 783)

Rather more common, and certainly on the increase, is the phenomenon of ‘salami publishing’. With the growing use of publications as a performance indicator comes escalating pressure to exploit one’s database, survey or study to the full with as many articles as possible. Hence, some authors resort to ‘slicing the salami very thinly’. The resulting papers are often sent to different journals. In some cases, the author may cite the other parallel papers. However, it is very difficult to persuade referees to read not only the paper in question but also the other parallel papers (which may not have been published yet and therefore are difficult to access) in order to establish whether the former represents a sufficiently substantial and original contribution to merit publication in its own right. In other cases, the author simply ‘forgets’ to cite the parallel papers. Sometimes, this may be picked up by a diligent referee. Other times, it may only be discovered after publication, leaving journal editors with a difficult decision as to whether or not that article should be withdrawn or subject to a ‘corrigendum’. In the worst cases, ‘salami publishing’ shades into self-plagiarism, where the author re-uses material from one or more of his earlier publications without drawing the attention of the reader to the existence of the earlier work.

A quick search of the Research Policy database lists only the one Gottinger and two Lichtenthaler articles as retracted thus far. However, I believe that Martin’s passion here (both in his paper and the oral presentation at our conference) is driven by more than just these two serial offenders; if so, there are more shoes yet to drop.

Open Innovation is increasingly used as an instrument to enhance the creation of ideas and the development of solutions in innovation projects. The active integration of external stakeholders into an organisation's innovation processes independently of their institutional affiliation can take different forms – from the generation of ideas and the development of concepts to participation in the realisation of an innovation. … The following list shows suggestions for possible topics to be addressed:

The role of Web 2.0 in interactive value creation

Crowd Sourcing - integrating swarm intelligence in Open Innovation

Integrating mobile applications into Open Innovation activities

Open Innovation and social networks across nations: challenges concerning intellectual property rights

Customer integration via the web: Positive and negative side effects on internal processes

Strategic controlling and the measurement of Open Innovation activities

Please send submissions (in the form of full papers) by November 1st 2012 to: s.nigon@idate.org. Proposals must be submitted in Word format (.doc) and should not exceed 6,500 words.

June 27, 2012

This week’s #oi2012 conference at Imperial College London was a great success, if the comments that I heard and the surveys returned are representative of the attendees at large.

While the conference was organized to help develop papers for our special issue, in retrospect it also served another purpose — perhaps the largest gathering of leading open innovation scholars in one place at one time since the publication of Open Innovation in 2003.

I realize that’s a strong claim to make, but I’ve been racking my brain to come up with a comparable conference in the past nine years. Certainly there were sessions at DRUID or AOM (2004 and 2005 come to mind) with a diverse range of talent. I’ve attended very interesting OI tracks at EURAM and OUI, but not when the big names were in the room.

Obviously this was no coincidence, as we invited a range of recognized scholars to submit their work for the conference. 34 of the 60 attendees were authors or co-authors of one of the plenary papers, with another 8 authors of posters present. (Most of the remainder were affiliated with Imperial or UK~IRC, two of the three conference sponsors). As Frank Piller pointed out to me later, the attraction of the special issue also attracted better quality work than one would normally find at a conference.

The attendance validated our assumptions about where the center of open innovation research is: 56 of the 60 attendees (or 30 of the 34 plenary paper authors) came from Europe. In the original discussions, we considered holding it in Berkeley, but — as predicted — London proved to be the ideal central location, reachable by local train, Eurostar, intra-European and (for four of us) transatlantic flights. (We were disappointed not to have any research from Asia, but hopefully that will be remedied in the special issue submissions.)

The nature of the attendees demonstrates the vibrancy of the field (stream? movement?) of open innovation. The discussion was vigorous and spirited, with authors getting a welter of ideas on how to improve their work. We deliberately allocated half of each presentation slot for discussion, by some combination of the discussant and the audience. Several times I had to bite my tongue — when presenters were arguing with those trying to help improve the paper, rather than quickly thanking them for the suggestions and using the time to solicit further feedback. (We also had good participation at the posters — not as many people hearing each poster, but for some authors, as many people providing feedback as in the plenary session.)

In addition to the four guest editors, strong comments came from Oliver Alexy, Joachim Henkel, John Hagedoorn, Todd Zenger and (on Tuesday) Massimo Colombo and Keld Laursen. (Near the close of the conference, I think the authors were cringing when either Laursen or Salter proceeded to identify the key weakness of the paper). Our objective was not to showcase polished work, but to help the authors achieve the maximal benefit from the conference through feedback that anticipates future reviewer concerns.

That said, I think George von Krogh stole the show. Yes, he’s senior, thoughtful, serious and for a decade has run Europe’s largest research group examining issues of external innovation and communities. Still, I’ve never seen him present any paper on open innovation before, and the title of his paper (“How Firms Formulate Sharable Problems…”) I'm sure left some people scratching their heads — until they heard his presentation, when many of us asked ourselves: “why didn’t someone study this before?”

With 30 papers, I’d be hard pressed to identify a trend. However, some issues came up more than once. One point was that while people agree we need more work on outbound innovation (a point Marcel and I made in our study of inbound OI), several people pointed out that these practices did not begin with OI, but instead tie back at least a decade (e.g., Arora et al., 2001) to the markets-for-technology literature.

At the same time, outbound OI is more than just licensing technology. This is a point Joanne Zhang and Andy Cosh hinted at in their presentation.

In fact, many of us are eager to learn more from Cosh, Zhang and others of the UK~IRC about what they’ve learned from the UK~IRC survey of 1,202 UK companies in manufacturing and services. With a questionnaire more tailored to OI issues, the Cambridge & Imperial papers from the study promise to (at least briefly) supplant studies from the Community Innovation Survey in providing large-scale evidence of OI practices.

While I don’t know that we can repeat it, I know that all of us are grateful — as Ben Martin acknowledged Tuesday — for the contribution of those who came as well as our own success in attracting and organizing the program. That the conference happened at all was due in large part to the tireless and methodical efforts of Maryam Philpott in making it all happen.

No matter how good the conference, in the end what it leaves behind is good memories: like footprints in the sand, they will soon be washed or blown away, forgotten forever (unless you live next to Vesuvius). In our case, we had no volcano, but a scrapbook of my photos from the conference is available on Picasa.

Today the editors met for three hours to discuss the special issue. General and specific guidance will be forthcoming to the authors of the 30 papers to better help them prepare for the August 31 deadline. (Other high-quality submissions are of course welcome.)

Thanks again to all who came. We hope you enjoyed it as much as we did.

June 26, 2012

The nominal reason for this week’s #oi2012 conference at Imperial was to prepare authors and manuscripts for the special issue of Research Policy. (The real reason was to get some of the best open innovation scholars in the world under one roof to teach and learn from each other).

Our VIP guest for the conference was Ben Martin of SPRU, who is lead editor of Research Policy and perhaps the most visible tie remaining to Keith Pavitt, Chris Freeman and the earlier era of SPRU and RP.

In his opening talk Monday, he presented his “20 challenges for innovation studies” (from his paper for the Lundvall festschrift). He didn’t have time to present the trends of publication in RP (which, having hard data, I thought was more practical); instead, he focused on his predictions of future trends, which were (as promised) provocative but at times somewhat implausible.

In a brief talk Tuesday just before lunch, Martin spoke about something nearer to the hearts of the assembled audience of OI researchers — how a Research Policy special issue works. He talked about the guidelines and heuristics RP has developed to get special issues that provide both quality and integration.

After seeing the first day and a half of this week’s conference, Martin then congratulated the prospective authors (and the four guest editors) for what he considered a “model special issue”:

“An important topical theme,” as demonstrated by the interest and discussion

An integration of papers that demonstrates the value of publishing them together in a special issue as being superior to separate publication

June 25, 2012

This morning we kicked off the two-day open innovation workshop (#OI2012) at Imperial College London. The four editors of the special issue were on hand to hear Ben Martin (lead editor of Research Policy) talk about 20 challenges for innovation studies going forward.

We have a very impressive lineup of 22 papers and 8 posters which promise to move the study of open innovation forward. With 60 attendees registered — all interested in open innovation — the two days promise to have a vigorous discussion of the papers and the field going forward.

We are quite pleased with the participation of authors and attendees. The vigorous discussion should help both the individual papers and the overall portfolio of work that we receive in August for the Research Policy special issue.

Below is the program for this week’s conference:

First Plenary Session; Chair: Ammon Salter

Alfonso Gambardella, Claudio Panico: Closed or Open Models? Investigating the Governance of Open Innovation

Massimo Colombo, Evila Piva, Francesco Rentocchini, Cristina Rossi-Lamastra: Collaboration with the Open Source Community and Entrepreneurial Ventures’ Innovation Performance: The Depth and Breadth of Community Knowledge Leveraging

Lance Newey, Stephanie Schleimer: Open but not Dynamic: How Open Innovation Differs as a Dynamic Capability Across Firms

Volker Nestle, Florian Täube: Open Innovation in Clusters – Framework and Empirical Evidence on the Effects of Cluster Management in R&D-Intensive Industries

May 11, 2012

As noted, I’ve been reading a lot of “open innovation” papers recently, in service to a greater good.

In addition to the definition of “innovation,” another area of conceptual fuzziness (if not obfuscation) has been treating “open innovation” and “user innovation” (or “open, distributed innovation”) as if they’re the same thing. They’re not. Eric von Hippel has said so, and I think I know Henry Chesbrough well enough to say he would say so as well.

In 2008, I was fortunate to attend my first user innovation workshops. Over the next three years, I got to know the UI community and Eric von Hippel.

While I’ve published two papers so far about open and user innovation, I think Linus Dahlander is today perhaps the best example of someone who has published significant work in both camps — including 170+ cites for his Research Policy paper two years ago (with David Gann).

I was honored when Dahlander and Gann mentioned my own small part (thus far) in linking these two worlds, first at OUI 2009 and then in their RP paper:

In all, 244 scholars have worked on 150 papers. This figure illustrates that the community is relatively fragmented with a few scholars that have collaborated with several others. There are few bridges connecting teams of researchers with the exception of West/Lakhani, who have connected open innovation researchers with scholars investigating user aspects of open innovation. (Dahlander and Gann, 2010: 702).

I think Dahlander and Gann have done the best job thus far of stepping back and seeing the similarities between these two streams without blurring the differences. At the same time, my earlier work (in less influential journals) makes explicit the differences between the two.

Marcel Bogers and I have also been working on this perspective for a few years, in a paper we posted to SSRN and will be presenting in London. After publication of Dahlander and Gann, we have a higher bar to clear.

May 3, 2012

On Wednesday the four editors of the Research Policy special issue on open innovation notified authors as to whether their paper was accepted for the special issue conference. While it was a day later than promised, we still turned around decisions in 16 days, in time for the accepted authors to make plans to be in London June 25 and 26. (More info will be posted to the conference website.)

We were surprised at the volume of submissions that we received: 78 papers and abstracts. I had personally been expecting about 30-40. This large volume is a testament to the high level of interest in open innovation in the research community.

Given the high demand, we wanted to accept more authors to share their work and participate in the conference. However, there was no way to add plenary papers without extending the conference, and we felt that multiple tracks would undercut our goal of sharing ideas across the entire conference.

Thus, we decided to add a poster session on Monday night, at a prominent time (before the conference dinner) when we expect nearly everyone to attend. With this poster session, more authors won an invite to present their work at the conference and otherwise fully participate in the discussion.

Overall, we accepted 37 plenary papers and posters. We will be posting the schedule once the invited authors have confirmed their intention to participate.

Unfortunately, given the aggressive schedule, we were not able to provide personal feedback for the accepted (or rejected) papers. For each paper or poster presented at the conference, after June 26 one of the four guest editors will provide specific direction based on our earlier reading, the reading of the final paper and the reaction at the conference.

Participating in the conference is not a requirement to be accepted into the special issue. On August 31, we expect to see plenary papers, posters, rejected submissions and brand new submissions. I would not be surprised if we receive even more than 78 submissions, leaving us with tough choices to create a special issue with fewer than 20 papers.

Having been on the other side, I know that being rejected from the conference will come as a great disappointment. Every paper was independently scored by two of the four guest editors and there was a high degree of consistency in the evaluations. While we felt there was a clear divide between the accepted and rejected papers, it's always possible that we made a mistake. However, given the high number of expected submissions, papers that were rejected are unlikely to be more successful unless they are significantly improved.

I want to offer two general pieces of advice to all three groups of authors (accepted, rejected, and new submissions). First, this is a special issue of Research Policy, and nothing will be published that fails to meet Research Policy's rigorous standards. Authors should look at their papers -- or get peer feedback -- to assess whether they meet these standards.

Second, only papers that build upon and contribute to open innovation will be published in the special issue (as opposed to a regular issue of Research Policy). There is certainly room for research that challenges existing thought in open innovation, and in fact engaging the existing research is the expectation for all papers in the issue.

Our ultimate goal for the special issue is to publish a diverse set of contributions to open innovation research with a variety of research designs, approaches and perspectives.

For those that are designing and writing open innovation research, I have my own personal ideas about what constitutes open innovation research. I will continue to post these observations to this blog.

April 29, 2012

Earlier today, Nathan Mattise posted an article on the celebrations at PARC marking the 10th anniversary of its (quasi) spinoff.

Veteran OI researchers know that OI started at PARC, with Henry Chesbrough’s research on Xerox PARC that led to Chesbrough & Rosenbloom (2002) and then his original Open Innovation book. And apparently Chesbrough gave the opening presentation Thursday afternoon at the event entitled “The Power of 10,” celebrating open innovation.

But what really caught my eye was the picture of the live illustration that Heather Willems did for the event. (Live illustrating is a local fad in the Valley, which I find interesting but less concrete than live blogging.)

In the Ars picture, the signboard starts with “The Power of 10” in the stylized PARC design for the event. But at first glance, the signboard says “The Power of Henry Chesbrough.”

I’ve made a few contributions to OI, open source and standards research, but nothing to earn a slogan with my name in it. I’m not sure how (or if) I’m going to get there, but then I don’t think Henry necessarily anticipated this outcome when he started a decade ago.

April 20, 2012

In doing several lit reviews of open innovation, I was struck by how often studies of inbound “open innovation” weren’t about innovation — at least as it has been defined by 40 or 50 years of innovation studies. This sort of sloppiness clouds the interpretation of empirical findings and undermines the cumulative nature of the scientific process. Here is my first cut in this blog at trying to cut through some of this fog.

First, let’s leave aside the question of non-innovative content. Sourcing Wikipedia articles or product reviews from consumers isn’t innovation, any more than newspaper reporters or Consumer Reports are creating innovations. It’s just content. Yes, some text would fall under the “creativity” lit, but writing a tertiary semi-encyclopedia and movie reviews doesn’t seem like it would even fit that category.

However, what seems to be the most common mess is when “innovation” is used as a synonym for “knowledge” or “invention” or other things that tend to be antecedents of innovations.

We know that an “invention” is not an “innovation” — from various sources including Joseph Schumpeter, Chris Freeman, Ed Roberts and Henry Chesbrough. For example, in my AOM conference paper with Marcel Bogers (Bogers and West, 2010: 4) we wrote:

As conceptualized by innovation scholars, the industrial innovation process comprises both a technical component (invention) and also the commercialization of that technology (innovation). Schumpeter (1934: 88) concluded that technical inventions “not carried into practice ... are economically irrelevant,” while Freeman (1982: 7)† argued that “inventions ... do not necessarily lead to technical innovations. In fact the majority do not. An innovation in the economic sense is accomplished only with the first commercial transaction.” …

[Another] definition of innovation … is given by Roberts (2007: 36): “Innovation is composed of two parts: (1) the generation of an idea or invention, and (2) the conversion of that invention into a business or other useful application.”

This very same sentiment is articulated in Chesbrough’s prequel to his open innovation manifesto:

The inherent value of a technology remains latent until it is commercialized in some way. (Chesbrough and Rosenbloom, 2002: 530).

But that’s only part of the mess, which goes beyond the invention vs. innovation distinction. Other inputs include the provision of knowledge, components or complements, as Marcel and I wrote in a paper published earlier this year (Bogers and West, 2012: 62):

Discussions of distributed innovation processes tend to blur the distinctions between innovation and its origins and effects. However, all the firm-centric perspectives consider how firms access external sources of knowledge to supplement their own knowledge as an input to their innovation efforts. …

In some cases, firms will rely on external actors to supply knowledge that serves as an input to creating their own innovations. This includes basic scientific research produced and disseminated through open science processes, knowledge of market needs and demands obtained from customers, or broadcast search used to identify promising avenues for future innovation (David, 1998; Lilien et al., 2002; Jeppesen & Lakhani, 2010).
…
The external innovator may also commercialize his or her innovation in the form of a product that is sold to the focal firm (cf. Shah & Tripsas, 2007). These products may be components or other materials that are integrated by the firm into its own products, as has become the norm in the personal computer industry (Dedrick & Kraemer, 1998). Alternatively, the research and development (R&D) of an equipment supplier is used to produce innovations incorporated in tools purchased by producers, as when domestic machine tools improved the post-war German auto industry. Supplier innovations may thus come in the form of materials, components and equipment; Laursen and Salter (2006) found that suppliers were the most common source of external knowledge for innovation among 2,707 UK manufacturers.

Finally, complementary innovations produced by external participants may be provided directly to users. In some cases, these complementary products are sold by for-profit firms, as is common with third party computer software (West, 2006). In other cases, the complements are provided by individuals, whether in the form of user support (Lakhani & von Hippel, 2003), synthesized musical instruments (Jeppesen & Frederiksen, 2006) or game modifications (West & Gallagher, 2006). While such information, goods or services do not directly involve the firm, they do increase the value of the firm’s products and thus improve its ability to profit from its innovations (cf. Teece, 1986).

Nothing in this discussion is meant to suggest that scholars shouldn’t study the various external sources of inputs that firms use in their innovative efforts. The only point is to draw the distinction between the firm’s innovations and its various antecedents and correlates — just as we distinguish between purchase intention and actual purchase, or market share and profitability.

Similarly, the OI processes can be used to study external sourcing of things other than innovations, as long as the distinctions are clear. The “open source software” model and “open innovation” are not the same thing — even though there are important theoretical and empirical overlaps.

In the same way, nothing here would say we can’t study other processes and draw parallels to open innovation. For example, I think many of the OI processes might apply to nonprofits, even though Chesbrough (2003; Chesbrough and Rosenbloom, 2002) requires alignment to business models. (Perhaps someone should first try to extend the concept of a revenue or business model to nonprofits or even government agencies.)

Also, sometimes we can’t measure innovation process directly, but we can measure something else: patents. That’s why thousands of papers use patents as a proxy for “innovation” when (per Schumpeter, Freeman, etc.) we know they are just inventions. It would be foolish of me to suggest that such patent studies should (or would) go away, but we still need to remind ourselves that the output of a technical invention process is only imperfectly correlated (even at the most efficient firm) with a firm’s output of technological innovations.

† Although Chris Freeman’s (1982) original edition is out of print, the identical point is made by Freeman and Soete (1997: 6).