December 18, 2013

On page 822 of its December issue, the Academy of Management Journal announced its retraction of a 2009 article by Ulrich Lichtenthaler:

This article has been retracted at the request of the Editor-in-Chief of Academy of Management Journal.

Formal investigations by the Academy of Management Journal and an affiliated university of Professor Ulrich Lichtenthaler have revealed ethical violations in research practices. Those violations center on the larger data collection effort that forms the foundation for this article as well as the empirics and reporting in the article itself. Independent re-analysis has been unable to replicate the findings as reported in this article and other journals have retracted other published pieces from the larger data collection effort.

With this retraction, Lichtenthaler's total now stands at 13 retracted articles.

The "affiliated university” is not specified, but would presumably be WHU (his PhD institution) — which announced its own sanctions last summer — as opposed to Mannheim (his current employer) which seems to have said nothing.

With the AMJ retraction, five journals have yet to retract their suspect papers. That AMJ acted while the others have not suggests either a more rigorous vetting process or higher standards at AMJ than at the other journals.

I received four notifications this morning: two personal emails, a Google news watch and a Facebook posting — all referring to this morning’s Retraction Watch article by Ivan Oransky. Oransky said the AMJ article has 97 citations in Web of Knowledge, which would make it one of his most cited articles.

Fortunately, the article was on my suspect list. This means it is not cited in our new edited book, nor in our open innovation special issue.

Bibliography

The full list of Lichtenthaler (or Holger Ernst) retracted papers (not including the three withdrawn papers):

Lichtenthaler, Ulrich (2008). “Externally commercializing technology assets: An examination of different process stages,” Journal of Business Venturing, 23 (4): 445-464. doi: 10.1016/j.jbusvent.2007.06.002 (Retracted by the editor and author, November 2012)

Ernst, Holger, James G. Conley and Nils Omland (2012). “How to create commercial value from patents: The role of patent management,” Research Policy, published online 21 May 2012. doi: 10.1016/j.respol.2012.04.012 (Retracted by the authors and editor prior to print publication, February 2013)

November 4, 2013

This morning I got an email from the Journal of Product Innovation Management that they have typeset and published online our lit review on inbound open innovation. The West & Bogers article was accepted in December 2012, and (as best we can tell) will appear in July 2014. Editor Gloria Barczak and (accepting editor) Tony Di Benedetto have been working hard to clear the journal’s considerable backlog.

Other than a few corrections introduced with the typesetting, there’s one change from the version posted to SSRN (and already cited): we changed Table 3 to Figure 2. In summarizing the paper for ACAC last May, I found that I had trouble understanding the table (that I’d made) and found the Venn diagram approach much less ambiguous: hence the new Figure (designed in color for my ACAC talk, but printed in B&W in the journal).

In discussing the paper with authors of other OI works — and reading how they use it — it seems that interest centers on three topics. One is the summary of the 165 open innovation papers (listed above).

The second is the process model, separating how firms handle their use of external innovation into four phases. Those phases are summarized below (as excerpted from Table 4):

(Phase — Category: Open innovation topics)

1. Obtaining
   · Searching: Sourcing; Technology scouts; Limits
   · Enabling/Filtering: Brokerage; Contests; Intermediaries; Toolkits; Platforms; Gatekeepers
   · Acquiring: Incentives to share; Contracting; Nature of the innovation
2. Integrating: Absorptive capacity; Culture and “Not Invented Here”; Incentives to cooperate; Competencies
3. Commercializing: Commercialization process; Value creation; Value capture
4. Interaction
   · Feedback: R&D feedback; Customer/market feedback
   · Reciprocal: Co-creation; Communities; Value networks

The third is the conclusion that research is particularly light on the second and third phases: how external innovations get into firms, and how these innovations are commercialized differently from (or similarly to) internal innovations. We are already seeing research agendas influenced by the latter findings.

This version of the article also has the online appendix, which lists all 151 inbound or coupled articles used in preparing this literature review. We asked JPIM to let us publish this online (which is not something we’d seen in this particular journal before). We felt this was important to share with future researchers, both so they would know what inbound (and coupled) literature was written during the period in review (2003-2010), and so everyone can see which research we classified as falling in these categories.

We started working on the paper in June 2010: it’s hard to explain to a non-academic why it will be more than four years from when we started the project to when it was published (some of that, of course, being due to deficiencies in the early drafts). Still, it’s gratifying to have the paper out there and being read by those we hope will find it valuable.

Reference

West, Joel and Bogers, Marcel, “Leveraging External Sources of Innovation: A Review of Research on Open Innovation,” Journal of Product Innovation Management, 31, 4 (July 2014): 814-831. DOI: 10.1111/jpim.12125, available on SSRN at http://ssrn.com/abstract=2195675

October 28, 2013

There's a webinar this Wednesday on improving best practices in OI in companies. The invitation came in an email from my friend and co-author Wim Vanhaverbeke. It’s hard to imagine that any reader of this blog is not on Wim’s email list, but for the sake of Google® searches, I thought it was important to summarize and link to the post.

The October 30 webinar will talk about Project MOOI, which is described as

MOOI – a beautifully ambitious OI best-practice project
In the last decade Open Innovation (OI) has become part of the daily operations of many companies in different sectors of industry. Despite the soaring popularity of OI practices, only few companies succeed in optimally preparing their internal organization for their OI endeavors and thus make effective use of the many opportunities OI has to offer.

Our experience is that finding best practices and good advice is a daunting exercise. We therefore launch a major community based initiative to gather, structure, and evaluate publicly available information on the best OI practices in companies.

The webinar will talk about how managers and other innovation professionals will benefit by joining the community and pooling their best practices.

The presentation will be given by Henry Chesbrough (UC Berkeley & ESADE), Wim Vanhaverbeke (Hasselt, ESADE & NUS), and Nadine Roijakkers (Hasselt). It starts at 4pm CET, 3pm UK, 11am EDT and 8am PDT, and will last 50 minutes. For more information, see the notice on InnovationManagement.se.

October 6, 2013

The Guardian published a story of a phony article submitted to 304 open access journals in less than a year:

The paper, which described a simple test of whether cancer cells grow more slowly in a test tube when treated with increasing concentrations of a molecule, had "fatal flaws" and used fabricated authors and universities with African affiliated names, [John] Bohannon revealed in Science magazine.

He wrote: "Any reviewer with more than a high-school knowledge of chemistry and the ability to understand a basic data plot should have spotted the paper's shortcomings immediately. Its experiments are so hopelessly flawed that the results are meaningless."
Bohannon, who wrote the paper, submitted around 10 articles per week to open access journals that use the 'gold' open access route, which requires the author to pay a fee if the paper is published.

The "wonder drug paper" as he calls it, was accepted by 157 of the journals and rejected by 98. Of the 255 versions that went through the entire editing process to either acceptance or rejection, 60% did not undergo peer review. Of the 106 journals that did conduct peer review, 70% accepted the paper.

The Guardian story is based on a paper by John Bohannon, a correspondent for Science.
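Out of curiosity, the quoted figures can be checked for internal consistency. The arithmetic below is mine, not Bohannon's or the Guardian's; it suggests the "60%" is a rounding of 149/255 ≈ 58%.

```python
# Sanity-checking the figures quoted from the Guardian/Science story.
# The raw counts are from the article; the derived numbers are my arithmetic.
accepted, rejected = 157, 98
total_decided = accepted + rejected      # 255 versions reaching a decision
peer_reviewed = 106                      # journals that did conduct peer review

skipped_review = total_decided - peer_reviewed
print(f"{skipped_review}/{total_decided} = "
      f"{skipped_review / total_decided:.0%} skipped peer review")
# -> 149/255 = 58%, which the story reports as roughly 60%

accepted_after_review = round(0.70 * peer_reviewed)
print(f"~{accepted_after_review} journals accepted the paper despite conducting review")
# -> ~74
```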

Of course, Science is to open access journals what the (late great) Encyclopedia Britannica is to Wikipedia: it’s not exactly a neutral party in the conflict between open access and proprietary publication business models. (And since Science published one of the 50+ fraudulent articles by social psychologist Diederik Stapel, it can hardly be considered above reproach on such matters.)

Still, the ease by which Bohannon generated 150+ future retractions suggests that we academics will be accessing even more low quality information via Google Scholar — even without the massive scale of a Gottinger or his recent successors.

September 26, 2013

It was everything a small conference (~55 attendees) should be: a concentration of specialized expertise, a single track, plenty of time for discussing each paper, a chance to meet any participant. In short, it was everything that the Academy of Management (with its 10,000 attendees and 12-minute presentations) is not.

It’s always interesting to learn what others are doing in their open innovation research. Letizia Mortara summarized the sizable ongoing research program at the Institute for Manufacturing at Cambridge, which focuses on how firms are actually using open innovation. With Tim Minshall, Mortara has authored a number of papers that provide important new insights into adoption, implementation and adaptation of open innovation by companies. IMHO, one of the most interesting is their chapter (Chapter 12) in the forthcoming Chesbrough-Vanhaverbeke-West OI book from Oxford.

In addition to the keynote and my own paper, I was also tasked with summarizing the future of open innovation. Probably no one was surprised that I plugged both the 2014 Research Policy special issue (with Chesbrough, Salter and Vanhaverbeke) and the forthcoming Oxford book.

However, in my closing comments, I also sought to classify the research presented at the conference using three typologies:

The trend (i.e. most popular alternative) was pretty much as I would have predicted. The one encouraging exception was that there were three papers about results, i.e. papers that measure the outcomes of open innovation.

At the end, we were all grateful to the School of Management and our organizers, Felicia Fai & Anthony Roath, for putting on such a productive conference. (Most of us would also thank the Italian invaders for building such a durable Roman structure in Bath during the first millennium.)

Keynote speakers Letizia Mortara and Teppo Felin at the
conference dinner reception. Photo by Joel West

September 22, 2013

Over the last year, the Lichtenthaler retraction scandal (and its ramifications for our field) has tended to come up as a conference mealtime discussion topic — at least when I'm at a conference with European innovation scholars. Last week’s open innovation workshop in Bath was no exception. However, unlike at most conferences, the topic also boiled out into the open during a plenary discussion.

In my opening keynote, I had mentioned the opportunity for more outbound open innovation research, given that half of the Lichtenthaler retractions were on this topic. This was news to some people. Although the scandal is well known among German business academics and open innovation scholars, it turns out there were a few attendees who hadn’t heard about the 13 retracted articles by Ulrich Lichtenthaler and his former Habilitation supervisor Holger Ernst, nor the three articles that were accepted but withdrawn prior to online publication.

At the closing session, a few doctoral students and faculty asked about the new rules in the post-Lichtenthaler world. Here let me offer an assessment of what it means and also some thoughts on what’s next (or what’s left).

Research Policy and its Standards

As it turns out, last month students at one doctoral consortium at the Academy in Orlando read June’s editorial by Research Policy editor Ben Martin. The abstract summarizes the problem facing the journal and innovation studies more broadly:

This extended editorial asks whether peer-review is continuing to operate effectively in policing research misconduct in the academic world. It explores the mounting problems encountered by editors of journals such as Research Policy (RP) in dealing with research misconduct. Misconduct can take a variety of forms. Among the most serious are plagiarism and data fabrication or falsification, although fortunately these still seem to be relatively rare. More common are problems involving redundant publication and self-plagiarism, where the boundary between acceptable behavior (attempting to exploit the results of one’s research as fully and widely as possible) and unacceptable behavior (in particular, misleading the reader as to the originality of one’s publications) is rather indistinct and open to interpretation. With the aid of a number of case-studies, this editorial tries to set out clearly where RP Editors regard that boundary as lying.

On the first page, Martin provides the broader context:

[W]e know the pressures of academic competition are rising, whether for tenure, research funds, promotion or status, which may mean that more researchers are tempted to cut corners… The use of performance indicators based on publications, citations, impact factors and the like may also be adding to the temptation to stray from previous conventions regarding what constitutes appropriate research behavior or to attempt to surreptitiously ‘stretch’ the boundary between appropriate and inappropriate behavior. …

There are worrying signs that research misconduct is on the increase. The number of retractions of published papers by journals has increased more than 10-fold in a single decade – from around 30 a year in the early 2000s to some 400 in 2011. … Moreover, the majority of retractions are seemingly the consequence of research misconduct rather than simple error.
…
With regard to the particular problem of self-plagiarism and related activities described below, the number of academic articles referring to ‘self-plagiarism’, ‘salami publishing’, ‘redundant publication’ or ‘duplicate publication’ has risen nearly five-fold from 170 in 2000 to 820 in 2012. More and more editorials are appearing in which journal editors complain about the growing burden being imposed on them as they attempt to detect, evaluate and sanction research misconduct in its various forms.

Martin noted that the journal faced only an “occasional” problem of research misconduct until 2007, when it stumbled across the scandal of a dozen or more plagiarized articles published by Hans Gottinger†. (The scandal was jointly investigated by Research Policy and Nature.)

After listing various retracted and withdrawn articles, the sixth page of Martin’s editorial refers to the Lichtenthaler case (emphasis mine):

More recently, an even more complicated case was brought to the attention of RP Editors by two individuals who independently had been asked to review papers by the same author (a professor at a European university) submitted to two other journals. They discovered that the author concerned had published an astonishing total of over 60 journal articles since 2004. Since this number was too great to handle, the two reviewers concentrated their attention on 15 articles published in leading journals over the period 2007–2010 (including three published in Research Policy), all of which formed part of a single stream of research emerging from a survey of over 100 firms in Europe that the author had conducted. They found that in these papers, similar analyses had been carried out with differing combinations from a large set of variables (sometimes relabeled, to add to the confusion) with no references to indicate the prior papers the author had already produced on the same broad theme. Moreover, in some cases, a given variable was treated as a dependent variable and in others as an independent variable. Perhaps more worryingly, variables that were demonstrated to be significant in some papers were then (presumably deliberately) omitted in the analysis reported in other papers. The author was asked for an explanation. This explanation was deemed unsatisfactory by the RP Editors, with the result that two of the RP papers[31] had to be formally retracted.

[Footnote 31: At a late stage in the investigation, it also became apparent that in one of these RP papers the degree of statistical significance of several of the claimed findings had been misreported or exaggerated. Whether this was simply the result of ‘accidental’ mistakes, as the author claimed, is unclear. However, the fact that similar problems have since been confirmed in several other papers by this author makes this less plausible as an explanation.]

The workshop participants asked what the new rules are for multiple publications from the same data. How does one avoid self-plagiarism and salami slicing? I think Martin and RP have spelled out these rules more clearly than anyone else. Drawing from his editorial, his public comments and my current experience as an RP guest editor, let me paraphrase them as two guidelines.

First, would a reasonable reviewer (or reader) conclude that this article deserves publication if all previous or parallel articles were visible at the same time? Second, does it appear that the authors have withheld from the editor (if not the blinded manuscript) full disclosure of all related work? As Martin concluded:

Failure to provide all pertinent information in the full version implies a premeditated attempt by the author(s) to deceive the journal as to the level of originality of the paper. As such, it represents grounds for the summary rejection of a paper.

† Martin’s editorial doesn’t mention the names of any transgressors. When I asked him about it, he said it was because some of the journal’s actions were public (i.e. retracted articles) and some were not (rejected articles); since he couldn’t list all the names, he decided to list none of them.

Further Sanctions

The Lichtenthaler story seems to be winding down. With his Habilitation and Lehrbefähigung withdrawn by WHU earlier this month, what remains are the results of the investigation by Mannheim, his current employer. The university issued a brief press release in July; as with the WHU announcement, it was issued only in German, so here is my composite (computer-aided) translation:

Allegations of scientific misconduct against Professor Dr. Ulrich Lichtenthaler: University examines the way forward
Press release of 31 July 2013

Now that the Permanent Commission for the Investigation of Allegations of Scientific Misconduct at the University of Mannheim has submitted its final report on the allegations of scientific misconduct against Prof. Dr. Ulrich Lichtenthaler, the university is considering the commission’s report.

"For legal reasons, in particular allowing for the right to fairness to Prof. Dr. Ulrich Lichtenthaler, I cannot yet make any statement about the contents of the 170 page report or any possible consequences" said the Rector of the University Mannheim, Prof. Dr. Ernst-Ludwig von Thadden. The report was given to Prof. Dr. Lichtenthaler. "The public has a legitimate interest in full disclosure of the allegations and will be informed of further steps by to the extent legally possible," said the rector of the university.

After the Rector of the University of Mannheim received allegations of scientific misconduct against Dr. Lichtenthaler in the summer of 2012, the university’s responsible commission of inquiry was immediately convened. Since 24 July 2012, the Commission’s members have devoted considerable effort to dealing with the allegations against Prof. Lichtenthaler. Among other things, the Commission commissioned external reports, interviewed respondents and performed its own extensive evaluations. By letter of 21 July 2013, the Commission sent its final report to the Rector. This completes the work of the Commission in accordance with no. 4.3 of the guidelines of the University of Mannheim for safeguarding good scientific practice. The Rector will now decide on any further action.

Further Retractions

That’s not to say that the retractions are over. They are still trickling in; Martin notes the varying degrees of concern (if not integrity) shown by the editors of the affected journals:

In the case of the extensive self-plagiarism by the German author, other journals were slow to react when alerted to the problem, and in at least one case, the eventual retraction of an article by this author was justified rather vaguely in terms of ‘data problems’ rather than giving details of the specific form of misconduct involved.

According to someone who’s read the various Lichtenthaler articles, five articles have problems comparable to those of the retracted articles, but are at journals that have not yet announced any decision regarding Lichtenthaler’s publications.

A sixth article at a top journal has been investigated, but the results (and any corrective action) have yet to be announced.

What happens after such retractions? It appears that the scientific field partially (but not entirely) self-corrects on retracted articles. As Jeff Furman, Kyle Jensen and Fiona Murray reported in their 2012 Research Policy study of retracted medical research:

Our core results [imply] that annual citations of an article drop by 65% following retraction, controlling for article age, calendar year, and a fixed article citation effect. … The effect of retraction does appear to be stronger in the most recent decade than in prior decades, although the large, statistically significant impact of retractions on future citations does not appear to be only induced by modern IT. The results … suggest that papers retracted between 1972 and 1999 experienced a 63% decline in citations after retraction, while those retracted since 2000 experienced a 69% decline in citations.

One would hope that in the future, after a retraction any subsequent citations would rapidly decline to zero. Fortunately, online publication (unlike dusty library print collections) allows prominent marking of the retraction status for a previously published article.

Ben Martin, “Whither research integrity? Plagiarism, self-plagiarism and coercive citation in an age of research assessment,” Research Policy 42, 6 (June 2013): 1005-1014. doi: 10.1016/j.respol.2013.03.011

September 19, 2013

Today and tomorrow I'm at the University of Bath for a two day workshop entitled “Strategizing Open Innovation.” The event was organized and hosted by the Strategy and Innovation Management Group within the School of Management, and included four keynotes and an open call for papers.

The first keynote came from California’s second most famous open innovation scholar. I have uploaded my slides to SlideShare, and will talk more about the content another time. Tomorrow, two participants from the June 2012 open innovation workshop in London — Teppo Felin (Oxford) and Letizia Mortara (Cambridge) — will offer their own insights into open innovation.

And then there was the second keynote this morning by Richard Whittington of Oxford, which was not about open innovation but about open strategy. It spurred a rather vigorous discussion.

If we are to make strategic sense of innovation communities, ecosystems, networks, and their implications for competitive advantage, we need a new approach to strategy—what we call “open strategy.”

Open strategy balances the tenets of traditional business strategy with the promise of open innovation. It embraces the benefits of openness as a means of expanding value creation for organizations. It places certain limits on traditional business models when those limits are necessary to foster greater adoption of an innovation approach. Open strategy also introduces new business models based on invention and coordination undertaken within a community of innovators. At the same time, though, open strategy is realistic about the need to sustain open innovation approaches over time. Sustaining a business model requires a means to capture a portion of the value created from innovation. Effective open strategy will balance value capture and value creation, instead of losing sight of value capture during the pursuit of innovation. Open strategy is an important approach for those who wish to lead through innovation.

The focus this morning was on openness not as an antecedent of open innovation, but as a property of organizations with permeable boundaries (open systems). In fact, during the discussion Felin cited Dick Scott’s Organizations: Rational, Natural and Open Systems (1981), while Whittington cited as an influence Karl Popper’s The Open Society and Its Enemies (1945).

Many of the ideas about openness are ones I’ve published in the past, such as firms deliberately choosing openness (“How open is open enough?”) and selective degrees of openness (“…Open standards: Black, white and many shades of gray”). (The work of Joachim Henkel is also very relevant here). This is not to claim “I said that first,” but merely that elements have been out there before and the author (as would any author) bears the obligation to demonstrate that this framework tells us something new and interesting.

However, I am a little concerned that this is yet another example of “open” being used as magic pixie dust which can be sprinkled on anything to make it special. Whittington explicitly disclaimed any suggestion that openness is necessarily better — as when Apple has lost its ability to keep product introductions secret due to supplier leaks. Still, ceteris paribus, calling something "open" strategy implies that the "open" approach is better than the "closed" one.

At its meeting on 11 September 2013, the Senate of the WHU - Otto Beisheim School of Management unanimously decided to withdraw the teaching qualifications [Lehrbefähigung] that Professor Dr. Ulrich Lichtenthaler gained at WHU. The withdrawal was preceded by an intensive investigation into the allegations of scientific misconduct, which had the goal of producing a complete investigation.

After a thorough examination and discussion, the Senate of WHU has come to the conclusion that an essential condition for the granting of the teaching certificate was not met. Prof. Lichtenthaler may appeal the decision.

Course of the Procedure

After the Dean of WHU learned in summer 2012 of statistical defects and other scientific shortcomings in the work of Prof. Lichtenthaler, these were investigated in detail. The standing commission for safeguarding good scientific practice at WHU presented its final report to the Dean of WHU on June 13, 2013, after a thorough examination of the scientific works of Professor Dr. Lichtenthaler. The report was the basis of the examination by the Senate, which began on June 20 and on September 11 led to the decision to withdraw the teaching certificate. The decisions are based on the principles and rules of procedure of the WHU for the handling of scientific misconduct and the habilitation procedure.

For those of us outside Germany, Wikipedia helpfully explains that the Habilitation is a post-doctoral examination (in German-speaking Europe) that is the prerequisite for the Lehrbefähigung (teaching certificate). I don’t know what normally happens to a professor who used to have a Lehrbefähigung but no longer has one — since I imagine this doesn’t happen very often.

The outcome is a validation of the faith that many of us had that the system would eventually confront the serious charges here and not sweep them under the rug. It appears that the desire of the WHU faculty to distance themselves from their tainted alumnus outweighed any desire to cover up or explain away the problem.

The decision appears not to impact the PhD that Dr. Lichtenthaler earned at WHU. It is unclear what impact it will have upon the investigation by his current employer, University of Mannheim, and his chaired professorship. I’m told that it’s very difficult to fire or demote a German professor because of civil service rules.

I am concerned that administrative punishments might also short-circuit the remaining (and long overdue) investigations into some questionable papers that have been published. R&D Management has published 5 articles and retracted none, so it’s hard to imagine that his integrity batting average was 100% at this journal when it was 50% (or 0%) at other journals.

According to someone who closely examined Dr. Lichtenthaler’s entire publication output, five articles remain that have the same level of problems as the 12 retracted articles, including a 2009 article at R&D Management and articles at Organization Studies and Entrepreneurship Theory & Practice. However, there isn’t much transparency as to whether these journals are investigating these problem articles or plan to do so.

As someone co-editing a book and a special issue of a journal on open innovation, this uncertainty is a problem for our field. Of this prolific output, what can we cite and what can’t we cite? Some individuals are erring on the side of caution, but others are continuing to cite the non-retracted articles on the assumption that they are as valid as any other in that journal. What if these articles are retracted later? What if they are seriously flawed — to the point that they never should have been published — but are never retracted? What does this mean for the integrity of our field and the lessons that today’s doctoral students will draw for their own careers?

July 18, 2013

After 2½ days at #oui2013 at the University of Brighton, user innovation (and a few open innovation) researchers have scattered to the winds. It’s hard to summarize 52 hours in one post, but I’ll give it a try.

The location was great, host Steve Flowers was very gracious, and it was quite handy to be across the motorway from the SPRU thought leaders.

What did I learn? I learned quite a bit about being at the beach on a summer weekend near London, and also how expensive it is to host 140 people (or 200 for the US workshops) with all meals for three days. I learned a lot about my own work on firm external collaborations (more in another post).

Below are random thoughts about what I learned about UI topics; apologies to any of the presenters if I mangled the story from their talks. I've also included a few of the roughly 80 photos of the conference, which I have posted for general viewing on Picasa.

Diffusion of User Innovations

One of the big topics at the conference is what happens after users innovate. Diffusion of such innovations was a major focus of the talk by Jeroen de Jong, a self-described “part of Eric’s mercenary army.” He started out with the amusing video by Martin Aslander, who created a wooden iPad stand that was featured in Wired and then drew an explosion of interest on his website.

Based on a large-scale survey in Finland, users disseminate only 19% of their innovations. One reason users might not share their innovations is that they’re too narrow or individualistic to be of value to other users; yet in the Finnish survey, 85% of these innovations are thought (by the inventor) to be of general value. Of these 85%, 22% were diffused: 6% by transfer to commercial firms, and 16% via peer-to-peer diffusion.
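The figures fit together neatly; here is a quick check (the way the numbers combine is my inference, not something spelled out in the talk):

```python
# Reconciling the diffusion figures from de Jong's Finnish survey.
general_value = 0.85    # share of user innovations the inventor deems generally valuable
diffused_share = 0.22   # share of those that actually diffused
to_firms = 0.06         # ... by transfer to commercial firms
peer_to_peer = 0.16     # ... via peer-to-peer diffusion

assert round(to_firms + peer_to_peer, 2) == diffused_share  # 6% + 16% = 22%

overall = general_value * diffused_share
print(f"overall diffusion: {overall:.0%}")  # -> 19%, matching the headline figure
```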

This is a very useful framework: peer diffusion vs. commercial diffusion. The one thing missing is subdividing commercial diffusion between existing firms and new firms. The latter would include both user entrepreneurship and non-user entrepreneurship. So I propose this typology:

Non-diffusion

Non-commercial (social) diffusion

Commercialization by an existing firm

Commercialization by a new (unrelated) firm

Commercialization by the user entrepreneur

De Jong’s approach is an important step forward in defining the impact of user innovation, and I hope this classification approach will take root.

In a minitalk Wednesday, Christoph Stockstrom of TUHH looked at a particular example of user innovation in services — surgical procedures. As we know from previous work, user innovation is common by healthcare providers, but how does it diffuse? Yes, the attitude of the user innovator has a big impact on diffusion, but what’s also crucial is the involvement of the initial round of early adopters, who (for medical innovations) become trainers to interpret and diffuse the innovation to others.

Rational Non-Diffusion

Ben Martin (R) listens to Manabu Mizuno present his minitalk

On the final day Wednesday, there were a lot of interesting 2-minute minitalks — too many to follow up by attending the full talk. While I decided to focus on people (and work) that I know, several talks seemed worthy of mentioning.

Two of these minitalks were from Japan. The most fun minitalk all day was by Manabu Mizuno of Hannan University. The example he showed was of a “bird deterrent device” (think 21st century scarecrow), based on people hanging used CDs. (From prior bird deterrent efforts, I assume the reflections off the CD scare the birds away).

It was a very simple user invention, but Mizuno asked: how would you commercialize it? First, can a simple hanging CD be turned into a real product, such as a vest of hanging CDs? Second, if it’s such a simple idea — one that can’t be patented and can’t be kept a secret — how can firms appropriate the returns from such innovation, and if not, how will the innovation be diffused?

Self Organization

Another Japan-oriented talk was by Peter Svensson and colleagues. He talked about how — after Fukushima — citizens and scientists self-organized to improvise low-cost real-time radiation measurement equipment. Instead of waiting for commercial equipment to be developed, the urgency of monitoring radiation levels for public health prompted volunteers to adapt technologies to solve the problem at hand.

To me, the Safecast.jp network provides a fascinating example of a) how an exogenous shock stimulated concerted user innovation and b) how the network self-organized to harness that enthusiasm.

This seems like a wonderful example of a research design that allows us to study the early phase of coordination in user innovation communities.

Rational Irrationality and Randomness

The opening plenary talk by Nik Franke Tuesday asked “Are users really Spocks?”, summarizing his two 2010 and one 2013 JPIM articles and a 2013 Organization Science article. The graphical illustrations (using the Star Trek character) brought several rounds of laughter, but unfortunately from my vantage point I couldn’t get a good picture.

If I understood the argument, it was that there are more reasons why users innovate (and share innovations) than just the personal “functional benefit.” One reason is a sense of accomplishment; this is old hat to open source researchers, and also something (from two 2010 JPIM articles) Nik mentioned in his plenary in Vienna two years ago.

His Org Science article focuses on the role of affect in the decision to share — something that seems to be assumed away in the stylized stories of altruistic “free revealing” that have been told in the past. I love the title of the latest paper: “Does this sound like a fair deal?”

Franke also had my favorite title of the conference, “Does God play dice?” After doing a study with a €50,000 prize to find optimal solutions, Franke and colleagues did a random sampling of subsets of the participants to see what the results would have been if not everyone had participated. It turns out that 22 factors that predict community success explained 11% of the variance — while community size (the # of participants from the draw) explained 60% of the variance.

This both confirms what we know about the importance of attracting participation — and also (as Franke noted) is a humbling check on our attempts to ascribe success to causal mechanisms.

July 15, 2013

Today at #oui2013 at the University of Brighton, the final session featured three journal editors from SPRU on the other side of the A23 motorway: Ben Martin (co-editor of Research Policy), Joe Tidd (managing editor of International Journal of Innovation Management) and Paul Nightingale (co-editor of Industrial and Corporate Change).

The three editors explained their respective journals and topical interests, and gave doctoral students and others unfamiliar with the journal advice on how to avoid a desk reject. (Tip #1: only send papers that fit the journal’s stated scope).

There were no questions during Q&A, so I asked the question that has been on the mind of many experienced innovation scholars: “How is the journal process changing in the light of high profile retractions?” The answer revealed that trying to avoid a repeat of the recent embarrassing and systematic fraud has already created a drag on the innovation publishing system.

Martin (two retractions a year ago) said that academic misconduct had no impact on his workload 6-8 years ago, but now requires one day a week. The journal is both trying to design new processes to prevent fraudulent articles from getting through, and to follow up on the suspicions of authors and editors.

Meanwhile, Nightingale said that academic fraud (presumably the May 2013 retraction) cost him two months to investigate (“I don’t have two months”). Facing the threat of litigation over the retractions, Oxford University Press provided legal defense.

Essay on Research Integrity

During the coffee break, Martin informed me of the recent publication of his editorial on academic integrity (Martin, 2013) — an essay that I read in draft form last year. By my count, the essay lists 4 examples of authors attempting plagiarism and 12 cases of self-plagiarism.

The final paragraph in the latter section provided the most complex example of self-plagiarism in the essay:

More recently, an even more complicated case was brought to the attention of RP Editors by two individuals who independently had been asked to review papers by the same author (a professor at a European university) submitted to two other journals. They discovered that the author concerned had published an astonishing total of over 60 journal articles since 2004. Since this number was too great to handle, the two reviewers concentrated their attention on 15 articles published in leading journals over the period 2007–2010 (including three published in Research Policy), all of which formed part of a single stream of research emerging from a survey of over 100 firms in Europe that the author had conducted. They found that in these papers, similar analyses had been carried out with differing combinations from a large set of variables (sometimes relabelled, to add to the confusion) with no references to indicate the prior papers the author had already produced on the same broad theme. Moreover, in some cases, a given variable was treated as a dependent variable and in others as an independent variable. Perhaps more worryingly, variables that were demonstrated to be significant in some papers were then (presumably deliberately) omitted in the analysis reported in other papers. The author was asked for an explanation. This explanation was deemed unsatisfactory by the RP Editors, with the result that two of the RP papers had to be formally retracted.

One OI scholar said the policy of many journals — rejecting plagiarized papers — paralleled the lax 1980s policy of the Budapest subway system: if you were caught failing to pay for a subway ride, the consequence was merely the price of a subway ride. Without severe penalties, there was no deterrence for misconduct. (In contrast, Martin 2013 mentions examples of attempted plagiarists told by RP not to submit any paper for 1 or 2 years.)

Another OI scholar saw the recent raft of retractions as being to innovation studies what the American cyclist Lance Armstrong was to professional cycling. The cheating has become such a huge scandal precisely because it featured one of Europe’s most successful young innovation scholars. Using banned substances to gain unfair advantage seems like an apt metaphor for cheating in academic research papers.

July 14, 2013

I’m now in Brighton, the 19th century English seaside resort, awaiting the start of the 11th Open and User Innovation Workshop, which starts at 9 a.m. Monday morning at the University of Brighton. While I’m not excited about jet lag (or the drunken Englishmen who party outside my beach-area hotel), I am excited to be attending what I consider the premier venue for discussing innovation beyond the firm.

This will be my sixth consecutive OUI (née UOI) conference since I started attending in 2008, and nowadays I attend this conference in preference to the much larger (and less focused) Academy of Management, which I typically attend in alternate years. As in most years (and most conferences), when attending sessions I’m torn between hearing friends, learning something new, and monitoring related work upstream (which I might cite), downstream (which cites me), and directly competing.

According to my analysis of the program, there are 98 papers in 20 sessions across 2½ days. Twelve of these are in 3 sessions titled “open innovation,” while “firms and users” (from a UI perspective) account for another 16 papers across 3 sessions.

I won’t be presenting in any of those, but instead in one of 3 sessions (15 papers) related to crowdsourcing (not counting an additional 6 papers in a 4th session on crowdfunding). (My paper with Frank Piller bridges user innovation, open innovation and co-creation in developing a model for firm and user collaboration.)

As noted, I’d like to monitor topics that I’m actively researching, particularly two papers first presented last year at OUI 2012. One topic is health innovation, which has 7 papers in a session directly competing with the West-Piller paper on Tuesday. The other is community — both a topic from last year and something I’ve researched for almost a decade — which is indirectly represented by 4 papers on open source and 5 papers on community motivation.

Of course, the major topics of user innovation (including lead users and toolkits) are still represented. What’s disappointing is a decline in user entrepreneurship (only 2 papers) which I hope does not represent a broader decline of interest in the topic.

Letizia Mortara, Institute for Manufacturing, University of Cambridge, UK

Not listed is Bath’s newest faculty member, widely cited OI scholar Ammon Salter, who joins the school this fall.

Although the keynote speakers have been selected, the organizers are waiting for submissions to determine the rest of the program:

We invite submissions that relate to the adoption of open innovation (OI) practices and their implications for strategic development. This may include (non-exclusively or non-exhaustively): adjustments in managerial mindsets, the creation of new resources, the development of new or adjustment of existing capabilities, the creation of new business models, structures and strategies for OI.

To signal your interest, please send a summary of your idea and how it contributes to the workshop theme in no more than 500 words. A maximum of 30 submissions will be selected for discussion at the workshop.

July 5, 2013

Those who follow the rest of the OpenInnovation.net website will have seen last week’s article announcing the results of the 2013 executive survey on open innovation. But for those who didn’t — or who didn’t open the report — I thought I’d summarize a few highlights.

The goal of the study was to understand the nature and degree of firms’ support for open innovation. Who did they study?

To perform the first quantitative study on open innovation in large firms we emailed our survey on open innovation to senior executives at the headquarters of more than 2,840 large and stock market listed firms. Our sample included all large companies in Europe and US, with revenues annually in excess of US$ 250 million and more than 1,000 employees. This sampling frame was drawn to fill a gap as there is no quantitative study on open innovation in very large firms. We received usable survey responses from 125 firms in November and December 2012. We sent the survey to at least one contact person at the company headquarters. Our primary contact was the Chief Executive Officer or the Chief Operations Officer. We also sent our survey to the Chief Technology Officer or a senior executive responsible for strategy or business development (e.g. VP Strategy or Business Development) if contact details were available.

The responses were overweighted toward manufacturing and European companies but were representative in terms of size and age.

While I recommend the full report to everyone (it’s free!), a few findings caught my eye.

First (p.6) was the degree of OI adoption, which averaged 78% across the entire sample. Within manufacturing, high-tech firms were the highest (90.91%) and low-tech manufacturing was the lowest (40%) among all the industry categories, with two other categories (“medium-high-tech” and “medium-low-tech”) in between. Other than low-tech manufacturing, the only below average industries were financial services (including real estate and insurance) and transportation/utilities.

The next was how they measured OI practices, using Dahlander & Gann (2010)’s 2x2 of inbound vs. outbound and pecuniary vs. non-pecuniary (p.10). Not surprisingly, inbound dominates outbound by better than 4:1. Key OI partners (p.15) include customers, universities, suppliers, indirect customers (e.g. consumers, if the firm lacks a direct consumer relationship) and public research organizations. Unlike the open source firms famously studied by Joachim Henkel, page 17 includes the dog-bites-man conclusion that

Firms are more likely to receive freely revealed information from outside participants than they are to provide it to others. In other words, large firms are net “takers” of such freely revealed information.

Finally, the strategic objectives for using external partners (p.18) emphasized new technologies, partners and opportunities over reducing R&D costs. I have been quite pessimistic about the impact of inbound OI on internal R&D — based on a few anecdotal announcements by US and Japanese companies — but apparently this is not widespread (at least among this sample).

Other aspects of the study examine the internal organization of open innovation within firms (Chapter 5) and how firms measure the impact of open innovation (Chapter 6). Both chapters suggest other opportunities for future research.

July 1, 2013

UCLA is trying a new approach to licensing its IP, using an external 501(c)(3). The plan is stirring up considerable controversy.

At its May 15 meeting, the UC Regents voted to create an external nonprofit corporation, tentatively called “Newco.” Here are some key excerpts from the board agenda:

The primary mission of Newco and its Board will be to: (i) improve UCLA’s rate of invention disclosures per year; (ii) increase UCLA’s volume of patent applications per year; (iii) increase the overall flow of licensing royalties back to UCLA; and (iv) better position UCLA to win large, multi-year ISR contracts. This improved performance will not only benefit UCLA, but the surrounding community and the public at large.

In comparison to its peers at other universities, UCLA OIP-ISR has historically underperformed in the areas of technology transfer and ISR. UCLA believes this underperformance has not been caused by a lack of productivity among UCLA faculty or a deficiency in UCLA scholarship, but rather due to deficiencies in the current process of managing IP and ISR at UCLA.

Last week, the plan inspired a breathless exposé in a publication I’d never heard of, the East Bay Express. The opening excerpts:

June 26, 2013

Public Research for Private Gain
UC Regents recently approved a new corporate entity that will likely give a group of well-connected businesspeople control over how academic research is used.
By Darwin BondGraham

In a unanimous vote last month, the Regents of the University of California created a corporate entity that, if spread to all UC campuses as some regents envision, promises to further privatize scientific research produced by taxpayer-funded laboratories.

There are so many problems with the article, it’s hard to know where to start. The new plan would change what patents get filed and licensed for the UCLA medical center. It won’t be privatizing the research: the same research will be done, and it would still be licensed to firms.

The UC has been licensing IP to businesses for decades. Patents get licensed because it generates money, because it gets the technology into society’s hands, and because (after Bayh-Dole) it’s the law. Without Herb Boyer of UCSF (and Stan Cohen of Stanford) and their famous patent, there would be no recombinant DNA, no gene splicing, no Genentech.

The new plan for UCLA will decide differently which patents to prosecute (file) and license. An unpaid external board — the volunteer directors of Newco — will decide which ones to pursue and which ones not to. In the whole pipeline of developing and commercializing university research, the plan only changes the final few steps. (As far as I can tell, it doesn’t change the pricing of licenses nor the allocation of royalties to inventors and places within the university).

The exposé suggests that the plan parallels the University of Washington’s “Center for Commercialization,” but I can’t get enough info on either one to tell for sure. Certainly other schools have tried drastic measures in hopes of making their tech transfer more entrepreneurial.

I’m not sure what’s wrong with UCLA. According to the UC tech transfer FY2011-2012 annual report, Berkeley (no surprise) leads the pack, but by various measures UCLA, UCSD and UCSF all seem to be comparable. In some years, UCR does much better than some of the other UC campuses — perhaps owing to its century-long history as an agricultural research station.

However, in looking into the proposal, the proponents point to a March 2011 study, “An Ecosystem for Entrepreneurs at UCLA,” written by the oft-cited Bill Ouchi, who in recent years has focused his efforts on changing troubled organizations. Based on a sample of 17 universities (benchmarking UCLA against 16 peers), UCLA was neither the best nor the worst. The study holds up Columbia as a role model, because it generates $154 million in licensing income off research expenditures of $604 million. In terms of license revenue per research dollar, Columbia is 9x as effective as UCLA.
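For what it’s worth, the implied ratios work out as follows (my own back-of-envelope arithmetic; UCLA’s research expenditures aren’t given, so its ratio is inferred from the 9x claim):

```python
# Columbia figures as cited from the Ouchi study
columbia_license_income = 154e6  # USD of annual licensing income
columbia_research_spend = 604e6  # USD of annual research expenditures

# Columbia earns about 25.5 cents of license income per research dollar
columbia_ratio = columbia_license_income / columbia_research_spend
print(round(columbia_ratio, 3))  # 0.255

# If Columbia is 9x as effective as UCLA on this measure,
# UCLA's implied ratio is roughly 2.8 cents per research dollar
ucla_ratio = columbia_ratio / 9
print(round(ucla_ratio, 3))  # 0.028
```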

Some of the plan seems unrealistic: it calls for “setting the national standard in technology transfer”: not everyone can be above average, not even at Lake Wobegon High. Private schools are inherently less bureaucratic and (best case) more entrepreneurial. Regressing to the mean of UC ratios is probably more realistic than matching Columbia’s results (assuming Columbia has topped the customary pillars of tech licensing — MIT and Stanford).

Still, as an experiment it seems like a worthwhile one. The plan calls for a performance evaluation after two years and every five years thereafter. It would be worth trying this experiment elsewhere (e.g. with Berkeley or UCSD’s engineering school) but not making it UC-wide for all colleges until it has passed at least two outside evaluations.

Research Fraud Allegations Trail a German B-School Wunderkind
By Carol Matlack

Ulrich Lichtenthaler was a research wunderkind. The German management professor, an expert on technology licensing and innovation, published more than 50 journal articles, was a visiting scholar at Northwestern University’s Kellogg School of Management, and won a business school department chairmanship—all before he turned 34 last August. The newspaper Handelsblatt in 2009 named him the top young business researcher in the German-speaking world.

Now, Lichtenthaler’s reputation is in tatters. In recent months, academic journals say they have retracted 13 of his articles and are scrutinizing others, after finding that he mischaracterized data and engaged in “self-plagiarism,” offering slightly different versions of the same material to multiple publications while claiming each article was original.

In an e-mailed response to questions from Bloomberg Businessweek, Lichtenthaler said that some of his work contained “unintended statistical errors. I deeply regret these errors and would like to emphasize that no attempt was made to deliberately influence the results.”

He declined to comment further, pending an investigation by the University of Mannheim Business School, where he is chairman of management and organization.

Other than the e-mail interview, the substance of the story has little new for readers of the Open Innovation or Retraction Watch blogs (and in fact, the story quotes original reporting in both).

The author attempts to find a systemic explanation for the high number of German retractions, emphasizing the lack of university control rather than (as others have) the strong incentives for publication by German management scholars, such as the Handelsblatt rankings.

One new tidbit is a more detailed explanation from the editor responsible for Dr. Lichtenthaler’s first retraction:

“In some cases, measures from the survey were relabeled from earlier papers so one had to look carefully to see whether a given finding had already been published,” says Russell Coff, a University of Wisconsin management professor who edits the journal Strategic Organization, which retracted a 2009 Lichtenthaler article on technology licensing.

The most serious problem, Coff says, was that Lichtenthaler had labeled some variables as statistically significant when a quick glance at the data showed they were not. “It did not seem that the mislabeling was necessarily accidental,” Coff says, “though we cannot be certain.” Lichtenthaler “approached us asking to retract the article and wanted to be clear that he was being proactive,” Coff adds.

The term “proactive” seems to be frequently used to describe Lichtenthaler’s efforts over the past year to voluntarily retract some of his published research. I’ve had other researchers (who haven’t had their work retracted) ask me why a “proactive” approach to retracting questionable articles is better than a non-proactive (“reactive”?) approach, or even waiting for the journal to render its own decision.

I can’t answer their question. Perhaps the idea of “proactive” has an exculpatory meaning in German that it lacks in American English.

May 31, 2013

At most colleges, this is the time of year for graduations. It’s also when senior scholars who worked hard for their first doctorate are recognized for their work with an honorary doctorate. Both Henry Chesbrough (father of open innovation) and Eric von Hippel (father of user innovation) were recognized recently with an additional doctoral degree.

Hasselt University: Henry Chesbrough

For weeks I’ve been getting invitations from my co-author Wim Vanhaverbeke to come see Henry Chesbrough receive his honorary doctorate at Hasselt University in Belgium, one of three schools where Wim has an appointment. (Wim, Henry and I co-edited a 2006 book on Open Innovation, and are now working on a sequel).

On Tuesday (May 28), Henry was one of seven public figures so honored this year, on the school’s 40th anniversary. As the university website explained:

Prof. Dr. Henry Chesbrough (Haas School of Management, University of California, USA) is the spiritual father of the concept of 'open innovation', that is gaining acceptance among companies and research institutions in the four corners of the world. His honorary degree is a recommendation of the Faculty of Business Economics.

I would have liked to have attended, but between the cost and travel time it wasn’t practical to visit unless I was already nearby for another purpose (which I wasn’t).

However, for the benefit of the rest of the world, Wim arranged for an online webcast of the post-graduation Q&A with Henry. Questions were solicited via email, and the discussion was broadcast live (at 3:45 a.m. PDT) via Google+ (whatever that is). A video recording of the 65-minute Q&A is now available on YouTube.

This could be Henry’s first honorary doctorate, but since his official online CV hasn’t been updated since 2008, it’s hard to tell.

Hamburg University of Technology: Eric von Hippel

On April 10, TUHH awarded an honorary doctorate to Eric von Hippel. As the school’s (English-language) web page put it:

The Hamburg University of Technology (TUHH) today awarded an honorary doctorate of economics and social science (Dr. rer. pol. h.c.) to Professor Eric von Hippel, PhD, of the Massachusetts Institute of Technology (MIT), Boston, U.S. The TUHH awarded the degree in recognition of “his trail-blazing contributions to research on user innovation and his untiring endeavors to assist young scientists on their academic career path,” as the citation put it.

Prof. Hippel, 71, is one of the world’s most renowned academic experts in management and innovation research.

According to his online CV (updated December 2012), this is Eric’s third honorary doctorate, after Ludwig-Maximillians Universität München (2004) and Copenhagen Business School (2007). TUHH says it’s only their sixth honorary doctorate in 35 years.

Apparently this was the big event of the year of the German-speaking innovation community. Several of my friends asked why I wasn’t there, and I’m sure I know at least a dozen people who were listed in attendance.

The website notes that Eric has served as doctoral supervisor for TUHH PhD students, and that others have visited MIT to do research with him. Speeches on behalf of the honoree were made by Dietmar Harhoff of U. Munich and by Cornelius Herstatt and Christian Lüthje of TUHH.

Prof. Eric von Hippel comes from a family that has produced many well-known scientists. His father, Prof. Arthur R. von Hippel, held the chair of materials science at MIT. His mother Dagmar was the daughter of Nobel laureate James Franck. As she was of Jewish extraction, the family had to leave Germany in the 1930s. Before it found a new home in Boston, Arthur R. von Hippel worked for several years as a research scientist at the Niels Bohr Institute in Copenhagen. Eric von Hippel has four siblings. His younger brother holds the chair of Public and International Affairs at Princeton’s Woodrow Wilson School; his elder brother was Professor of Molecular Biology at the University of Oregon; Arndt von Hippel was a heart surgeon at Anchorage University Hospital, Alaska. His sister is a writer.