Matt Ridley recently published a piece in the WSJ, arguing that basic scientific research does not lead to more innovation. There are things to like and dislike in the piece. Jack Stilgoe and Anton Howes have weighed in on this, and you can read summaries of their articles here.

The first thing to say is that the linear model of innovation being discussed here is rejected by the consensus of scientists (!) studying how science and technology actually work, whatever their policy recommendations. (See here for a possible explanation of its persistence.) Ridley, Stilgoe, Howes, Mazzucato, and I all reject it; it is just that we each draw different implications. Ridley asserts that it is technology that drives science, Stilgoe says that reality is more complex than the linear model, and Mazzucato says that because reality is complex, there are more areas for government investment.

I basically agree with the vision of technological advance as a process carried out by many agents, with feedback loops everywhere. So Science<->Technology, and not just one way or the other.

The example of thermodynamics is so good that I can't help pasting some quotes from chapter 1 of the Handbook of the Economics of Innovation:

Rudolf Diesel was a trained engineer, a good specimen of the “new inventor,” trained in science, a “rational” engineer, in search of efficiency above all else. Rather than tinkering and tweaking, Diesel started from first thermodynamic principles. He began searching for an engine that incorporated the theoretical Carnot cycle, in which maximum efficiency is obtained by isothermal expansion so that no energy is wasted, and a cheap, crude fuel can be used to boot (originally Diesel used coal dust in his engines). [...] Parsons, even more than Diesel, personified the second Industrial Revolution. The first turbine was built simply because “thermodynamics told him that it could be done” (J. Mokyr)
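For context (this is standard textbook thermodynamics, not part of the Handbook quote): the maximum efficiency Diesel was chasing is the Carnot limit, which depends only on the temperatures between which the engine operates,

```latex
\eta_{\max} = \eta_{\text{Carnot}} = 1 - \frac{T_C}{T_H}
```

where \(T_H\) is the absolute temperature at which heat enters the cycle and \(T_C\) the temperature at which it is rejected. Raising \(T_H\) is the only way to raise this theoretical ceiling, which is why Diesel aimed for very high compression, and hence combustion, temperatures. No amount of tinkering reveals this ceiling; it falls out of the theory.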

From Anderson's book Introduction to Flight, we get a similar development for the aircraft industry:

Mathematics up to the present day have been quite useless to us in regard to flying. (14th Annual Report of the Aeronautical Society of Great Britain, 1879)

Mathematical theories from the happy hunting grounds of pure mathematicians are found suitable to describe the airflow produced by aircraft with such excellent accuracy that they can be applied directly to airplane design. (Theodore von Karman, 1954)

So Ridley is clearly wrong that science and innovation just flow from technology. But why is he wrong? Why do these things happen?

It happens because some insights are very, very hard to get just by tinkering with technological artefacts. You can probably design a steam engine or a ship just by tinkering around: think of the main components, build them, put them together, and iterate. That is how tinkering produces advances. Tinkering is easier than systematising and deducing everything from first principles, and that's why science originally lagged behind technology.

But I bet you can't design an efficient steam engine just by trying to improve existing engines without understanding how and why they work. For that, you have to think about specific materials, thicknesses, thermodynamic cycles, regulators, fuel, and design. An efficient steam engine means the best possible steam engine, not just one that works. In the steam-engine concept space there are lots of possible designs, but only a couple you could call the most efficient. It's hard to get there just by guessing; you usually get there by sitting down, laying out a formal model, and then building and iterating. And if you aren't even aware of the relevant factors (what if you don't know that a meta-material is the way to go, because you don't know what a meta-material is, because no one has done research on that?), you won't get there at all. Basic science, in many cases, is exploring nature to see what we are missing, filling in the fine details in our books.

Right now, science is quite advanced, and it is rarely surprised by advances in technology. As Dosi and Nelson point out in the Handbook:

Since the Industrial Revolution, the relative contribution of sciences to technology has been increasing, and in turn such a science base has been largely the product of publicly funded research, while the knowledge produced by that research has been largely open and available for potential innovation to use (more in David, 2001a,b, 2004; Nelson, 2004; Pavitt, 2001). This, however, is not sufficient to corroborate any simple “linear model” from pure to applied science, to technological applications. First, the point made elsewhere by Rosenberg (1982), Kline and Rosenberg (1986), Pavitt (1999), and Nelson (1981) continues to apply: scientific principles help a lot but are rarely enough. An enlightening case to the point, indeed in a “science-based” area—medical innovation—is discussed in Rosenberg (2009). Semiconductors technology is another good example. For many decades, efforts to advance products and process technology—crucially involving the ability to progressively make circuits smaller and smaller— have taken advantage of the understandings in material science and the underlying solid-state physics. However, much more pragmatic and tacit elements of technological knowhow have persistently been crucial.

Second, it is quite common that scientific advances have been made possible by technological ones, especially in the fields of instruments: think of the example of the electronic microscope with respect to the scientific advances in life sciences (more in Rosenberg, 1982, 1994).

So, with Howes: yes, science is not the only source of progress, but its importance, pace Ridley, is not a myth.

Then, I have to agree with Ridley that had the great inventors and scientists not been born, technology would have progressed anyway. I made this point here and here. This is not to say that we would have had the iPhone without Steve Jobs. We wouldn't. But we would have gotten something similar. Think of the difference between a technological paradigm (the smartphone) and a particular implementation (the iPhone).

But this does not suffice to argue for the unstoppability of technology, or that it has some life of its own. The way technology develops is very often shaped by societal norms and interests, and by historical events. Take rocketry. There was interest in rocketry before World War II, but the Nazis saw potential new weapons in it and so boosted rocket development. From there it passed to the US, the Soviet Union, and the Space Race. Maybe without World War II we would have different kinds of rockets, geared more towards launching civil satellites and saving money than towards launching super-heavy payloads into space. Or take human cloning: we have the technology to produce human clones, yet the 'technium' has not yet forced us to produce adult humans via cloning.

A better story is simply that we humans like some things, and generally prefer them better and cheaper. So that's what the market delivers. In turn, technology use changes the way we see the world, that's true. Maybe the selfie stick was absurd 10 years ago, and it makes sense for some people now that we have front-facing phone cameras. But would one go as far as saying that the selfie stick is an appendage of the 'technium' that uses us in some way? I have not read the book where that theory originates, but on the arguments presented, it doesn't seem plausible to me.

We then have the role of government. I must first make my mandatory self-publicity and tell you that I have written over a dozen articles on Mazzucato and her theories. You can consult them here, in the relevant section. A broad TL;DR would be that government wasn't that important, and that where it most evidently was important was in basic science, contradicting her thesis, as she argues against a State that just funds basic science.

And is government spending on R&D useful? That is a case-by-case, empirical question, and one that can be difficult to answer with precision. It can be the case that government spending crowds in private spending but then fails to increase productivity, so just measuring whether the private sector spends more would not be a good measure of productivity increases. It can also be the case that government activity redirects research and talent from some areas to others. Sure, we got to the Moon, but all that talent sitting in NASA was doing that, and not other things. Fortunately, with science, knowledge gained in one area can be reused in another, but the fact remains that one could build counterfactuals where we would be better off absent government spending there.

On the other hand, it could also be helpful. Even if it is true that companies have ways of partially fixing the public-goods problem, and that there are not-for-profit organisations dedicated to research, it's plausible that some policies could be useful. Imagine an institution that listens to the scientific needs of companies and performs research on them, be it basic or applied (something like Germany's Fraunhofer-Gesellschaft). So I'd say there will be cases where government spending increases welfare, and cases where it won't. Military research spending, for one, accounts for a sizeable part of public R&D spending, yet the consensus is that it is not useful for non-military applications. The Handbook points out that around 50% of public R&D spending is not justifiable by public-goods arguments. Here, you can either say (with Mazzucato) that more than that is needed, or that perhaps fixing the market failure is enough. I would like to write a summary of this field at some point in the future; it is huge. What I will say for sure is that the answer won't be 'yes, always' (or 'no, never').

To finish up, I'd like to comment on the role of vision and stories. I think Ridley is right to attack the prevalent idea that basic science is all-important, but not right to say it's useless. Stilgoe is also right to point out that scientists need another story to tell instead of the linear model. But here is where Mazzucato comes in, and so far she has not been publicly challenged except by some humble bloggers here and there. She has taken some views that were prevalent and given them something that seemed a coherent theory (like Keynes did). That's the story filling the gap left by the disintegration of the linear model. But I see it as another version of both the linear model (the linear governmental model) and the Great Man Theory of Innovation (without the great men who actually did what they did, innovation would not march forward).

I think a more realistic model of innovation should begin from microfoundations (or, to use another word, be reductionistic, not holistic). We should study the people who do science and innovation. What traits do they possess? Is everyone equally capable of innovating? Do they like science per se? How elastic is the supply of innovators, scientists, or entrepreneurs with respect to earnings? Then, moving upwards, study how they fit into organisations, and how organisations relate to each other. That would give us a better picture of what is going on, and assign a relative importance to all the factors involved, instead of dismissing some or absurdly magnifying others.

Ridley says:

Most technological breakthroughs come from technologists tinkering, not from researchers chasing hypotheses. Heretical as it may sound, “basic science” isn’t nearly as productive of new inventions as we tend to think.

Had Edison or other great scientists or inventors died, technological advance wouldn’t have stopped: for a given invention or discovery, at a given moment in history, there are usually a couple of people close to making it.

To the science writer Kevin Kelly, the “technium”—his name for the evolving organism that our collective machinery comprises—is already “a very complex organism that often follows its own urges.” It “wants what every living system wants: to perpetuate itself.”

Prohibition can’t stop technological progress, and steering it is also impossible. He then explains why patents are usually not good, and that copying is costly.

The linear model of innovation is wrong, so showering universities with funds for basic science is misguided policy.

This is because it is wrong that Science->Technology and right that Technology->Science, and companies are already investing in Technology and in the Science needed to back it up.

Arrow and Nelson are criticised for not looking at the real world when they presented the problem of science as a public good: companies do fund most R&D, and around half of basic science in particular, in many countries. Their model is the one that gets taught in Econ 101, and it is wrong. Reality is more complicated than that, but the meme is still there.

The US and UK made huge advances in science and technology without government funding during the Industrial Revolution. It was after WWII when massive government investment began.

According to the OECD (2003), private R&D increases economic growth while public R&D doesn’t, a result that was echoed by Leo Sveikauskas from the US BLS. Why? He points to crowding out: government spending redirects, rather than increases, scientific progress.

Justifying public science funding with a few examples like the Internet or the Higgs boson is misleading: of course if you pour money into Science, you’ll get something. But you would have got the same anyway.

Governments cannot dictate either discovery or invention; they can only make sure that they don’t hinder it. Innovation emerges unbidden from the way that human beings freely interact if allowed. Deep scientific insights are the fruits that fall from the tree of technological change.

Stilgoe says:

Ridley is half-right: the linear model is indeed wrong.

But it is not as simple as Ridley puts it: Science is not just a causal endpoint, but also a force of its own.

Ridley argues that regulating technology is impossible and undesirable, but provides no real argument for that.

Kealey, whom Ridley cites, is wrong: just see Paul David’s review of Kealey’s book (read it here), or the research done on this, which finds that government investment crowds in private investment in science.

For Ridley, the maxim that the advance of science can be killed or mutilated, but not shaped, is a convenient mantra (as it is for Silicon Valley entrepreneurs who dislike government); but in reality, government has been a vital source of ideas, expertise, and people, as well as inventions, as Mariana Mazzucato points out.

The linear model is wrong, and scientists need to find a better story, or fall prey to Ridleyian fallacies.

Howes says:

Ridley is right in questioning the supremacy of scientific research in driving technological progress.

To innovate, one need only notice how the world works, not figure out why.

Ridley cites thermodynamics. It is true that the steam engine originally didn’t benefit from thermodynamics (rather, thermodynamics arose to explain steam engines), but it did benefit from the understanding of vacuums, and that came from scientists. After that, thermodynamics was helpful for designing more efficient engines.

Examples where science may be more important: nuclear power, bio-engineering, pharmaceuticals and materials science.