Musings on research, international development and other stuff


Evidence-informed policy – it’s a wonderful thing. But just how widespread is it? The ‘Show your workings’ report from the Institute for Government (and collaborators Sense About Science and the Alliance for Useful Evidence) has asked this question and concluded… not very. It states “there [are] few obvious political penalties for failing to base decision[s] on the best available evidence”. I have to say that as a civil servant this rings true. It’s not that people don’t use evidence – actually most civil servants, at least where I work, do. But there are no good systems in place to distinguish between people who have systematically looked at the full body of evidence and appraised its strengths and weaknesses – and those who have referenced a few cherry-picked studies to back up their argument.

Rosie is my actual cat’s name. And she does indeed make many poor life decisions. Incidentally, I named my other cat ‘Mouse’ and now that I am trying to teach my child to identify animals I am wondering just how wise a life decision that was…

The problem for those scrutinising decision making – parliament, audit bodies and, in the case of development, the Independent Commission for Aid Impact – is that if you are not a topic expert it can be quite hard to judge whether the picture of evidence presented in a policy document does represent an impartial assessment of the state of knowledge. The IfG authors realised this was a problem quite early in their quest – and came up with a rather nifty solution. Instead of trying to decide if decisions are based on an unbiased assessment of evidence, they simply looked at how transparent decision makers had been about how they had appraised evidence.

Now, on the evidence supply side there has been some great work to drive up transparency. In the medical field, Ben Goldacre is going all guns blazing after pharmaceutical companies to get them to clean up their act. In international development, registers of evaluations are appearing and healthy debates are emerging on the nature of pre-analysis plans. This is vitally important – if evaluators don’t declare what they are investigating and how, it is far too easy for them to not bother publishing findings which are inconvenient – or to try multiple types of analysis until, by chance, one gives them a more agreeable answer.

But as the report shows, and as others have argued elsewhere, there has been relatively little focus on transparency on the ‘demand’ side. And by overlooking this, I think that we might have been missing a trick. You see, it turns out that the extent to which a policy document explicitly sets out how evidence has been gathered and appraised is a rather good proxy for systematic evidence appraisal. And the IfG’s hypothesis is that if you could hold decision makers to account for their evidence transparency, you could go some way towards improving the systematic use of evidence to inform decisions.

The report sets out a framework which can be used to assess evidence transparency. As usual, I have a couple of tweaks I would love to see. I think it would be great if the framework included more explicitly an assessment of the search strategy used to gather the initial body of evidence – and perhaps rewarded people for making use of existing rigorous synthesis products such as systematic reviews. But in general, I think it is a great tool and I really hope the IfG et al. are successful in persuading government departments – and crucially those who scrutinise them – to make use of it.

I’ve been a little bit delighted to see the publicity that the Jaded Aid card game has been generating (see for example this Foreign Policy write-up). Nothing is more loltastic for development workers than some wryly-observed development humour. As Wayan Vota observes in that FP article, it is affectionate humour; most of us care deeply about the work we are doing. But if we can’t laugh at some of the absurdities of our industry we might go mad (or explode with pomposity).

In this spirit, I’ve been thinking about what other business projects I could crowd-source funding for from the jaded aid generation. I think I have come up with some crackers. Here they are – in ascending order of cynicism 😉

Somehow I suspect that my business ideas wouldn’t get me that far on Dragons’ Den…

1. Many people have questioned aid workers’ abilities to actually end world poverty – but surely no-one could deny their deep, contextual knowledge of long-haul flights and seedy business hotels. I mean, I don’t know anyone else as good as me at securing the best seat in economy class or blagging my way into business class lounges. So my first idea is to combine this latent travel knowledge with another skill which development workers have – creating online knowledge repositories (https://kirstyevidence.wordpress.com/2012/10/08/why-your-knowledge-sharing-portal-will-probably-not-save-the-world/). My one-stop-shop would enable seasoned development workers to mentor and share knowledge with long-haul tourists looking for exotic adventures. Development workers will get the satisfaction of being truly useful. And the boost to the tourist industry may benefit poor countries more than many misguided development projects: win win.
2. Wherever you go in the world, you get ethnic spas based on sanitised versions of indigenous health beliefs. So you get Thai spas with incense, Thai muzak and traditional Thai massages; Indian spas with ladies in saris, treatments inspired by Ayurveda and incense. And, to my surprise, I recently came across a ‘traditional African spa’ with treatments inspired by African traditional medicine carried out to the sound of the Soweto gospel choir. And incense.
People love these spas because it is well known that while people in developing countries may lack wealth, they are rich in indigenous wisdom, charmingly exotic practices… and incense.
So, my proposal is that we give something back to all these developing countries from which we have appropriated our luxury spa treatments. And what better gift than the marvellous indigenous health system of Germany: homeopathy*. I suggest that we set up our German spas across the developing world. Homeopathic massages will have been watered down to the extent that no actual massage is left. Instead, customers will sit in a room with a mulleted German masseuse listening to the relaxing sound of David Hasselhoff – and sniffing bratwurst-scented incense. Health impacts will be mediated by the placebo effect – and the huge pitcher of German beer you will be given before leaving.
3. I have long felt a dilemma about gap-year voluntourism projects. On the one hand, I feel that sending under-qualified people to carry out projects in poor countries can be patronising, unhelpful and potentially undermining to local economies. On the other hand, I do think that it is useful for young, impressionable people to have the chance to connect with people from other cultures and (hopefully) to realise that people the world over are just people. I wondered if there is a way to enable this cross-cultural exchange without the patronising well-digging projects. Which is how I came up with the Mzungu** houseboy project. The idea is to link up earnest European gap year students with nouveau riche African families. To be precise we would need to find a particular subset of the newly wealthy who want to show off to their friends and family. The Europeans would get an authentic experience of poverty as a houseboy/girl – secure in the knowledge that they are not falling into the white saviour cliché. And the ostentatious families of Lagos, Nairobi or Kampala get the ultimate status symbol: a European houseboy! What is not to like?!

* A system of alternative medicine/quackery – invented in Germany – where active substances are diluted down to infinitesimally low concentrations.
**The word for a European/Caucasian (or sometimes foreign-resident Africans) in many Bantu languages

I’m not sure if I have mentioned it, but I am kinda into gender equity.

Or, more precisely, I am a card-carrying, misogyny-hating, bra-burning*, don’t-you-dare-tell-me-I-can’t-do-that-just-cus-I-have-a-uterus kind of feminist.

Most of my friends are similarly inclined** and thus, recently, I got into a discussion about how feminism relates to international development. We talked about two facets – how feminist is the international development movement in its actions and how feminist is the international development industry as an employer.

The answer to the first question seems obvious: development is obsessed with gender issues and supporting women and girls. Surely it is more feminist than Caitlin Moran reading the Female Eunuch while chanting suffragette slogans? Well, sometimes. It is true that many of those working on projects for women and girls, regardless of gender, are feminist in thought and action. But my friends and I also noted that projects targeting women are particularly susceptible to the ‘white saviour’ myth; there are some who love the idea of parachuting in to save poor vulnerable women from primitive conditions. This type of rhetoric frequently comes from men, but not always. In fact, it is particularly prevalent in that most un-feminist of publications – the women’s glossy magazine. Inserted between articles about why you should feel inadequate about your body or spend ridiculous amounts of money on your appearance, there is often an article about someone who has gone out to Africa to save the poor vulnerable and helpless women there.

This patronising approach is popular with celebrities aiming to show their caring side – but to some extent it can seep into serious development agencies. One of my friends, a gender specialist, described a recent development conference she had attended. She noted that during the tea breaks, there was a good mix of genders but when the breakout sessions started the (mainly male) economists and political scientists went off to discuss the meaty governance issues while the (mainly female) social development and gender specialists were ushered into rooms to discuss more ‘fluffy’ issues. It drove my friend mad. She didn’t think gender should be reduced to ’boutique’ projects about disadvantaged women but that rather there was a need to think about power relationships much more generally. She said she felt like shouting “I want to talk about gender-sensitive tax regimes not about periods (well, not always but I reserve the right to also be allowed to talk about that too but at my choosing!)!”

So what about the development industry as an employer – to what extent does it support, promote and empower women? In my career, I have encountered the odd sexist person, but have generally found that the people and organisations I work with promote gender equity more than seems to be the case in many other industries. Actually, in some cases I have assumed that someone has a problem with women but have later discovered they are just downright rude – in a gender non-specific manner. My other development friends reported a range of experiences on this – some, like me, had not found sexism too much of a problem but others had encountered it frequently.

But whether or not people are overtly sexist, there are structural issues in the industry which may disproportionately impede women. I will give a couple of examples.

The point at which the conversation turns to competitive development stories is definitely the time to LEAVE the bar…

Development people are preoccupied with the level of experience that people have ‘in the field’. This is completely justified – I have witnessed the problems you can get when people who have no clue about life in developing countries are managing development programmes, particularly if they don’t have insight into their own lack of knowledge. However, this principle is sometimes used as an excuse for a slightly macho bragging culture and, at times, downright rudeness towards those who are perceived as being less experienced. Unfortunately, people who are more introverted, younger and/or female seem to be disproportionately targeted in this way. And thus I have seen some very unedifying meetings in which a collection of people who have happened to be mainly male have acted in a very discourteous and disrespectful way towards a collection of people who have happened to be mainly female. I suspect these people are not sexist per se – but the combination of their assumptions and their bad manners can still result in gender discrimination. In fact, I suspect that such attitudes probably have a disproportionate effect on other groups as well – including those who come from poorer backgrounds who perhaps don’t have the same level of self-assurance that a lifetime of relative privilege brings with it.

Another issue is that certain groups may genuinely be less able to gather ‘field experience’ – but may have much to offer. Parents with caring responsibilities may not be able to travel overseas at short notice – and, although this is slowly changing, this currently disproportionately affects mothers. Once again, women are not the only group disadvantaged in this way; individuals with physical disabilities may not be able to travel to all locations while people with mental illnesses may struggle with the emotional impact of overseas travel.

None of these issues are insurmountable – but it is important to at least recognise that the industry is set up to favour cis-gender, straight, able-bodied, white, middle/upper class men. By starting with this knowledge, it is possible to consider what actions we can take to improve opportunities for a variety of groups – including women.


*Of course I don’t really burn bras since EU regulations have made them all flame-retardant. They spoil all our fun.

** To be honest, I kind of expect you are too. My view on feminism is summed up by comedian Aziz Ansari when he says:

‘If you believe that men and women have equal rights, and then someone asks you if you’re a feminist, you have to say yes. Because that’s how words work. You can’t be like, “Yeah, I’m a doctor who primarily does diseases of the skin.” “Oh, so you’re a dermatologist?” “Oh that’s way too aggressive of a word, not at all, not at all.”’

I have recently been pondering what the age of austerity means for the development community. One consequence which seems inevitable is increasing scrutiny of how development funds are spent. The principle behind this is hard to argue with; money is limited and it seems both sensible and ethical to make sure that we do as much good as possible with what we have. However, the way in which costs and benefits are assessed could have a big impact on the future development landscape. Already, some organisations are taking the value for money principle to its logical conclusion and trying to assess and rank causes in terms of their ‘bang for your buck’. The Open Philanthropy project has been comparing interventions as diverse as cash transfers, lobbying for criminal justice reform and pandemic prevention, and trying to assess which offers the best investment for philanthropists (fascinating article on this here).

The Copenhagen Consensus project* is trying to do a similar thing for the sustainable development goals; using a mixture of cost-benefit analysis and expert opinion, they are attempting to quantify how much social, economic and environmental return development agencies can get by focussing on different goals. For example, they find that investing a dollar in universal access to contraception will result in an average of $120 of benefit. By contrast, they estimate that investing a dollar in vaccinating against cervical cancer will produce only $3 average return. Looking over the list of interventions and the corresponding estimated returns on investment is fascinating and slightly shocking. A number of high profile development priorities appear to give very low returns while some of the biggest returns correspond to interventions such as trade liberalisation and increased migration which are typically seen as outside the remit of development agencies (good discussion on the ‘beyond-aid agenda’ to be found from Owen Barder et al. at CGD e.g. here).

In general, I find the approach of these organisations both brave and important. Of course there needs to be a lot of discussion and scrutiny of the methods before these figures are used to inform policy – for example, I had a brief look at the CC analysis of higher education and found a number of things to quibble with, and I am sure that others would find the same if they examined the analysis of their area of expertise. But the fact that the analysis is difficult does not mean one should not attempt it. I don’t think it is good enough that we continue to invest in interventions just because they are the pet causes of development workers. We owe it both to the tax payers who fund development work and to those living in poverty to do our best to ensure funds are used wisely.

Having said all that, my one note of caution is that there is a danger that these utilitarian approaches inadvertently skew priorities towards what is measurable at the expense of what is most important. Impacts which are most easily measured are often those achieved by solving immediate problems (excellent and nuanced discussion of this from Chris Blattman here). To subvert a well-known saying, it is relatively easy to measure the impact of giving a man a fish, more difficult to measure the impact of teaching a man to fish** and almost impossible to measure, let alone predict in advance, the impact of supporting the local ministry of agriculture to develop its internal capacity to devise and implement policies to support long-term sustainable fishing practices. Analysts in both the Copenhagen Consensus and the Open Philanthropy projects have clearly thought long and hard about this tension and seem to be making good strides towards grappling with it. However, I do worry that the trend within understaffed and highly scrutinised development agencies may be less nuanced.

So what is the solution? Well, firstly development agencies need to balance easy-to-measure but low-impact interventions with tricky-to-measure but potentially high-impact ones. BUT this does not mean that we should give carte blanche to those working on tricky systemic problems to use whatever shoddy approaches they fancy; too many poor development programmes have hidden behind the excuse that it is too complicated to assess them. Just because measuring and attributing impact is difficult does not mean that we can’t do anything to systematically assess intermediate outcomes and use these to tailor interventions.

To take the example of organisational capacity building – which surely makes up a large chunk of these ‘tricky’ to measure programmes – we need to get serious about understanding what aspects of design and implementation lead to success. We need to investigate the effects of different incentives used in such projects including the thorny issue of per diems/salary supplements (seriously, why is nobody doing good research on this issue??). We need to find out what types of pedagogical approach actually work when it comes to supporting learning and then get rid of all the rubbish training that blights the sector. And we need to think seriously about the extent of local institutional buy-in required for programmes to have a chance of success – and stop naively diving into projects in the hope that the local support will come along later.

In summary, ever-increasing scrutiny of how development funds are spent is probably inevitable. However, if, rather than fearing it, we engage constructively with the discussions, we can ensure that important but tricky objectives continue to be pursued – but also that our approach to achieving them gets better.

* Edit: thanks to tribalstrategies for pointing out that Bjorn Lomborg who runs the Copenhagen Consensus has some controversial views on climate science. This underscores the need for findings from such organisations to be independently and rigorously peer reviewed.

**High five to anyone who now has an Arrested Development song on loop in their head.

One of the things I love about working in DFID is that people take the issue of beneficiary* feedback very seriously. Of course we don’t get it right all the time. But I like to think that the kind of externally designed, top-down, patronising solutions that are such a feature of the worst kind of development interventions (one word: BandAid**) are much less likely to be supported by the likes of DFID these days.

In fact, beneficiary feedback is so central to how we do our work that criticising it in any way can be seen as controversial; some may see it as tantamount to saying you hate poor people! So just to be clear, I think we can all agree that getting feedback from the people you are trying to help is a good thing. But we do need to be careful not to oversell what it can tell us. Here are a couple of notes of caution:

1. Beneficiary feedback may not be sufficient to identify a solution to a problem

It is of course vital to work with potential beneficiaries when designing an intervention to ensure that it actually meets their needs. However, it is worth remembering that what people tell you they need may not match what they will actually benefit from. Think about your own experience – are you always the best placed person to identify the solution to your problems? Of course not – because we don’t know what we don’t know. It is for that reason that you consult with others – friends, doctors, tax advisors etc. to help you navigate your trickiest problems.

I have come across this problem frequently in my work with policy making institutions (from the north and the south) that are trying to make better use of research evidence. Staff often come up with ‘solutions’ which I know from (bitter) experience will never work. For example, I often hear policy making organisations identify that what they need is a new interactive knowledge-sharing platform – and I have also watched on multiple occasions as such a platform has been set up and has completely flopped because nobody used it.

2. Beneficiary feedback on its own won’t tell you if an intervention has worked

Evaluation methodologies – and in particular experimental and quasi-experimental approaches – have been developed specifically because just asking someone if an intervention has worked is a particularly inaccurate way to judge its effectiveness! Human beings are prone to a whole host of biases – check out this wikipedia entry for more biases than you ever realised existed. Of course, beneficiary feedback can and should form part of an evaluation but you need to be careful about how it is gathered – asking a few people who happen to be available and willing to speak to you is probably not going to give you a particularly accurate overview of user experience. The issue of relying on poorly sampled beneficiary feedback was at the centre of some robust criticisms of the Independent Commission for Aid Impact’s recent review of anti-corruption interventions – see Charles Kenny’s excellent blog on the matter here.

If you are trying to incorporate beneficiary feedback into a rigorous evaluation, a few questions to ask are: Have you used a credible sampling framework to select those you get feedback from? If not, there is a very high chance that you have got a biased sample – like it or not, the type of person who will end up being easily accessible to you as a researcher will tend to be an ‘elite’ in some way. Have you compared responses in your test group with responses from a group which represents a counterfactual situation? If not, you are at high risk of just capturing social desirability bias (i.e. the desire of those interviewed to please the interviewer). If gathering feedback using a translator, are you confident that the translator is accurately translating both what you are asking and the answers you get back? There are plenty of examples of translators who, in a misguided effort to help researchers, put their own ‘spin’ on the questions and/or answers.

Even once you have used a rigorous methodology to collect your beneficiary feedback, it may not be enough to tell the whole story. Getting feedback from people will only ever tell you about their perception of success. In many cases, you will also need to measure some more objective outcome to find out if an intervention has really worked. For example, it is common for people to conclude their capacity building intervention has worked because people report an increase in confidence or skills. But people’s perception of their skills may have little correlation with more objective tests of skill level. Similarly, those implementing behaviour change interventions may want to check if there has been a change in perceptions – but they can only really be deemed successful if an actual change in objectively measured behaviour is observed.


I guess the conclusion to all this is that of course it is important to work with the people you are trying to help both to identify solutions and to evaluate their success. But we also need to make sure that we don’t fetishise beneficiary feedback and as a result ignore the other important tools we have for making evidence-informed decisions.


* I am aware that ‘beneficiary’ is a problematic term for some people. Actually I also don’t love it – it does conjure up a rather paternalistic view of development. However, given that it is so widely used, I am going to stick with it for this blog. Please forgive me.

** I refuse to provide linklove to Bandaid but instead suggest you check out this fabulous Ebola-awareness song featured on the equally fabulous Africaresponds website.

I have written before about the separate functions of evidence supply and demand. To recap, supply concerns the production and communication of research findings while demand concerns the uptake and usage of evidence. While this model can be a useful way to think about the process of evidence-informed policy making, it has been criticised for being too high level and not really explaining what evidence supply and demand looks like in the real world – and in particular in developing countries.

I was therefore really pleased to see this paper from the CLEAR centre at the University of the Witwatersrand which examines in some detail what supply and demand for evidence, in this case specifically evaluation evidence, looks like in five African countries.

What is particularly innovative about this study is that they compare the results of their assessments of evaluation supply and demand with a political economy analysis and come up with some thought-provoking ideas about how to promote the evidence agenda in different contexts. In particular, they divide their five case study countries into two broad categories and suggest some generalisable rules for how evidence fits into each.

Developmental patrimonial: the ‘benevolent dictator’

Two of the countries – Ethiopia and Rwanda – they categorise as broadly developmental patrimonial. In these countries, there is strong centralised leadership with little scope for external actors to influence. Perhaps surprisingly, in these countries there is relatively high endogenous demand for evidence; the central governments have a strong incentive to achieve developmental outcomes in order to maintain the government’s legitimacy and therefore, at least in some cases, look for evaluation evidence to inform what they do. These countries also have relatively strong technocratic ministries which may be more able to deal with evidence than those in some other countries. It is important to point out that these countries are not consistently and systematically using research evidence to inform decisions and that in general they are more comfortable with impact evaluation evidence which has clear pre-determined goals rather than evidence which questions values. But there does seem to be some existing demand and perhaps the potential for more in the future. When it comes to supply of evaluations, the picture is less positive: although there are examples of good supply, in general there is a lack of expertise in evaluations, and most evaluations are led by northern experts.

Neopatrimonial: a struggle for power and influence

The other three countries – Malawi, Zambia and Ghana – are categorised as broadly neopatrimonial. These countries are characterised by patronage-based decision making. There are multiple interest groups which are competing for influence and power largely via informal processes. Government ministries are weaker and stated policy may bear little relationship to what actually happens. Furthermore, line ministries are less influenced by Treasury and thus incentives for evidence from Treasury are less likely to have an effect. However, the existence of multiple influential groups does mean that there are more diverse potential entry points for evidence to feed into policy discussions. Despite these major differences in demand for evidence, evaluation supply in these countries was remarkably similar to that in developmental patrimonial countries – i.e. some examples of good supply but in general relatively low capacity and reliance on external experts.

I have attempted to summarise the differences between these two categories of countries – as well as the commonalities – in the table below.

There are a couple of key conclusions which I drew from this paper. Firstly, if we are interested in supporting the demand for evidence in a given country, it is vital to understand the political situation to identify entry points where there is potential to make some progress on use of evidence. The second point is that capacity to carry out evaluations remains very low despite a large number of evaluation capacity building initiatives. It will be important to understand whether existing initiatives are heading in the right direction and will produce stronger capacity to carry out evaluations in due course – or whether there is a need to rethink the approach.

As I mentioned in yesterday’s blog, DFID’s recent lit review on links between science and development started by figuring out how people think science leads to development outcomes. By far the most common justification for investment in research given by developing country policy makers was its expected contribution to economic growth. The Nigerian Science, Technology and Innovation Policy is typical of many in stating:

“Specifically, the new [Science, Technology and Innovation] Policy is designed to provide a strong platform for science, technology and innovation engagements with the private sector for the purpose of promoting sound economic transformation that is citizen centred”

This focus is not likely to surprise anyone who has attended conferences related to science and international development; huge faith is put into Science, Technology and Innovation as drivers of economic development. If evidence is required to back this up, the example of the Asian Tiger economies, which invested in research and subsequently saw unprecedented growth, is frequently cited. For example, this recent communique from the Governments of Ethiopia, Mozambique, Rwanda, Senegal, and Uganda states:

“Inspired by the recent success of the Asian Tigers, We, the African governments, resolve to adopt a strategy that uses strategic investments in science and technology to accelerate Africa’s development into a developed knowledge-based society within one generation.”

If pressed on how research will lead to growth, it is typical to hear statements based broadly on endogenous growth theory: research will lead to new knowledge which will contribute to private sector development which will lead to growth.

So, what does the evidence tell us?

Well for a start, contrary to popular belief, there is little evidence to suggest that public investment in research was a major factor in the economic development of the Asian Tigers. Theories about what did cause this ‘development miracle’ abound but they can be broadly split into two categories. There are those who believe that increased growth was simply due to increased financial investments in the economy – and clearly this camp does not think that public investment in research played much of a role. Then there are those who believe that ‘knowledge’ was a key factor in explaining the rapid growth. At first glance this theory seems consistent with those who advocate for public investment in research to stimulate growth – but when you delve deeper you see that even this latter camp does not suggest that publicly-funded research knowledge was a major driver of growth. In fact, detailed case studies suggest that knowledge which drove growth was accumulated mainly through learning from the processes and technologies of more developed economies and gradually developing in-house R&D capabilities.

Of course just because public investment in research did not lead to past economic transformations doesn’t mean that it can’t do so in the future. There are many examples of initiatives specifically aimed at stimulating economic growth through publicly-funded research. Perhaps the most well-known – and popular – intervention is the establishment of a ‘science park’. These are generally property developments located near universities which aim to support technology start-ups and companies which ‘spin off’ from the university. They aim to emulate successful technology hubs in the USA, in particular Silicon Valley. The idea is that research in the university will lead to products and technologies which will be commercialised by start-up companies located in the science park.

There has been an explosion of science parks in emerging and developing countries. However, the evidence of their success is less abundant. Beyond a few high profile science parks linked to world-leading universities, there is little evidence that science parks actually succeed in supporting the commercialisation of university-generated research results; studies of science parks from both high-income and middle-income countries demonstrate an almost complete lack of technology transfer from universities to firms. Firms do report some advantages to locating in science parks, including access to information and informal linkages with academic colleagues. However, there is little evidence that firms perform better in science parks than they would elsewhere.

In a 2004 article, Professor Stuart Macdonald of the University of Sheffield and Yunfeng Deng of Qingdao National Hi-Tech Industrial Zone describe science parks in developing countries as ‘pure cargo cult’ – aiming to superficially emulate Silicon Valley without creating any of the underlying factors which were necessary for its success. They conclude:

“. . .despite all the enthusiasm, there is little evidence that science parks work as their supporters say, and growing evidence that they do not.”

What could possibly go wrong?

Other interventions to drive economic development by supporting technology transfer are not much more promising. Technology transfer offices have been set up in many universities world-wide – however, the evidence shows that the vast majority of such offices work at a loss. In fact, patenting and licensing of academic research knowledge only generates significant income for the very top tier (i.e. the top few percent in global rankings) of universities internationally. A 2005 (paywalled) paper by University of Cape Town academic Dr Anthony Heher, which aims to draw conclusions for developing countries from university licensing data from wealthier economies, concludes:

“Without a well-funded, high quality research system, it is unlikely that a technology transfer programme will make any significant contribution to economic development. It is also not clear that any other country can simply emulate performance of the USA in deriving benefit from technology transfer due to differing social and economic conditions.”

Given this, it seems unlikely that technology transfer from universities will have a significant impact on economic development in most developing countries in the short to medium term. In fact, there is evidence that this is unrealistic even in developed countries. A recent article by Times science columnist Matt Ridley concluded that:

“The idea that innovation happens because you put science in one end of the pipe and technology comes out the other end goes back to Francis Bacon, and it is largely wrong.”

There is one silver lining to the evidence on public research and economic growth – there is good evidence that the capacity to use research is an important factor in driving economic growth. And that fact leads neatly on to tomorrow’s blog which will focus on human capital.