Some ‘Deep Thought’ on The Global Innovation Index

There’s been a fair bit of chatter on the release of the Global Innovation Index. It’s an impressive composite of many indicators of innovation and it throws out many interesting lists. According to the report, Switzerland, Sweden and Singapore are the most innovative countries and the Ivory Coast, Nigeria and China are the most efficient innovators (Braden Kelley at Blogging Innovation has been doing a good job of reporting on the scores).

Measuring innovation is very difficult and there is a classic tradeoff between scope and detail. The broader we want the results to be, the more we lose detail. My main issue with the Global Innovation Index is how much detail we need to lose to obtain a global ranking. In the blogosphere, I’m sure most people will focus on the lists so I’d like to spend a bit of time considering the data.

The general approach with the GII is that innovation has inputs and outputs and we need to get measurements for both sides of the ledger, which is a sound starting point. A summary graphic of the composite measures for inputs and outputs is in the report (see below).

Global Innovation Index Measures

As an innovation researcher, I immediately focussed my attention on the output side, and the first thing that becomes apparent is that the index is skewed towards scientific outputs as a proxy for innovation. GDP growth is also used as a proxy for knowledge impact, and outward foreign direct investment is used as a proxy for knowledge diffusion. Given that innovation is supposed to be the implementation of a significantly improved good, service or process, I’m left wondering how well this composite really tracks innovation. I won’t say too much more, but we know that less than half of innovations in most industries are patented and many firms report innovations without reporting R&D. According to a 2010 OECD report, the number of firms in Australia that report innovation is nearly twice the number that report R&D. Many service industries are not R&D-intensive and they don’t produce many patents either, but services and service innovation are a vital part of developed economies.

This is where the scope of the index starts to make things confusing. Using the website, I could find the strengths and weaknesses of Australia, but does a lower proportion of exports per GDP unit really mean a nation is less innovative? Do many joint ventures and alliances make a nation more innovative? Remember that in Australia many of these JVs are associated with the resources sector.

The other thing that strikes me is how much this is biased towards measures that are suitable for developed nations. I’ve just finished supervising a project that used some excellent survey data to look at innovation in Vietnamese firms. Now, these companies do report significantly improved goods and services but very few of these show up with patents or trademarks. Looking at the Global Innovation ranking of 51 tells me nothing about innovation in Vietnam. It does tell me that there aren’t many journal articles and patents coming out of Vietnam and that the economy is growing quickly but that’s not very interesting.

The report attempts to measure another dimension of innovative outputs in the creative industries, but this is where measurement gets really tough (and the authors acknowledge this). I have no objection to experimenting with measures, but the risk in composite indices is that we look at the final number and forget the assumptions that are going on behind the headline figures. Motion pictures as a measure of creative output? Hmmm…. maybe. Newspaper circulation? Good luck with that!

After reading the report, the main question that I had was “what is the question that this report answers?” Does it really help to have an index of all of these nations? Will it help to encourage innovation? Will it help governments? It’s a bit like Douglas Adams’s answer to life, the universe and everything. Following from Tim’s creative use of Orson Welles, here’s the segment from the movie.

Like the Magratheans, in an effort to reduce the complexity and idiosyncrasy of innovation to a simple number, we have generated an answer that actually doesn’t get us very far in being more innovative.

Portugal scores 42 on the innovation index. Lucky Portugal!


6 thoughts on “Some ‘Deep Thought’ on The Global Innovation Index”

  1. This is the same old stuff – the indicators represent a developed, heavy-manufacturing-centric perspective of innovation. Chris Freeman was working on this stuff in the 1960s – we just have more data now. But have we innovated much with our metrics? Not really. The problem is that less and less manufacturing is actually conducted in the west. We need truer measures of innovation, and we need to understand that resource industries innovate differently to service industries – and then there are non-profits and governments, which typically don’t even get a mention.

  2. Thanks Paul. I think we have an obligation to delve into the details when these claims to authority are made. I don’t have a problem with the concept of a global index but we shouldn’t take it too seriously.

  3. Hear hear Brian! Like you, I’m not sure that an edifice of composite indicators does much more than some of the standard measures. To really understand what firms are doing we need to ask them, and that usually means a survey, which is time-consuming and expensive – but that’s the trade-off for good data.
    Thanks for mentioning Chris Freeman. His recent passing was a sad event and we should keep his contributions to innovation studies in mind.

  4. Great post John – I have one question and one comment.

    You imply that JVs/alliances in the Australian resources sector are likely to be less innovative – is this observation about the resources sector generally or particularly in Australia? Would also love to know if there is any comparative data on innovation outcomes from alliances in different sectors.

    My comment is that I’m not convinced that these indices/rankings pass the additionality test – do they aid better decision-making above and beyond what would be occurring if they didn’t exist? In other words, would the world be significantly different if there were no country-level innovation metrics? I know governments like league tables, but my personal view is that they should be discouraged from paying attention to them – they’re at best a distraction from designing and evaluating good, context-specific policy fit for local conditions.

    I support attempts to measure innovation outcomes at the firm level and am willing to be persuaded on the value of industry-level innovation metrics. However, aggregation problems make me very doubtful about the veracity of country-level measures and comparisons.

    Given your work on measuring innovation at the regional level, do you think there is a maximum scale (sectoral or geographic) at which we should stop trying to measure innovation in the aggregate? (Sorry – that makes two questions!)

  5. Hi Kate:
    thanks for the kind comment and the questions – both of which are very good.
    I’d need to check on alliances/JVs, but we know that these are correlated with innovation in high-tech product development such as biotechnology. Most biotech innovation studies use alliances as a control variable because they are correlated with innovation. We now have an Australian dataset on innovation across all industries and it would be good to profile the mining businesses. Generally, mining JVs are designed to share project development risk rather than develop new processes or products. There are exceptions to this, such as the JV between Beach Energy, Petratherm and TRUenergy to develop geothermal energy resources in the Cooper Basin (an innovative process for producing electricity), but that’s an unusual example.
    Our experience with the mining sector is that there is a lot of hidden innovation that goes on as part of a continuous improvement process. These innovations get picked up in surveys but get totally missed in these indices comprised of secondary data such as R&D spend and patents.
    The last question is a really interesting one. I think the issue boils down to reliability and validity. Reliability is about how replicable the result would be if someone else had the data. Validity is about how well the study captures the phenomenon in question. The GII has a very high level of reliability but very low validity. It depends on what we need the measure to do and what question we need to answer. (Sorry – that’s not really an answer.) We used survey data for the Brisbane Innovation Scorecard with case studies to really understand innovation in the region. With this high validity we can go back to firms and governments with real insights into how to be more innovative.
    Thanks again Kate. Hope you are enjoying the sunny north.
