Filtering, Crowdsourcing and Innovation

How can we take advantage of the ‘wisdom of crowds’ in our innovation efforts? There are some distinct challenges in trying to do this. The basic idea is this: if you get a large number of people to estimate something – the weight of an ox, or the number of jellybeans in a jar, for example – usually the average of all of the estimates is closer to the actual number than any individual’s guess. Consequently, there is a strong argument for taking advantage of this phenomenon whenever you need to estimate a particular number. Businesses have used these techniques to improve their sales forecasting, for example (Gary Hamel includes a really nice example of how Best Buy used this method in The Future of Management).
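
To see the averaging effect in action, here’s a minimal simulation sketch in Python – the jar size, crowd size and noise level are all invented, and it assumes guesses are unbiased, which real crowds often are not:

```python
import random

# Toy model: 1,000 people guess how many jellybeans are in a jar
# that actually holds 750. Each guess is the true count plus noise.
# All numbers here are made up for illustration.
random.seed(1)
TRUE_COUNT = 750
guesses = [random.gauss(TRUE_COUNT, 200) for _ in range(1000)]

crowd_estimate = sum(guesses) / len(guesses)
avg_individual_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)

print(f"crowd average:            {crowd_estimate:.0f}")
print(f"crowd error:              {abs(crowd_estimate - TRUE_COUNT):.0f}")
print(f"average individual error: {avg_individual_error:.0f}")
# The crowd's error is a small fraction of the typical individual's
# error, because independent noise largely cancels out in the mean.
```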

Can this work to improve innovation? It’s not as obvious that it will. I’m currently reading You Are Not a Gadget by Jaron Lanier (more on this book in a later post). Lanier has this to say about using crowds:

The reason the collective can be valuable is precisely that its peaks of intelligence and stupidity are not the same as the ones usually displayed by individuals.

What makes a market work, for instance, is the marriage of collective and individual intelligence. A marketplace can’t exist only on the basis of having prices determined by competition. It also needs entrepreneurs to come up with the products that are competing in the first place.

Since the internet makes crowds more accessible, it would be beneficial to have a wide-ranging, clear set of rules explaining when the wisdom of crowds is likely to produce meaningful results… Among other safeguards, I would add that a crowd should never be allowed to frame its own questions, and its answers should never be more complicated than a single number or multiple choice answer.

Crowds can be useful, but also dangerous. Nassim Nicholas Taleb says that crowdsourcing should be avoided in situations where the potential payoffs are very complex, and when we don’t know what the outcome probability distribution looks like. Unfortunately, this is precisely the case for most innovations.

Relying on crowds can lead to innovation problems. Stefan Lindegaard identifies this as one of the common causes of open innovation failure (the comments on that post are worth reading too):

Many companies start off with idea generation platforms hoping that external contributors will contribute with great ideas and/or technologies. Most do not deliver on the expectations as they get more trash than gold.

And in a post that addresses some of the issues with crowdsourcing really nicely, Graham Horton says:

In conclusion, customer idea portals as they are currently popularly advocated will produce limited results; they will only provide suggestions for solutions that are apparent to customers, given their level of expertise and self-knowledge.

All this might suggest that we can’t use crowds to help innovation. However, I think that these two quotes point to a way that we can still take advantage of crowds in our innovation efforts. One of the issues is that we often misunderstand how crowdsourcing actually works. The Lindegaard quote shows that people think we can turn to our crowd (customers, stakeholders, etc.) and just wait for the good ideas to roll in. This is in line with a common understanding of crowdsourced systems – people often talk about Linux, for example, as a process where thousands of people write bug fixes for the software, and all of these fixes get put into the program, making it better. This misses a critical step.

[Diagram: the crowd submits content → a small group filters it → the best content is published]

That’s a diagram that I made last year to explain to some friends how icanhascheezburger.com works – but it explains Linux just as well as it explains lolcats. The critical step in the process is the middle one. Both systems crowdsource content – Linux crowdsources code, icanhascheezburger crowdsources cat pictures. The problem is, not all of the code works, and not all of the lolcats produce lols. In each case, there is a small group that filters the incoming content. We don’t just have crowds creating stuff and then voting on stuff. We have crowds creating stuff that answers questions posed by the group guiding the process, and the answers that work are then selected by that group as well.
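
Here’s a toy sketch of that aggregate-then-filter pattern in Python. The Submission fields and the expert_filter function are illustrative inventions, not how Linux or icanhascheezburger actually work internally – the point is just that a small guiding group sits between the crowd and the output:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    author: str
    content: str
    answers_the_brief: bool  # does it address the question we posed?
    actually_works: bool     # does the code run / does the lolcat produce lols?

def expert_filter(inbox):
    """The critical middle step: a small guiding group keeps only the
    submissions that answer the brief and actually work."""
    return [s for s in inbox if s.answers_the_brief and s.actually_works]

inbox = [
    Submission("alice", "patch fixing the scheduler bug", True, True),
    Submission("bob", "feature nobody asked for", False, True),
    Submission("carol", "patch that doesn't compile", True, False),
]

print([s.author for s in expert_filter(inbox)])  # -> ['alice']
```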

This leads to the answer that both Lindegaard and Horton suggest: in order to get useful answers from crowds, we have to have good internal capacity ourselves. Crowdsourcing needs to be guided. To use the crowd in innovation, we need to set the questions. And we need to know enough to be able to figure out when the crowd is giving us good answers.

A while ago I talked about using jams to select ideas. This process follows these principles. The questions being asked are set by the organisation, so the crowd is trying to address a specific problem. And the best answers are not just judged by popularity – there are several evaluation mechanisms that can be used. You can use the votes and go with the most popular. You can use the ideas that were most polarizing. You can take the ideas that are generated and plug them into whatever other system you use (stage/gate, gut feel, whatever).
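
As a rough illustration, here’s a sketch of two of those selection mechanisms in Python, assuming each idea collects ratings on a 1–5 scale (the scale and the numbers are made up). ‘Most popular’ picks the highest mean rating; ‘most polarizing’ picks the highest variance, which is large when voters split between loving and hating an idea:

```python
from statistics import mean, pvariance

# Hypothetical ratings on a 1-5 scale for three crowdsourced ideas.
ratings = {
    "idea A": [4, 4, 5, 4, 4],  # broadly liked
    "idea B": [1, 5, 1, 5, 5],  # splits the crowd
    "idea C": [2, 3, 2, 3, 2],  # broadly bland
}

# "Most popular" = highest average rating.
most_popular = max(ratings, key=lambda k: mean(ratings[k]))
# "Most polarizing" = highest variance: voters love it or hate it.
most_polarizing = max(ratings, key=lambda k: pvariance(ratings[k]))

print("most popular:   ", most_popular)     # idea A
print("most polarizing:", most_polarizing)  # idea B
```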

Crowdsourcing, then, is another tool that we can use in our aggregate, filter and connect strategies. In this case, the filtering is the critical step. If we don’t filter correctly, crowdsourcing simply aggregates, which by itself doesn’t help us much. And the aggregated crowdsourced answers need to be connected to questions that we know are important. Crowdsourcing is not a panacea, but it can be a useful innovation tool if we use it correctly.

Graham Horton has written a terrific post that looks at which questions we should ask the crowd.

14 thoughts on “Filtering, Crowdsourcing and Innovation”

  1. Tim,

    For me, the crucial point in your article is contained in these sentences:

    “Crowdsourcing needs to be guided. To use the crowd in innovation, we need to set the questions. And we need to know enough to be able to figure out when the crowd is giving us good answers.”

    As you point out, schemes that “just ask the world for ideas” fail, because they only receive trivial inputs. In order to get useful information, you have to ask concrete questions that have specific intentions.

  2. Tim,

    As I see it, there are two important problems with such crowdsourcing: for crowdsourcing to work, you need some minimum number of contributors to achieve requisite variety. And even if crowdsourcing adds value, the process of sifting through useless feedback can be so costly that it renders the whole process uneconomical.
    But does crowdsourcing add value in the case of innovation? The most likely outcome would be a regression to the mean, where most outcomes are sufficiently bland to achieve crowd consensus.
    As you rightly point out, we could make better use of crowdsourcing. You suggest that a filtering referent group be tasked with selecting promising ideas based on some internal rationale of what is good and what is bad. That is fine, but it does not really eliminate the two problems I just pointed to.
    The obvious solution (to me) is not to use crowdsourcing for idea generation at all. Crowds are better suited to filtering and diffusion. And this need not necessarily be done via popular vote (à la digg). The referent group could use the crowd as a control to assess the reliability of their internal filtering rationale and whether it reflects common sentiment.

Comments are closed.