In 20 Things Good Managers Know About Innovation, number 18 is “Failing is good – try to fail as small as possible, and make sure you learn from it.”
One of the responses to that post said “Have never believed failing is good [see item 18]. Try telling that to the families of trainee pilots!!”
I get this objection a lot, and it reflects extremely dangerous thinking. Some people just seem to shut down as soon as they see the word “fail” – but the critical parts of this idea are the last two: fail small, and learn.
Hugh MacLeod made a similar point in his daily newsletter.
Nassim Nicholas Taleb addresses the fail small idea in his latest book Antifragile. He says antifragile systems gain from things that people try to avoid:
What is that something? Simply, membership in the extended disorder family. The Extended Disorder Family (or Cluster): (i) uncertainty, (ii) variability, (iii) imperfect, incomplete knowledge, (iv) chance, (v) chaos, (vi) volatility, (vii) disorder, (viii) entropy, (ix) time, (x) the unknown, (xi) randomness, (xii) turmoil, (xiii) stressor, (xiv) error, (xv) dispersion of outcomes, (xvi) unknowledge. It happens that uncertainty, disorder, and the unknown are completely equivalent in their effect: antifragile systems benefit (to some degree) from, and the fragile is penalized by, almost all of them…
Or look at errors. On the left, in the fragile category, the mistakes are rare and large when they occur, hence irreversible; to the right the mistakes are small and benign, even reversible and quickly overcome. They are also rich in information. So a certain system of tinkering and trial and error would have the attributes of antifragility. If you want to become antifragile, put yourself in the situation “loves mistakes”—to the right of “hates mistakes”—by making these numerous and small in harm. We will call this process and approach the “barbell” strategy.
The implication of the second quote is that in fragile systems, when we try to suppress failure (“we can always celebrate success!”), we actually increase the chances of a big catastrophic failure down the road. Antifragile systems avoid catastrophic failures because they have a series of small mistakes that lead to learning.
Learning is the second issue. The latest issue of Smith Journal has a great interview with Zach Klein, who has been involved with founding Vimeo, College Humor, Boxee and the Founder Collective venture capital fund.
“There are a lot of investors who invest solely in the entrepreneur, because that relationship is really important,” he explains. “They want to be supportive of their bad ideas so that when they do have a good idea, they have a front row seat to participate in the investment. I don’t perform any science to evaluate what our success to failure ratio should be. You need to invest in enough companies to be resistant to failure so that if one company fails, your entire portfolio isn’t spoiled.”
Failure, he says, is just fuel.
“Entrepreneurs and ideas are constantly mutating,” he says. “There’s never been a moment when someone just failed and that’s the end of it. Usually what happens is that someone has a theory about how something should work or should be built, and they pitch that logic to me, and I trust them. I invest money and I hope that they will faithfully execute on their plan. When it doesn’t work, it’s just as much my failure as it is theirs, and we take that experience and we hopefully convert it into another opportunity: What do we know now? What do we know better? I’ve never really dwelled on failure because it’s a critical component of succeeding. You have to experiment. You have to take a risk.”
This is what Stefan Lindegaard calls SmartFailing. He’s trying to address the same issue:
A few years ago, I argued that we needed to become better at learning through failure, but that the word failure itself is so negatively loaded. How could we create a new concept and vocabulary on the intersection of failure and learning?
I agree with Stefan: we need to reframe failure. There are a few ways we can do this:
- Don’t Fail Fast, Learn Fast: that’s what Braden Kelley says. And he is right. We need to test ideas as early as possible, and we need to set the test up as an experiment so that we are sure that we learn. When Hindustan Unilever launched their Shakti program, they started with a very small experiment. They were trying to set up a rural sales force of entrepreneurial women so that they could reach the 40% of people in India who lived in villages too small to have a store. They didn’t roll out the new program across the whole country. First, they trained 17 women and tried to answer two questions: could these women run their own business, and was there demand for HUL products in these villages? If the answer to both questions was yes, then they would expand the program. If the answer to either was no, then they learned something and could adjust their plans accordingly. That’s how to set up an experiment.
- Ask “Is the knowledge that we’ll gain worth having?” If you set up an experiment so that you learn something no matter how it turns out, then you are investing in knowledge, not failing. There’s no point in experimenting if you don’t learn. If you don’t learn, then that really is a failure, and we do need to avoid that. If you learn, then you are building an antifragile organisation.
- Scale up your investment over time. If the first Shakti experiment had yielded negative results, the cost was low. As they continued with further experiments in years 2-5, the cost gradually increased. First they expanded the geographic range and scaled the trial up to 60 women. Then they tested the supply chain, and finally they tested potential profitability. This minimizes the cost of failure.
Rita Gunther McGrath has done great research on how to make this work. In an article in Harvard Business Review, she recommends that you:
- Define what success and failure look like before you start.
- Experiment to turn assumptions into knowledge.
- Iterate through this process quickly.
- Do it cheaply – ask how much you can afford to lose to gain the knowledge.
- Contain uncertainty by testing one thing at a time.
- Build a culture that celebrates experiments that lead to learning.
- Make explicit what you learn, and share it.
The fact of the matter is that trainee pilots fail all of the time. They do it in simulators, and in trial runs on the ground. The repeated small failures make it much less likely that they will fail once they’re actually up in the air.
We should follow the same principles in our innovation work.