Prior to graduating from high school I had a few different jobs, but they were all casual. My first real job was during the summer before I left for university. I worked in a large electronics firm in the group that assembled circuit boards. My memories of this job are a bit hazy – it was a while ago, and it was also pretty boring work. We worked in a semi-clean room, and the process had about six steps. There were five of us on the team, plus a manager, and we all worked on whatever needed to be done at the time.
We counted the pieces for each batch of circuit boards, sorted them into the right order, laid them out while interleaving the sheets of “pre-preg” (the composite fibre that included the adhesive), and put the sheets onto metal pins. Once the circuit boards were pinned up, we laid them out in a specific way, and then sent them through a giant conveyor belt that cooked them.
When they came out the other end, they were counted again. We took off the end plates, retrieved our metal pins, trimmed the melted pre-preg from around the edges, and sent the boards off to another part of the factory, where they soldered the transistors and resistors onto the boards that we made.
I don’t remember that much of the actual work – I’m pretty sure I’ve forgotten a couple of steps in the process. What I do remember is our performance system. Everyone on the team had to keep a daily timesheet. We recorded which batches we worked on, which of the six jobs we did, how long it took, when we took breaks, when we did miscellaneous tasks (like sweeping up) – pretty much everything.
Each of the six jobs on the timesheet had a standard attached to it – a measure of how many of each task we were expected to do in an hour. Once I got comfortable with the team, I asked Martin, the guy who had spent the most time showing me how to do things, about these standards. He said that they weren’t very accurate, so everyone ignored them. Even after just a couple of weeks, I could tell that this was true. Some of the jobs wildly overestimated the time it took to do them, but most underestimated it. The job that was the most fun was pinning up the boards. This was also the one with the least accurate time estimate – the company thought we should be much faster at this than we were – even our most lightning-fast guy couldn’t hit the standard.
I also made another discovery one day when I was talking to the manager. The people on my team might be ignoring the standards because they weren’t accurate, but management was definitely paying close attention to them. Each week she received reports on everyone’s efficiency – and these were based on performance against those flawed standards.
Like I said, the job itself wasn’t very interesting. So to keep my mind engaged, I started figuring out how to game the system. I realised that the best job to do was the one that no one wanted – counting out the pieces when they came in. So I started doing that. Pinning up was by far the worst from an efficiency standpoint, so I left the popular job to everyone else. I also discovered a few other tricks to improve the way I looked on paper.
By the end of the summer, in terms of actual speed, I was probably about 3rd best out of the five of us. But on the management report, it was a different story. Martin’s efficiency was about 110%, the lightning-fast guy was just under 120%. And I was just over 180%! This was the first time I really thought seriously about performance metrics – something that ended up being one of my strengths as a manager.
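The mechanism behind those numbers can be sketched in a few lines. The standards and speeds below are invented for illustration (the post doesn’t give the actual figures) – only the mechanism matches the story: efficiency is reported as output against the standard, so picking a job with a generous standard inflates your score, while a genuinely faster worker stuck on a job with an aggressive standard looks slow.

```python
# Hypothetical units-per-hour figures - invented for illustration.
standards = {"counting": 40, "pinning": 60}   # the pinning standard is far too aggressive
actual    = {"counting": 55, "pinning": 35}   # even a fast worker can't hit 60/hr pinning

def efficiency(hours_by_task):
    """Reported efficiency = units produced / units the standard expected."""
    produced = sum(actual[task] * hrs for task, hrs in hours_by_task.items())
    expected = sum(standards[task] * hrs for task, hrs in hours_by_task.items())
    return produced / expected

# A day spent on the unpopular counting job looks great on paper...
print(f"{efficiency({'counting': 8}):.0%}")   # 138%
# ...while a day spent pinning makes even a good worker look slow.
print(f"{efficiency({'pinning': 8}):.0%}")    # 58%
```

Nothing about anyone’s real speed changes between the two lines – only which standard their work is measured against.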
There are three innovation lessons in this story.
- People will respond to incentives if they are clearly communicated. In this case, they weren’t. If everyone had been aware of the way that the standards were actually being used, we would have been fighting over who got to count pieces, not over who got to pin them up. On the other hand, I’m not sure that the standards really reflected the behaviors that the managers wanted to encourage, which brings up the second point:
- Your metrics need to be aligned with your strategy. The metrics for our circuit board team didn’t match up with what the firm wanted. They actually wanted no defects (we weren’t measured on that at all!), at a reasonable speed. But what they measured was speed, inaccurately. Because the metrics weren’t aligned with what they wanted, I’m pretty sure that they didn’t get what they wanted. This is also often a problem with innovation. We want innovation to improve our performance, or transform our business (or industry!). But that’s not what we measure. Often we don’t measure innovation at all. If innovation and strategy are not linked, your organisation will not be innovative, and it probably won’t meet its strategic goals either.
- You almost certainly need new metrics. My electronics firm definitely did, and most of us need better innovation metrics too. Having no metrics at all is bad, and measuring something like patents is only marginally better than nothing (and could actually be worse!). We need better metrics for tracking innovation. Scott Anthony has some excellent suggestions. He recommends developing a suite of metrics that includes inputs (things like Google’s 20% rule – all the programmers have 20% of their time to put into projects), process measures (such as innovation portfolio balance), and outputs (as 3M does with their target of generating 25% of their revenue from products introduced within the past 3 years). Stefan Lindegaard has also written a very good post on this subject, with examples from Johnson & Johnson and Intel.
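The suite idea above can be made concrete with a small sketch. The metric names and target figures here are illustrative assumptions, loosely echoing the examples in the text (Google’s 20% time, portfolio balance, 3M’s 25%-from-new-products target) – the point is only the structure: inputs, process, and outputs tracked together, with gaps visible at a glance.

```python
# Illustrative innovation-metrics suite - names, targets, and actuals are invented.
innovation_metrics = {
    "inputs": {
        # share of staff time reserved for exploratory projects (cf. Google's 20% rule)
        "exploratory_time_share": {"target": 0.20, "actual": 0.12},
    },
    "process": {
        # share of the project portfolio that is non-incremental
        "portfolio_noncore_share": {"target": 0.30, "actual": 0.32},
    },
    "outputs": {
        # 3M-style: share of revenue from products under 3 years old
        "new_product_revenue_share": {"target": 0.25, "actual": 0.18},
    },
}

def gaps(metrics):
    """List the metrics that are below target, grouped by category."""
    return {
        category: [name for name, m in group.items() if m["actual"] < m["target"]]
        for category, group in metrics.items()
    }

print(gaps(innovation_metrics))
# {'inputs': ['exploratory_time_share'], 'process': [], 'outputs': ['new_product_revenue_share']}
```

A single number (like a patent count) can’t show this: here the process measure is on track while an input and an output are lagging, which tells you where to act.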
The metrics that we use provide the incentives for action within our organisations. Getting them right is important. If you are trying to improve innovation, a good first step is to improve your innovation metrics.