Friday, 26 February 2010


Being the Best - Benchmarking, motivation and KM



I posted a few days ago about Performance metrics, KM and asparagus. The story tells how publishing data about aircraft loading procedures among Peruvian asparagus producers motivated the poor performers to learn from the good performers, for the benefit of all. The authors of the study I quoted claimed:

Objective proof of superior performance helps overcome a principal barrier to
convincing experienced professionals to adopt new practices - that is, the
belief that they are already doing the right thing and that their current
results are the best that anyone can expect

I would like to follow that thought - about how benchmark data can motivate people to learn.

One of the biggest barriers to overcome in KM is the lack of desire people usually have for learning from others. It's the old "Not Invented Here" syndrome, and behind "Not Invented Here" are two things:

1. People are comfortable and familiar with their own performance, and with the way they currently do things
2. Change involves risk. "If my way works," they think, "why risk changing it? Why change horses in midstream? Why ditch a perfectly good approach for something unfamiliar?"

The great thing about good performance data, and good benchmark data, is that people then often come to realise that their approach is not "perfectly good"; their way may "work", but it works pretty badly. They become uncomfortable with their own performance, and become open to learning. "Not Invented Here" disappears, because they realise that "Invented Here" is not actually very good!

We see this very clearly in our Bird Island exercise, where people are comfortable building an 80cm tower, and think they might be able to stretch it to 120cm. Then we show them benchmark data where the record is over 3m, the mean is 285cm, and even a bunch of Chicago lawyers achieved 250cm. And we show them a picture of the record tower, so they can see this is not a joke.

What happens is that people are shaken out of their comfort zone. They realise their own performance was pretty poor. They become very open to learning. And they DO learn, and they also turn in a top-quartile performance.

The old motivation, to be safe and secure with a known approach, is replaced by a new motivation. The new motivation is "to do a decent job". (And they often feel that if they are being soundly beaten by Chicago lawyers, they aren't doing a decent job!*)

When you think about it, most people are professionals. They have pride in their work. They don't like to put in a poor performance. It's only the Homer Simpsons of this world** who are happy with shoddy work. So the existence of benchmarking data or performance data makes people aware when their performance is bad; they become dissatisfied with their approach, and are open to learning something better.

This is a very positive motivation. It reminds me of the introduction of Technical Limit (a very detailed approach to benchmarking and target setting) in the USA, and the motivation of the Ocean America drill crew to try new things and learn new approaches. Here's what the guy in charge of that program said:

"Technical Limit creates a new world; it creates something that says "there is a
big difference between perfection and where we are at today. We really were not
doing it to save the company millions of dollars; we were doing it because we
wanted to be undeniably the best drilling team in the world".

That's a very positive motivation - to be undeniably the best. And that's where the combination of good performance data and KM can really help. The performance data tells you if you're not the best, and it tells you who the best is. It tells you that you need to learn, and it tells you who to learn from. KM enables that learning.






*Apologies to all the Chicago Lawyers out there
**I was thinking of when Homer said "If you don't like your job you don't strike. You just go in every day and do it really half-assed. That's the American way".


3 comments:

SJPONeill said...

If a bunch of Chicago lawyers got to 250 metres then more power to them!! Think you may have missed the 'c' from the 'cm'? If not, a lot of Chicago Architects need to be really worried...

I think the metric approach works well, probably best, when working with technical people. In other situations, reinforcing the 'wrongness' or 'inadequacy' of current processes can be taken as threatening and the result is often for the wagon to be put in a circle, leaving the KMer on the outside with the Indians.

Anyone who thinks "It's only the Homer Simpsons of this world** who are happy with shoddy work" has never worked in a large corporate or government bureaucracy. I used to see this all the time - one of the reasons I am quite happy taking some time out and just having to put up with two large dogs with no work ethic at all - and without a clear 'what's in it for me' factor (stick, carrot, or both), many could really careless about overall system improvement.

Nick Milton said...

Thanks for spotting that one - I've changed it now

I still am not sure anyone goes to work to do a shoddy job, though I appreciate that in some circles, shoddiness becomes accepted. I also appreciate that in other circumstances, performance metrics are unwelcome. That doesn't make them a bad thing, though.

Nick Milton said...

Just to follow up on my reply to SJP - the culture he describes where "many could really care less about overall system improvement" is a difficult one to penetrate. I agree that metrics work best with technical people, but I would suggest that's partly because metrics are easier. It's pretty clear, in engineering or construction, what separates a good job from a bad job.

In areas of the public sector, in contrast, it can be very hard to understand what makes for a good performance. Is cheaper necessarily better? Is faster necessarily better? What makes for a good government policy - one that works, or one that plays well in the press?

You can see that for example in the health service, where metrics become political instruments and are treated with derision.

But where you CAN find clear, unambiguous metrics, they can support KM. Take infection rates as a result of surgery. If your hospital's rate is 1%, you might be quite happy, until you find that this puts you at the bottom of the league, and that the global average infection rate is 0.1%. That's when you know that you have something to learn.
