Following my previous post about the Invisible Gorilla and the illusions of awareness, I would like to share some thoughts on the illusion of confidence, and how this might impact Knowledge Management.
The illusion of confidence describes the way people value knowledge from a confident person. This would be fine if confidence and knowledge went hand in hand, but in fact the relationship is almost inverse: a lack of knowledge is allied to overconfidence. Lack of knowledge leads to confidence, and confidence leads to being seen as knowledgeable.
Let's explore these ideas a little.
Christopher Chabris and Daniel Simons, the authors of The Invisible Gorilla, give several examples of the illusion of confidence.
They mention studies of chess players. Each chess player is given a points rating based on their competition results, which is in fact a very effective and reliable measure of their ability. Yet 75% of chess players believe they are underrated, despite the evidence to the contrary. They are overconfident in their own ability.
They mention studies of groups of people coming together to solve a maths problem. You would expect the group to defer to the person with the greatest maths knowledge, wouldn't you? In fact, the groups deferred to the most confident person, regardless of their knowledge. In trials, in 94% of cases the final answer given by the group was the first answer suggested, offered by the most confident person present, regardless of whether it was right or wrong.
Which doctor would you rather trust, they ask: the doctor who confidently writes a prescription without any ado, or the one who consults a reference book or online diagnosis system before reaching a diagnosis? The heart says to trust the former for their confidence; the head says to trust the latter for their use of knowledge.
When I was a geologist, I used to run a field trip to Sardinia, the primary purpose of which was to undermine the confidence of young geologists by showing them how complex geology actually is. We would drive to a viewpoint, look out over the countryside towards our next stop, and ask people to predict what rocks we would find there. For the first two days, almost every prediction was wrong. As well as raising their knowledge of geology, the field course gave them a healthy realisation of how little they really knew. With increased knowledge came decreased overconfidence.
So what are the implications for Knowledge Management?
Firstly, when sharing knowledge in a Peer Assist or Knowledge Exchange, we have to be careful to distinguish between confidence and knowledge. The facilitator needs to be aware of this illusion, and must make sure that everyone has their input and that we understand the knowledge and experience behind each viewpoint. The facilitator cannot just allow the most confident person to sway the result; he or she must explore what the group actually knows before coming to a conclusion.
Secondly, we need to be aware of how groups can reinforce confidence without increasing knowledge. The authors of the book mention a Harvard study of confidence vs knowledge in a trivia test. They certainly saw overconfidence in individuals - people were confident of their answers 70% of the time, while being correct only 54% of the time. They then put the participants together in pairs. The counterintuitive outcome was that the pairs were no more successful than the individuals, but they were a lot more confident. When two low-confidence people were put together, their overall confidence increased by 11%, even though their success rate was no higher than before.
Again, a facilitator needs to be aware of this. A confident group expressing an opinion is not necessarily right, and can make some crazy decisions. The authors relate this experiment to the example of Georgia declaring war on Russia, a country which outnumbered it 25 to 1. I quote:
"This experiment illustrates why the Georgian government's high-confidence decision to provoke war with Russia did not necessarily stem from the overconfident views of any one individual. The people making these decisions might each have had low confidence - perhaps so low that they might not have given the order themselves. In a group, however, their confidence might have inflated to a point where what were actually uncertain, risky procedures seemed highly likely to succeed"
So confident is not the same as knowledgeable, and confident decisions are not the same as low-risk decisions.
1 comment:
It's one of those pesky cognitive biases which evolution gave us and which works fine if you are a neolithic hunter-gatherer, but not so hot when you are an iPod-wielding, latte-sipping modern.
An associated finding is that as subject-knowledge climbs, answers tend to be less definite, longer, and more complex.
The novice will give a definite yes/no answer with great certitude; the expert tends to say something like "well, that depends on ..."
... and of course we have a bug in our brains that makes us trust the short and definite answer over the long and hesitant one.
What was it that Bacon said in his prologue?
Something about those who make things out to be fully understood as doing great harm to science.