Friday 22 March 2019

The human bias behind group-think

There is a real human bias that drives us to agree with each other, and it can lead to groupthink and false consensus.



The power of social proof climbs rapidly with the number of people involved (from the Solomon Asch study).
Why are "canned laughter" tracks so common on TV comedies?  We all hate them, we know they are false, and yet they keep putting them on the soundtrack.  The reason is that canned laughter is a form of Social Proof, and social proof is a massive factor in the way we think and behave.

Social Proof is the name for the assumption that "if everyone else thinks so, it must be correct". Canned laughter is a subtle form of social proof, and it works: people judge comedy shows as funnier if there is canned laughter. Even though we know it's false, we instinctively think "they are all laughing, so it's funny". The TV executives know we think this way, which is why canned laughter is so endemic.

The Solomon Asch study shows an even more radical form of social proof - how up to 74% of people (as unwitting participants in an experiment) would say something they knew to be wrong, just to agree with everyone else. Asch concluded that it is difficult to maintain that you know something when everyone else knows the opposite. The group pressure ("social proof") implied by the expressed opinion of other people can lead to modification and distortion of judgement, effectively making you agree with almost anything.

The risk in Knowledge Management is very clear.

Consensus in a group may mean that everyone agrees because they all independently think the answer is correct, or it may mean that they all agree because everyone else agrees. This is particularly the case when the first person to speak is very confident; everyone else is likely to follow, and so social proof builds up (I talk about this in my blog post on the illusion of confidence, and point out that confidence is often a function of ignorance, especially ignorance in small groups. Real experts are rarely dogmatic).

I saw this for myself in a meeting in Sweden (a country where consensus is particularly valued). I asked everyone to judge the success of a project from their own perspective, to mark the level of success out of 10, and to write that number down in front of them. I was looking for outliers, and I could see that the person next to me had written a 6. We went round the table; the first person said "8 out of 10", and the marks followed - 8, 8, 8, 8. We got to the person who had written down 6, and she said "8" as well.

Social proof is such a well-known phenomenon now that it is widely used by marketers to convince us to buy things, and it can be a powerful tool when marketing KM in an organisation. However, when we are identifying knowledge, discussing knowledge, or trying to determine from a group what actually happened and why, social proof can drive groupthink and distort the truth.

In knowledge management we are not interested in consensus, and we are not interested in knowledge as something to sell to others; we are interested in truth, or as close to the truth as we can get. Social proof is not real proof, and just because everyone agrees with a statement does not mean they all believe it to be correct.

So how do we avoid conformity and groupthink driven by social proof in KM?

1) When looking for individual objective input, we must avoid "speaking out around the table". In Sweden I could have collected votes on post-it notes, or I could have said clearly "read out what you have written, even if it's not what everyone else said".

2) As facilitators of KM processes, we must always ask for the dissenting voice. "Does anyone disagree with this interpretation? Might there be other views here? What are the alternatives? Susan, you are looking concerned, do you have another view?"

3) As online facilitators, we must make dissent safe. I recall one community of practice where, in the first year, social proof was very strong. If anyone disagreed with the first post in a conversation, they would not disagree online, but would reply privately. It took a lot of work from the facilitator to reverse this trend and to develop a community where dissent was welcomed as part of the search for the truth.

4) We must be careful to avoid using social responses as a form of crowdsourcing. Crowdsourcing works either with an expert crowd willing to share dissenting voices, or with a knowledgeable crowd able to contribute independently. It doesn't work with a small, uncertain crowd building on each other's opinions, because that way you can end up with false agreement through social proof.

Social proof is real, groupthink is powerful, and it is one of the many human biases we need to beware of in KM. 
