Showing posts with label forgetting. Show all posts

Tuesday, 3 December 2019

General ignorance and the risks of outdated knowledge

Knowledge that "everyone knows" but which is quite wrong is termed "general ignorance", and it is a dangerous component of KM.


QI stage-set, from Wikimedia Commons
There is a highly amusing TV quiz here in the UK, called QI. One section of the show is called "General Ignorance", and consists of asking apparently easy questions, with answers that "everyone knows", but in fact they know incorrectly. The audience has great fun watching the participants answer with obvious, but totally wrong, answers.

Typical General Ignorance questions are

  1. How long did the Hundred Years War last? (Answer: 116 years)
  2. Which country makes Panama hats? (Answer: Ecuador)
  3. From which animal do we get catgut? (Answer: horses, sheep or cows)
  4. Puffinus puffinus is the scientific name of what bird? (Answer: the Manx Shearwater)


Beware of General Ignorance  


I wrote last week about the maturity trajectory of knowledge – how knowledge passes through stages of maturity; from discovery, to exploration, to consolidation, to embedding, to obsolescence and reinventing. An exciting new idea passes through the stages, to become established knowledge; something “everybody knows”. Everyone knows the earth is just one planet in a solar system, everyone knows how an internal combustion engine works, and everyone knows that you need to wear a hat in the winter, because you lose most of your heat through your head.

Except, in the last case, you don’t. You don’t lose any more heat through your head than you do through any other part of your body. That’s one of the things “everyone knows” wrongly. This is knowledge that is obsolete and needs to be rejected and reinvented. Knowledge has a half-life beyond which it is no longer true, and common knowledge which has exceeded its half life - is beyond its believe-by date - has become general ignorance.

In Knowledge Management, we need to beware of the things that “everyone knows”, and occasionally we need to challenge them. Maybe they are not correct, maybe they have exceeded their believe-by date, maybe the context has changed and the knowledge is out of date.

For example, everyone knows you put the milk in the cup before you add the tea; but people used to do this to sterilise the milk, and nowadays all our milk is pasteurised. Similarly, someone told me recently that you mustn't pick blackberries near busy roads for fear of lead poisoning - but who uses leaded petrol nowadays?

The ISO Knowledge Management standard requires a competent KM system to pay attention to the life of knowledge, and to have an approach to handle outdated or invalid knowledge, in order to protect the organization from making mistakes or working inefficiently, as a result of using outdated knowledge.

Communities of practice, and practice owners, need to be vigilant for general ignorance, and be prepared to challenge received wisdom. Just because "everyone knows" that (for example) the Canary Islands are named after yellow birds, doesn't make it correct.

Be aware of the risks of general ignorance, and ensure your KM Framework deletes or archives all knowledge that has exceeded its believe-by date.

Thursday, 13 December 2018

The Gorilla illusions and the illusion of memory

Here is a reprise from the archives - a post primarily about the illusion of memory. The story here from Chabris and Simons raises some disturbing issues about the trustworthiness of tacit knowledge over a long timescale.




Gorilla 2
Originally uploaded by nailbender
I have just finished reading The Invisible Gorilla, by Christopher Chabris and Daniel Simons (an extremely interesting book). These are the guys who set up the famous "invisible gorilla" experiment (if you don't know it, go here). The subtitle of the book is "ways our intuition deceives us", and the authors talk about a number of human traits - they call them illusions -  which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

The illusions which have most impact on KM are three in number, and I would like to address them in a series of blog posts, as it's a bit much to fit into a single one.

The illusion of memory has massive impact in KM terms, as it affects the reliability of any tacit knowledge that is held in human memory alone.

I have already posted about the weakness of the human brain as a long-term knowledge store. Chabris and Simons give some graphic examples of this, pointing out how even the most vivid memories can be completely unreliable. They describe how one person had a complete memory of meeting Patrick Stewart (Captain Picard of Star Trek) in a restaurant, which turned out not to have happened to him at all, but to be a story he had heard and incorporated into his own memory. They talk about two people with wildly differing memories of a traumatic event, which both turned out to be false when a videotape of the event was finally found. And they give this story of a university experiment into the reliability of memory.

 On the morning of January 28, 1986, the space shuttle Challenger exploded shortly after takeoff. The very next morning, psychologists Ulric Neisser and Nicole Harsch asked a class of Emory University undergraduates to write a description of how they heard about the explosion, and then to answer a set of detailed questions about the disaster: what time they heard about it, what they were doing, who told them, who else was there, how they felt about it, and so on.
Two and a half years later, Neisser and Harsch asked the same students to fill out a similar questionnaire about the Challenger explosion. 
The memories the students reported had changed dramatically over time, incorporating elements that plausibly fit with how they could have learned about the events, but that never actually happened. For example, one subject reported returning to his dormitory after class and hearing a commotion in the hall. Someone named X told him what happened and he turned on the television to watch replays of the explosion. He recalled the time as 11:30 a.m., the place as his dorm, the activity as returning to his room, and that nobody else was present. Yet the morning after the event, he reported having been told by an acquaintance from Switzerland named Y to turn on his TV. He reported that he heard about it at 1:10 p.m., that he worried about how he was going to start his car, and that his friend Z was present. That is, years after the event, some of them remembered hearing about it from different people, at a different time, and in different company.

Despite all these errors, subjects were strikingly confident in the accuracy of their memories years after the event, because their memories were so vivid—the illusion of memory at work again. During a final interview conducted after the subjects completed the questionnaire the second time, Neisser and Harsch showed the subjects their own handwritten answers to the questionnaire from the day after the Challenger explosion. Many were shocked at the discrepancy between their original reports and their memories of what happened. In fact, when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.
The authors conclude that those rich details you remember are quite often wrong—but they feel right. A memory can be so strong that even documentary evidence that it never happened doesn't change what we remember.

The implication for Knowledge Management


The implication for Knowledge Management is that if you will need to re-use tacit knowledge in the future, then you can't rely on people to remember it accurately. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, the facts will have been rewritten to be closer to "what feels right". The forgetting curve will have kicked in, and it kicks in quickly.  Tacit knowledge is fine for sharing knowledge on what's happening now, but for sharing knowledge with people in the future (ie transferring knowledge through time as well as space) then it needs to be written down quickly while memory is still reliable.

We saw the same with our memories of the Bird Island game in the link above. Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it's right. People may not only refuse to believe the explicit record, they may even edit it to fit their (by now false) memories.


Any KM approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

Tuesday, 17 April 2018

The impact of the forgetting curve in KM

We forget stuff over time, if we don't practice it. What does that mean for Knowledge Management?



The human brain learns and remembers stuff, but it also forgets stuff too. We know all about learning curves, but also need to realise there is a forgetting curve. The brain discards knowledge it feels is irrelevant or not urgent, then begins to subtly alter the remaining knowledge so that it fits with our preconceptions.  We learn, we fill our brain with knowledge, then it begins to seep away. 


There are plenty of studies that show this effect. This reference, for example, suggests that, on average, students forget 70 percent of what they are taught within 24 hours of the training experience, unless given frequent "boosters" or reminders to keep the knowledge fresh. We found in our Bird Island exercise that having done the exercise before did not help people perform better a few months later. And Daniel Schacter has written a whole book on how the mind forgets and remembers, reviewed here.
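The shape of this effect is often modelled as an exponential decay curve. Here is a purely illustrative sketch - the function names are mine, and the "stability" constant is an assumption chosen so that roughly 70% is forgotten within 24 hours, in line with the reference above, not a figure from any of these studies:

```python
import math

def retention(hours_since_learning, stability=20.0):
    # Ebbinghaus-style exponential forgetting curve: the fraction of
    # material still retained a given number of hours after learning.
    # 'stability' is an assumed constant, tuned so that roughly 70%
    # is forgotten within 24 hours.
    return math.exp(-hours_since_learning / stability)

def retention_with_boosters(hours_since_learning, booster_interval, stability=20.0):
    # A "booster" or reminder restarts the decay clock, so only the
    # time elapsed since the last refresher drives the forgetting.
    return retention(hours_since_learning % booster_interval, stability)
```

With these assumed numbers, only about 30% survives the first day unaided, while a daily refresher keeps retention permanently high - which is exactly the effect of the "boosters" mentioned above.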

The implications are these:


  • Telling people something does not give them lasting knowledge unless they have a chance to use and practice it. Therefore it's best to have the Telling as close as possible to the Using. This speaks in favour of pull-based KM, where people seek knowledge as and when they need it, rather than push-based KM.
  • Leaving knowledge as tacit "head knowledge" will work if an activity is regularly practiced or happens on a regular basis. The regular ongoing nature of the activity keeps practice knowledge fresh (the lower chart in the diagram above).
  • This is particularly true of communities of practice. Connecting up people from multiple parts of the organisation makes it more likely that the activity is being practiced somewhere, and therefore that someone in the community has fresh experience and knowledge. It's called a Community of Practice because the knowledge is being practiced by the community.
  • When knowledge is infrequent or irregular, then keeping knowledge as tacit "head knowledge" is a risky strategy (the upper chart in the diagram above). We may think we can remember an activity we did a year ago, but the chances are that we can't recall any reliable detail, and many of the things we remember are false. In situations like this we need to document knowledge as best we can, both as an aide memoire to the knowledge which still remains deeply buried in our heads, and as a replacement for the knowledge we have forgotten. Because we WILL have forgotten huge chunks of it.
Therefore knowledge of the routine operations of a factory, for example, can safely be left tacit. The operators deal with this knowledge day-in, day-out, and can keep it fresh in their heads. The knowledge associated with the non-routine operations, however - the emergencies, the process upsets, and the maintenance shutdowns - probably needs to be recorded and stored, because you won't be able to trust the operators' memories from the last time something similar happened.
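This routine/non-routine split can be expressed as a simple rule of thumb. The sketch below is illustrative only - the function name and the six-month memory "half life" are assumptions for the sake of the example, not figures from any study:

```python
def knowledge_strategy(uses_per_year, memory_half_life_months=6):
    # Illustrative rule of thumb: if the gap between uses of the
    # knowledge is shorter than the assumed memory half-life,
    # regular practice keeps it fresh and it can stay tacit;
    # otherwise it should be documented before it fades.
    months_between_uses = 12.0 / uses_per_year
    if months_between_uses <= memory_half_life_months:
        return "keep tacit - regular practice keeps it fresh"
    return "codify - memory cannot be trusted between uses"

print(knowledge_strategy(365))    # routine factory operations, daily
print(knowledge_strategy(1 / 3))  # a maintenance shutdown every three years
```

The real decision is of course a judgment call, but the logic is the same: frequency of practice drives the choice between tacit and codified knowledge.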

The type of activity, and in particular the frequency with which it is practiced, can make a huge difference to the way you manage the associated knowledge.

Friday, 26 August 2016

7 ways the brain loses or distorts knowledge

There are seven ways by which an individual forgets, doctors or otherwise overwrites their memories, and through which their knowledge gets lost.


Would you store your documents in a system that:
  • Begins to lose them as soon as they are filed
  • Never stores many of them properly in the first place
  • Often won't let you find them when you need them
  • Returns results that are wrong
  • Allows documents to be falsified, and
  • Gradually adjusts all the documents to fit what you currently believe?

No, you wouldn't - but that's how your memory works: the system in which you store tacit knowledge.


The human brain, despite its many remarkable features, is not great at retaining detail in the long term. A series of posts on the Farnam Street blog (posts one, two and three) reviews a book by Daniel Schacter called "The Seven Sins of Memory: How the Mind Forgets and Remembers".

For those of us in Knowledge Management, this is crucial stuff. We often assume that the majority of organisational knowledge is held in the minds of individuals, and there are many people who will argue that knowledge is ONLY in people's minds, and becomes information once recorded (an old but ultimately unresolvable argument).

But is it safe to store knowledge in brains? Here are seven ways in which brains lose or distort this knowledge.

  • Transience - the issue of the forgetting curve. As the blog says - 
"Soon after something happens, particularly something meaningful or impactful, we have a pretty accurate recollection of it. But the accuracy of that recollection declines on a curve over time — quickly at first, then slowing down. We go from remembering specifics to remembering the gist of what happened ... What we typically do later on is fill in specific details of a specific event with what typically would happen in that situation".
  • Absent-mindedness - the process whereby the memory is never properly encoded, or is simply overlooked at the point of recall, and never transferred from short-term to long-term memory.  When your attention is divided, you never store the memory in the first place.
  • Blocking - the process where you know you know something, but you can't recall it. "It's on the tip of my tongue" you say, but you still can't recall the knowledge you know that you know.
  • Misattribution - where you recall something that is actually wrong. For example, I prided myself on being able to remember word for word (in Norwegian) the introduction to a Norwegian Christmas TV serial from the 1990s. Then I found it on YouTube, and discovered I was almost 100% wrong in what I remembered. Misattribution is what causes eyewitness testimony to be so dangerous.
  • Suggestibility - the way in which someone or something can implant false memories in your mind (a very disturbing phenomenon).  As the blog says, "Suggestibility is such a difficult phenomenon because the memories we’ve pulled from outside sources seem as truly real as our own".
  • Bias - the way in which you gradually filter your memories to become consistent with your current worldview and with your personal "narrative". There are in fact four biases that we are prone to when editing our memories: consistency and change bias, hindsight bias, egocentric bias, and stereotyping bias.
  • The "seventh sin" on the list is Persistence - the way in which unpleasant memories can persist, despite our attempts to forget. Although this is not a failure of memory, the unpleasant and persistent memories may come, through persistence, to override other memories, thus giving a distorted picture. 

How can Knowledge Management help?

Knowledge management has to have solutions to these issues, as they are real and pervasive. Some of the solutions are as follows.

  • Team reflection processes, such as After Action Review and Retrospect, are opportunities for a team to review and rehearse what happened in an activity or project. By talking together, they fill in the gaps caused by absent-mindedness, and help cement the memories deeply enough to combat some (but not all) of the transience. AARs and Retrospects should become a habit in the organisation, and should be held as soon as possible after the activity in question, before transience sets in.
  • Team logging - perhaps through the use of a project blog or similar, allows the blog posts to act as an aide-memoire, thus avoiding misattribution, suggestibility and bias.
  • Rehearsal through conversation - perhaps through conversations in a community of practice, keeps knowledge fresh and avoids many of the time-related aspects of knowledge loss. Ensure your community discussions are open to all, so all practitioners get constant exposure to discussion and exercise their memories on a daily basis.
  • Codification - imperfect as codification is, it is the only way to retain details in the long term and avoid all of the 7 issues mentioned above. 
Make sure your Knowledge Management Framework includes team reflection, logging, conversation and codification in their appropriate places. Don't rely on the human memory as a long-term knowledge store, given its seven modes of failure.

Monday, 16 May 2016

The false allure of alumni networks in knowledge retention

When faced with a knowledge retention challenge, it seems an obvious idea to set up an alumni network of retirees so their knowledge can still be accessed. But is this always such a good plan?


Image from wikimedia commons
The allure of the alumni network is that it seems to avoid the knowledge retention issue. If critical knowledge is still held by the alumni, then by networking them (perhaps by including them in the skills directory, or letting them retain membership of the relevant communities of practice) you still have access to the source of knowledge. If someone in the organisation needs knowledge, the knowledgeable alumnus remains reachable.

However there are a number of drawbacks with this model, which you need to think through carefully.


  • If the alumnus has retired, then they are no longer active in their domain. Their memory will begin to fade, often quite quickly. The human brain is a poor long-term knowledge store (see this blog post on the forgetting curve). Over time the details begin to disappear, then the main points become re-invented and become false knowledge (see also this post). 
  • Unless they still have complete access to the company system, the alumnus will no longer have access to the files and documents they would have used to jog their memory. One would hope that the alumni don't still hold a private collection of company files on their laptops - and yet it was often this private stash that they used to refer to. Without it, they are not so useful.
  • If the alumnus has retired, then they begin to lose their “relationship capital”. People don't know them; they fall out of the “circle of discussion”, which is closely linked to the “circle of trust”.
  • Once the alumnus has retired, their knowledge begins to go out of date. It’s a short step from “You shouldn't do it like that for the following valid reasons” to “In my day we never did it like that”.
  • You lose the face-to-face aspects of knowledge transfer, such as coaching and mentoring.
  • You are only postponing the problem for a few years. After a while the retirees will lose interest, and you are putting a big chunk of your KM strategy in the hands of people who one day will choose to go and play golf instead.
  • If the alumnus is available for rehire as a consultant, then you immediately introduce money into the equation, and set up a barrier to wider knowledge retention. As Dave DeLong points out in his excellent book “Lost Knowledge”:

This (using retirees as consultants) may seem like the most practical approach, but it can also undermine knowledge transfer practices among older employees who know their expertise is their ticket to a comfortable consulting relationship after they retire. When older employees are routinely hired back as contractors, they have much less incentive to share their knowledge with others before retiring. “Your knowledge is your security here” said a retired research scientist who had returned as a consultant. “If you didn't still have the knowledge, they wouldn't want you back”.

Because of these pitfalls, the Alumni model should not be an alternative to a comprehensive Knowledge Retention and Transfer strategy. At best, it can be seen as a back-stop. At worst (the final bullet point above) it can actually undermine other retention efforts.

There are, however, a couple of circumstances where it can add value. I can think of two:
  • When the knowledge is deeply historical, there is no other means to retain it, and nobody to transfer it to. The type example of this is NASA's Oral History project where NASA retirees have been interviewed to develop an oral history of the major events in the organisation's history.
  • When the activity is being outsourced. Here the knowledge will not be retained in-house, but the company will still want access to the services of skilled and knowledgeable people with an insider view of the organisation. A type example of this was when Knoco was set up in 1999. BP were winding down their KM program, and wanted access to people with knowledge and experience in oil-sector KM. The core of the KM team left to form Knoco, BP granted us intellectual property rights to the knowledge we had developed, and we provided services back to BP right up until the recent oil price crash. As alumni, we understand the context of the organisation, and as active consultants can also bring new experiences and new developments back to the organisation.

Friday, 16 October 2015

Avoiding the forgetting curve - the Basis of Design document

I blogged last week about the forgetting curve, and how easily vital knowledge can be lost from the human memory. Here's one way in which the forgetting curve can be countered.

My post had a reply on Linkedin from Vladimir Riecicky, who wrote

"A forgetting curve is a very well known paradox in software engineering: A well designed software system does not require much maintenance and software engineers are not busy fixing its bugs. This is a good news for the maintenance budget and at the same time a bad news for the (expensive) engineering knowledge concerning the respective software design - it erodes since software engineers do not have to utilize it. At the end you are kind of lucky to have a certain level of maintenance just to keep the knowledge fresh. A depreciation curve of a software system in general is quite an interesting issue that keeps (clever) CIOs busy".

Not just in software design either -  in the oil sector the concept of the forgetting curve is well recognised. Whenever there is a hiatus of more than a year in drilling activity on an oilfield, performance is markedly poorer when drilling resumes, even when there is a well documented methodology, because the knowledge associated with that methodology has been lost.

In the oil sector, they counteract this with a document that they call the Basis of Design.

The basis of design is a simple document that tells you why an oilwell was planned the way it was. A well design is based on a whole lot of assumptions, many of which eventually turn out to be wrong. Unless you capture these assumptions, you can never understand the basis for the design. The document is written as a pre-cursor to the detailed plan, and then rewritten at the end of the well to capture best-practice thinking on the well design.

The BoD is rewritten after the well is drilled, and the difference between the pre-well and the post-well versions provides a learning history for the well, capturing the new knowledge gained.

The “basis of design” documents, for each section of the well: “What is the objective of each design element for this section? What are the performance measures?” This breaks the well plan down into manageable portions, and sets the context for the detailed plan, which will be based on the required objective for each element.
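As an illustration of the pre-well/post-well comparison, a BoD entry for one well section might be represented like this. This is my own sketch - the class and field names are assumptions for the example, not an industry-standard schema:

```python
from dataclasses import dataclass

@dataclass
class SectionBasisOfDesign:
    # One section of the well: why it is designed the way it is.
    objective: str
    performance_measures: list
    assumptions: list

def learning_history(pre, post):
    # Comparing the pre-well and post-well BoD shows what was learned:
    # which assumptions were disproved, and what new knowledge was
    # gained during drilling.
    return {
        "assumptions_disproved": [a for a in pre.assumptions if a not in post.assumptions],
        "new_knowledge": [a for a in post.assumptions if a not in pre.assumptions],
    }
```

Diffing the two versions in this way yields the "learning history" described above, section by section.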

As one drilling engineer said, referring to the Basis of Design for his oilfield  - "I could move to this office and put out a quality well plan within a week based on this document, and there hasn't been a drilling rig here for two years".

The Basis of Design therefore counters the forgetting curve, by capturing the reasoning behind the methodology.

Thursday, 8 October 2015

The curious case of the forgetting curve

The learning curve is a common phenomenon we see in Knowledge Management. As an organisation acquires more knowledge, their performance increases as they "climb the learning curve".  However without care and attention, the curve can reverse, and become a forgetting curve. Here is a case history where that happened.



I have blogged before about our Bird Island exercise, probably the longest running KM experiment in the world, and about how it demonstrates in a very clear way that Knowledge management can drive performance.

It is like a lab experiment in KM, with very clear learning points.

We had an interesting twist to the game a couple of weeks ago, where we had two people in the class who had done the game before, about 6 months ago. Now you might expect that this previous experience and knowledge would give them an edge.  They ought to remember some of the key design principles from the game, and they should therefore be well ahead of the other teams based on this knowledge.

So I put these two people with experience into the same team, to see if this would happen.

Well, it happened to an extent. The two people remembered some bits and pieces, and these included some high level design principles, and a few tips and hints. However much of the other detail required to succeed had been forgotten over the intervening 6 months. They built a tower slightly taller than the other teams, but one third the height of their performance 6 months previously.

As one of them said in the debrief "a little knowledge is a dangerous thing".

The graph above shows their performance 6 months previously, where the 5 bars on the left represent how they gained knowledge through their previous Bird Island experience. The red line is the learning curve they went through in the game.

The grey bar on the right shows their performance 6 months later, built with the help of a hazy memory and "little knowledge" from the previous exercise, and therefore shows how much had been lost in the interim. The green line is therefore their forgetting curve.


This result reinforces the frailty of human memory as a long-term knowledge store, and therefore the need to support that memory through some sort of capturing and recording. Even 6 months is too long to leave knowledge in memory alone. We need to be capturing it as we go, even as an aide-memoire, otherwise we lose it.

Or even worse, we retain a little knowledge, and find that it is just enough to be dangerous.

Wednesday, 16 July 2014


The Unlearning curve


We all know the concept of the learning curve – how an individual, team or organisation can improve performance over time through an accumulation of knowledge. 


You see this effect all over – in high jumps, frog rodeos, nuclear power plant construction and oilwell drilling – the more time you spend, the more you learn, and the better your performance.

Knowledge management helps the learning curve in two ways - it helps a single team learn more quickly and climb the learning curve faster, and it helps knowledge to be passed between people and teams, so that the learning curve can extend beyond the activities of any one individual or project.

But what about the converse? What about the unlearning curve?

With individuals, if we don’t practice something, we forget. We get worse over time, not better.

As our memory fades, then we move from unconscious competence to unconscious incompetence, and we often get a shock when we try to reproduce previous skills and find that we have forgotten what we used to know.

Organisations have forgetting curves as well. What teams used to be able to do easily, now becomes impossible.

There are some very prominent examples of this; NASA “forgetting” how to design a Saturn Rocket, NNSA forgetting how to make a crucial component of Nuclear Warheads, Arup forgetting about the resonant periodicity of footbridges.

How do you avoid this kind of unlearning?

The answer seems to be to embed knowledge into designs, processes and procedures rather than relying on human memory, and to keep not only the design documents and procedural documents themselves, but also the thinking behind them.

There comes a time when knowledge needs to be written down, and written down carefully, before the unlearning curve kicks in.

Thursday, 29 July 2010


The Gorilla Illusions



Gorilla 2
Originally uploaded by nailbender
I have just finished reading The Invisible Gorilla, by Christopher Chabris and Daniel Simons (an extremely interesting book). These are the guys who set up the famous "invisible gorilla" experiment (if you don't know it, go here). The subtitle of the book is "ways our intuition deceives us", and the authors talk about a number of human traits (they call them illusions) which we need to be aware of in Knowledge Management, as each of them can affect the reliability and effectiveness of Knowledge Transfer.

The illusions which have most impact on KM are three in number, and I would like to address them in a series of blog posts, as it's a bit much to fit into a single one.

The illusion of memory has massive impact in KM terms, as it affects the reliability of any tacit knowledge that relies on memory.

I have already posted about the weakness of the human brain as a long-term knowledge store. Chabris and Simons give some graphic examples of this, pointing out how even the most vivid memories can be completely unreliable. They describe how one person has a complete memory of meeting Patrick Stewart (Captain Picard of Star Trek) in a restaurant, which turns out not to have happened to him at all, but to be a story he has heard and incorporated into his own memory. They talk about two people with wildly differing memories of a traumatic event, which turn out both to be false when a videotape of the event is finally found. And they give this story of a university experiment into the reliability of memory.

 On the morning of January 28, 1986, the space shuttle Challenger exploded shortly after takeoff. The very next morning, psychologists Ulric Neisser and Nicole Harsch asked a class of Emory University undergraduates to write a description of how they heard about the explosion, and then to answer a set of detailed questions about the disaster: what time they heard about it, what they were doing, who told them, who else was there, how they felt about it, and so on.
Two and a half years later, Neisser and Harsch asked the same stu­dents to fill out a similar questionnaire about the Challenger explosion.The memories the students reported had changed dramatically over time, incorporating elements that plausibly fit with how they could have learned about the events, but that never actually happened. For example, one subject reported returning to his dormitory after class and hearing a commotion in the hall. Someone named X told him what happened and he turned on the television to watch replays of the explo­sion. He recalled the time as 11:30 a.m., the place as his dorm, the ac­tivity as returning to his room, and that nobody else was present. Yet the morning after the event, he reported having been told by an ac­quaintance from Switzerland named Y to turn on his TV. He reported that he heard about it at 1:10 p.m., that he worried about how he was going to start his car, and that his friend Z was present. That is, years after the event, some of them remembered hearing about it from differ­ent people, at a different time, and in different company.

Despite all these errors, subjects were strikingly confident in the accuracy of their memories years after the event, because their memories were so vivid—the illusion of memory at work again. During a final interview conducted after the subjects completed the questionnaire the second time, Neisser and Harsch showed the subjects their own handwritten answers to the questionnaire from the day after the Challenger explosion. Many were shocked at the discrepancy between their original reports and their memories of what happened. In fact, when confronted with their original reports, rather than suddenly realizing that they had misremembered, they often persisted in believing their current memory.
The authors conclude that those rich details you remember are quite often wrong—but they feel right. A memory can be so strong that even documentary evidence that it never happened doesn't change what we remember.

So what is the implication for Knowledge Management?

The implication is that if you need to re-use tacit knowledge in the future, you can't rely on people to remember it. Even after a month, the memory will be unreliable. Details will have been added, details will have been forgotten, and the facts will have been rewritten to be closer to "what feels right". The forgetting curve will have kicked in, and it kicks in quickly. Tacit knowledge is fine for sharing knowledge about what's happening now, but for sharing knowledge with people in the future (i.e. transferring knowledge through time as well as space) it needs to be written down quickly, while memory is still reliable.

We saw the same with our memories of the Bird Island game in the link above. Without a written or photographic record, the tacit memory fades quickly, often retaining enough knowledge to be dangerous, but not enough to be successful. And as the authors say, the illusion of memory can be so strong that the written or photographic record can come as a shock, and can feel wrong, even if it's right.

Any KM approach that relies solely on tacit knowledge held in the human memory can therefore be very risky, thanks to the illusion of memory.

Monday, 24 May 2010


The forgetting curve
I have blogged before about our Bird Island exercise, probably the longest-running KM experiment in the world, and about how it demonstrates in a very clear way that knowledge management can drive performance.

We had an interesting twist to the game a couple of weeks ago, when we had two people in the class who had done the game before, about 6 months previously (see also here for a previous example). Now you might expect that this previous experience and knowledge would give them an edge: they would remember some of the key design principles from the game, and should therefore be well ahead of the other teams. So I put these two people with experience into the same team, to see if this would happen.

Well, it happened, to an extent. The two people remembered some bits and pieces, including some high-level design principles and a few tips and hints. However, much of the other detail from the game had been lost over the intervening 6 months. They built a tower slightly taller than the other teams', but one third the height of their performance 6 months previously. As one of them said in the debrief, "a little knowledge is a dangerous thing".

The chart above shows their learning in the previous game, 6 months ago, with the blue and purple tower heights increasing as more knowledge was added. The red line is the learning curve they went through in the game. The grey bar represents their first tower, built with this hazy memory and "little knowledge" from the previous exercise. The green line is therefore the forgetting curve.

This result reinforces recognition of the frailty of human memory as a long-term knowledge store, and therefore the need to support that memory through some sort of capture and recording. Even 6 months is too long to leave knowledge in memory alone. We need to capture it as we go, even if only as an aide-memoire; otherwise we lose it.
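The shape of this forgetting curve has been studied since Ebbinghaus's memory experiments in the 1880s, and a common simplification models retention as an exponential decay, R = e^(-t/S), where t is elapsed time and S is a "stability" parameter for how strongly the memory was formed. A minimal sketch (the stability value here is purely illustrative, not measured from our exercise):

```python
import math

def retention(days_elapsed, stability=20.0):
    """Fraction of material still recallable after a given number of
    days, following an Ebbinghaus-style exponential decay.
    'stability' (in days) is an illustrative strength-of-memory
    parameter: the higher it is, the slower the forgetting."""
    return math.exp(-days_elapsed / stability)

# Retention drops steeply at first, then levels off near zero
for days in (1, 7, 30, 180):
    print(f"after {days:3d} days, roughly {retention(days):.0%} retained")
```

Whatever the exact parameters, the qualitative behaviour matches what we saw in the game: steep early loss, then a long tail where only fragments remain — which is exactly why knowledge needs to be captured while it is fresh.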

And when we come to use it again, we find we retain just enough to be dangerous.
