Thursday, 22 October 2020

Quantified KM success story #141 - After Action reviews deliver 25% performance increase in health teams.

The text below is from an article entitled "Getting the most from after action reviews to improve global health security".

Image from wikimedia commons

"Tannenbaum and Cerasoli
conducted a systematic review of findings from 46 studies. Limiting their analysis to studies that reported on the impacts of AARs on “quantifiable aspects of performance” (e.g., in simulators, games, personnel records, self-ratings, performance appraisal ratings) they found that, on average, after action reports/debriefs improved effectiveness over a control group by approximately 25%. The results were similar across a wide variety of contexts, including teams versus individuals and medical versus non-medical situations".

Please note that this is an increase in effectiveness of the teams, rather than an increase in effectiveness of the organisation through across-team knowledge sharing. The referenced Tannenbaum and Cerasoli study, a meta-analysis of published material, is also an interesting read, suggesting that AARs on average take 17 minutes, involve about 5 people, and are three times more successful when facilitated.

Here's a bit more text from the original article

"Both the United States Department of Veterans Affairs (VA) and the Joint Commission which monitors hospitals review incidents in their own form of after-action reporting. Each uses a systematic approach that incorporates root cause analysis into a review after a sentinel or adverse event has occurred where things did not go as expected. While limited evaluation has occurred of the effectiveness of the after-action reviews, at the VA, comparison of these reviews with prior approaches to reviewing adverse events showed a shift in the root causes identified, blaming individuals less and increasingly attributing the problem to systemic causes like communication and policies or procedures"

So not only do AARs improve team effectiveness, they start to change culture as well. 



Wednesday, 21 October 2020

4 dimensions for knowledge transfer

Here is a useful Boston Square which might help you unpack some of the assumptions behind knowledge transfer. 

  

Knowledge transfer is a term everyone uses, but often we can bring baggage and assumptions to the term. What exactly do we MEAN by transferring knowledge? How is it transferred, and what prompts the transfer?

Boston Squares are great for pulling apart a topic, and allowing you to untangle some thoughts which otherwise might get lumped together. The square shown here is often useful to help an organisation check its assumptions, and make sure it is not getting polarised in its thinking. 

Here we pull apart knowledge transfer into the dimensions of Knowledge Push and Knowledge Pull (which you might call "Sharing" and "Seeking"), and the dimensions of Documented and Undocumented knowledge.


We get 4 quadrants, which we could call Ask, Tell, Search, Share.

  • An Ask approach to knowledge transfer focuses on communities of practice, where people can ask questions of their peers
  • A Tell approach to knowledge transfer focuses on training, lectures, mentoring and coaching
  • A Search approach to knowledge transfer focuses on enterprise search, semantic search and AI
  • A Share approach to knowledge transfer focuses on sharing documents, lessons and best practices 
Which quadrant should your KM program address? All of them, of course. But probably it doesn't - probably most of your attention goes to one quadrant, while the others are neglected. 

Use this Boston Square to check the balance of your KM program.
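As a sketch of that balance check (the quadrant labels come from the post; the example activities and their classifications are hypothetical), each KM activity can be placed on the two axes and the quadrant counts compared:

```python
# Classify KM activities along the two axes of the Boston Square:
# push vs pull, and documented vs undocumented knowledge.
QUADRANTS = {
    ("pull", "undocumented"): "Ask",     # e.g. communities of practice
    ("push", "undocumented"): "Tell",    # e.g. training, mentoring, coaching
    ("pull", "documented"):   "Search",  # e.g. enterprise search, semantic search, AI
    ("push", "documented"):   "Share",   # e.g. documents, lessons, best practices
}

def km_balance(activities):
    """Count activities per quadrant, to reveal neglected quadrants."""
    counts = {name: 0 for name in QUADRANTS.values()}
    for direction, form in activities:
        counts[QUADRANTS[(direction, form)]] += 1
    return counts

# A hypothetical KM program, heavily weighted toward "Share":
program = [
    ("push", "documented"),   # lessons-learned database
    ("push", "documented"),   # best-practice wiki
    ("push", "documented"),   # document repository
    ("pull", "undocumented"), # community of practice
]
print(km_balance(program))  # {'Ask': 1, 'Tell': 0, 'Search': 0, 'Share': 3}
```

A lopsided count like this one is exactly the polarisation the square is meant to expose.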


Tuesday, 20 October 2020

Mentoring? Or Dedicated Learning?

If mentoring is not working for you, try dedicated learning.


Image from wikimedia commons

Mentoring and coaching are tried and tested ways of transferring knowledge between an older, experienced person and a younger, less experienced person. However, this relationship often breaks down. In a recent study for a client we found that knowledge transfer through mentoring was running at about 20% of the efficiency that was needed.

When the relationship broke down, the two sides blamed each other ("he won't tell me anything"; "he never asks me anything"), but in reality there were very few incentives to build the trusting relationship required for effective coaching. 

Part of the problem with the traditional view of mentoring is that accountability for knowledge transfer lies with the coach or mentor.

It is written into their job description, they are coached in the process of mentoring, and they take a leading role. The status of "having knowledge" is compounded with the status of "driving the process". The mentee plays a passive role, and we end up with "Knowledge Push" if we aren't careful. We end up with Teaching; but not necessarily with Learning. At the same time, the mentor is still busy doing what he or she sees as "real work", and when push comes to shove, real work takes precedence over mentoring. As far as the coach or mentor is concerned, there isn't much of an incentive to do the coaching. This is especially true if the experienced person is due to retire. Ineffective transfer of knowledge is "not my problem" as far as the retiree is concerned.

More recently we have been looking at changing this relationship, and making the learner more proactive and less passive. 

If the mentee is given an active status, that of a "dedicated learner", and is trained in the skills and processes of knowledge elicitation and interviewing, there is a change in the dynamic. The mentee takes a more active, more leading role - almost like an investigative reporter, or an industrial spy. Then ineffective transfer of knowledge becomes "my problem" as far as the mentee is concerned. 

The transfer of knowledge becomes a Learning-driven process, not a teaching-driven process. That learning can be planned, through a learning plan, and monitored and measured. And as the less experienced person learns, then instead of just taking notes in their personal notebook, they start to build knowledge assets in the company knowledge base. So as well as transferring the tacit skills, a documented knowledge base can be created as well. Now we have knowledge driven by Pull, not Push, by demand rather than by supply, and we know this is more effective, at least in the short term.

If your mentoring program is struggling, then try empowering the mentee as a Dedicated Learner.

Monday, 19 October 2020

Why it is important to audit KM regularly.

There is a lot of value in a KM audit, so long as you do it regularly.


Group Audit
In this post, I suggested that there are two types of KM surveys; the audit and the assessment. I said the first was like counting the apples in your orchard, the second was like reviewing your farming methods.

It is the assessment - the methods review - that has the most value, in that it tells you how to improve your knowledge management approach. However the audit is also valuable, in telling you where you need to focus, and which problems you need to solve. There is merit in running both regularly. 

The audit measures the current management status of various knowledge topics - are they documented, are they covered by communities, are they held only in the heads of a few people approaching retirement? When you introduce your improved KM approaches, the audit should show improvement.

Our favoured approach is to run the audit on an annual basis. It works like this:
  • The KM team works with the knowledge owner, or process owner, to audit their own knowledge area.
  • Based on the results of the audit, they identify actions for the following year
  • The next year, the audit should show improvement
  • The KM team can use the audit results, and the change in results over the year, to construct a KM dashboard for reporting to senior management. If KM is working, all audit scores should show an increase over the previous year.
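The year-on-year comparison behind such a dashboard can be sketched as follows (a minimal illustration; the topic names and the scoring scale are hypothetical):

```python
# Compare this year's audit scores per knowledge topic against last
# year's; any topic whose score has not increased is flagged for the
# KM dashboard.
def audit_changes(last_year, this_year):
    """Return topic -> year-on-year change in audit score."""
    return {topic: this_year[topic] - last_year[topic] for topic in last_year}

# Hypothetical audit scores for three knowledge topics, scored 1-5:
scores_2019 = {"well integrity": 2, "plant maintenance": 3, "contracting": 1}
scores_2020 = {"well integrity": 3, "plant maintenance": 3, "contracting": 2}

changes = audit_changes(scores_2019, scores_2020)
print(changes)  # {'well integrity': 1, 'plant maintenance': 0, 'contracting': 1}

# If KM is working, all scores should show an increase; flag the rest:
stalled = [topic for topic, delta in changes.items() if delta <= 0]
print(stalled)  # ['plant maintenance']
```

A flagged topic becomes a candidate action for the following year's audit cycle.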
The Assessment measures how well the KM framework is operating, and whether it needs to be tweaked, augmented or updated. ISO 30401:2018, the ISO management systems standard for KM, requires the organisation to have a plan for regular internal audit of the KM framework against the organisation's needs and the requirements of the standard. This is in order to drive continual improvement of KM. The assessment probably does not need to be done annually - perhaps every 3 to 5 years might be more appropriate. But it needs to be done regularly. 

As part of your KM governance, make sure you conduct regular audits and/or assessments both of the health of the KM framework, and also of the level of management applied to key knowledge domains. 


Friday, 16 October 2020

Why, in KM, the best generals should not be on the battlefield.

Your best performers are far too important to be working on projects - they should be teaching others to work on projects.

Image from Creazilla
under creative commons licence

Several times in my career I have been met with the objection that Knowledge Management will not work because the top experts - the people who hold most of the knowledge - are too busy to take part in KM. They are working full-time on the toughest projects or working with the most demanding clients. The theory is that the highest priority project should have the best people working on it. 

If the organisation has aspirations for growth or improvement, then that is a waste of knowledge. The expert can only be in one place, on one project or with one client, but their knowledge is needed in every place, on every project and by every client. 

KM needs to offer them a new role (which should be seen as an opportunity rather than a threat) - to be the stewards and sharers of knowledge, rather than the sole holders. Their role is to make the organisation knowledgeable, not just to be knowledgeable themselves. 

The best generals should be in the war college, not on the battlefield. Your best expert on disarming bombs should not be disarming bombs, but teaching others to do so safely. Knowledge needs to be spread as much as (or even more than) it needs to be concentrated.

We can see this approach in Shell, where the best experts become internal technical consultants, or in Rolls Royce, where the best experts are given a "Fellow" role. You can also see this approach in the Customer Service world, as illustrated in this article

In KM, the best generals should not be on the battlefield.

The role of the expert should not be to dedicate their knowledge to the toughest project, but to make sure that every project can apply their hard-won knowledge and experience.






Thursday, 15 October 2020

Tacit, Explicit and .... what? Different types of knowledge, and the definition minefield.

 For years, people have talked about tacit and explicit knowledge, but what about the other types?

Tacit and explicit knowledge have been assumed to be the two main types of knowledge for most of Knowledge Management's short history. Polanyi described tacit knowledge as the things we know without expressing or declaring them ("Tacit" means silent or unspoken). He also said that "we know more than we can tell" which can be taken as meaning that some or all tacit knowledge cannot be "told". Polanyi did not define what explicit knowledge was. 

Nonaka and Takeuchi also addressed tacit and explicit, suggesting at one point that tacit knowledge can become explicit through codification ("transforming tacit knowledge into explicit knowledge is known as codification") and suggesting at another point that tacit knowledge is uncodifiable ("tacit aspects of knowledge are those that cannot be codified"). They also implied, in their examples, that codification equates to documenting, although when they first mention explicit knowledge they describe it as "explainable" (which is what explicit means in dictionary terms).

So the confusion began. 

  • Is tacit knowledge codifiable or not?
  • Is all knowledge in the head tacit, or is some of it explicit?
  • Is explicit knowledge always documented?
  • Is documented knowledge still knowledge, or does it become information as soon as it is documented?
Many of you would give firm answers to each of these questions, but those firm answers would not always coincide. 

Then we introduce the idea of implicit knowledge. This is sometimes used as a synonym for tacit knowledge (see example) and sometimes as a descriptor for knowledge which can be expressed but has not yet been (definitions 2, 3, 4, 6 and 7 here).

But whatever words you use, you can divide knowledge into a number of types:

1) Knowledge that is held by individuals and which cannot be expressed or articulated. Variously referred to as tacit or implicit
2) Knowledge that is held by individuals and which could be expressed or articulated, but which hasn't been yet. Variously referred to as tacit, implicit, or explicit.
3) Knowledge that is held by individuals or groups which has been expressed or articulated, but which is not yet codified or documented. Variously referred to as tacit, implicit, explicit, or (verbal) information.
4) Knowledge which has been documented. Variously referred to as explicit knowledge, or information. Or, as I would argue, both knowledge AND information.

See the confusion?

This is partly why, when we wrote ISO 30401:2018, the ISO Management Systems Standard for KM, we avoided the use of the terms Tacit and Explicit altogether, and used the terms Codified and Uncodified instead. 

5) Also, I increasingly believe there is a fifth type of knowledge which I like to call "embedded": knowledge that is encoded in the structures, strategies and tactics of organisations. Our organisations often operate the way they do as a record of past learning, either conscious or unconscious. That knowledge is nowhere written down, but permeates the way the organisation acts and responds. This is regardless of the people within it - introduce new people and the organisation still works the same way.

6) Also nowadays we could add a sixth type, which is knowledge encoded or embedded into the way our machines and our technology operate. Maybe we call this "machine knowledge". Much of this encoding is currently done through an analysis of human knowledge of types 2 and 3 above, and sometimes through an analysis of type 1. The famous example from Nonaka and Takeuchi is of the Matsushita bread-making machine, where the dough-kneading mechanism was only perfected when one of the engineers spent time working alongside master bakers to observe the unconscious "twist-stretch" mechanism they used. Many machines can also improve their machine knowledge through feedback loops (machine learning).

As an aside, it is interesting to see how machine knowledge now seems to be encroaching on areas which once were seen as entirely tacit and human. "How to ride a bicycle" has often been used as an example of pure tacit knowledge which cannot be expressed, and yet machines can ride bicycles. "How to recognise a face" is still given as an example of tacit knowledge on Wikipedia, but machines can recognise faces; even our smartphones can do this. 

However you define the terms, your knowledge management framework has to deal with all these types of knowledge.

Type 1 can be transferred only through coaching, mentoring and apprenticeships.
Type 2 can be expressed through questioning, through interviews and lessons capture processes.
Type 3 can be shared through communities of practice, peer assists, teaching, and many other methods
Type 4 is what we manage in our knowledge bases
Type 5 comes when we act on our knowledge in order to improve the structures, strategy and tactics of the organisation
Type 6 comes when we embed our knowledge into machines and algorithms.
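The type-to-method mapping above can be sketched as a simple coverage check (the method labels are taken from the list; the framework contents in the example are hypothetical):

```python
# Map each knowledge type (1-6) to transfer methods that can address it,
# then check which types a given KM framework leaves uncovered.
TRANSFER_METHODS = {
    1: ["coaching", "mentoring", "apprenticeships"],
    2: ["questioning", "interviews", "lessons capture"],
    3: ["communities of practice", "peer assists", "teaching"],
    4: ["knowledge bases"],
    5: ["improving structures, strategy and tactics"],
    6: ["embedding knowledge in machines and algorithms"],
}

def uncovered_types(framework_methods):
    """Return knowledge types not addressed by any method in the framework."""
    return [
        ktype
        for ktype, methods in TRANSFER_METHODS.items()
        if not set(methods) & set(framework_methods)
    ]

# A hypothetical framework covering only documented and community approaches:
print(uncovered_types(["knowledge bases", "communities of practice"]))
# [1, 2, 5, 6]
```

A non-empty result is the signal that the KM program is not yet addressing all the types.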

Don't worry too much about definitions - recognise the different types and how they must be addressed, and make sure your KM program addresses them all.

Friday, 9 October 2020

How organisations of different KM maturity plan to use the KM standard

The 2020 Knoco survey shows (among very many other interesting things) data on how organisations plan to use the ISO 30401 standard. I have further analysed the dataset to see how the planned usage varies with KM maturity. 


213 people answered the question, and the answers are shown in the pie chart below. Click on the picture for optimal resolution.



The various options on the chart represent escalating levels of involvement with ISO 30401:2018; the ISO management system standard for Knowledge Management, and the pie chart shows the percentage of respondents choosing each as the highest level of planned usage. 

19% of the respondents were unaware of the standard, and 27% know about it but will not buy or use it.

The remaining 54% will make use of the standard in some way, even if they have not quite yet decided how. Only a very small percentage (4%) are seeking, or have achieved, certification. "Other" options include the following: 

  • Balancing ISO against operational readiness needs. Champion is working to make the value connection
  • I am aware of it but not using it in (Organisation) as far as I know
  • I'm in the translation workgroup
  • Organisation is not interested in using it
  • Probably will do it but not yet
  • Still early days to think of an ISO certification
  • Unfortunately seen as "a nice to have" and not top priority
  • We have a copy of it and would like to implement it within our firm, as a way of embedding KM but we are aware, from KM forums (in legal sector), that it only has one moderator at the moment and is struggling to be implemented in organisations, so we have no examples of where it has been successfully been implemented to demonstrate to our senior management - i would like to learn more about how this could be done.
  • While I am aware of ISO 30401, business is not. As KM is not openly identified as a business priority and under resourced the will to bring ISO 30401 to the table is not there.

The barchart below splits out these figures by the self-designated maturity of KM within the organisation. Again, click on the figure for better resolution.


We can see the following:

  • Organisations early in the journey are less likely to have heard of the ISO standard (bottom blue segment). This makes logical sense.
  • The percentage planning not to engage with the standard does not vary much with maturity (red segment). 
  • The percentage planning to use it to inform their KM program is greatest in the most mature KM organisations (purple segment), as is the percentage who have conducted an internal review or audit (orange segment).
  • The highest percentage planning certification, and the only organisations who have been certified, are in the "well in progress" category. When you think about it, that also makes sense. The standard adds most value as a check against your KM framework prior to finally embedding it into organisation process and structures. Once KM is embedded, it's harder to change.
  • The "don't know" category is biggest when KM is new. Again - makes sense.
  • The "Other" category is biggest where KM is most mature. Again that sort of makes sense - if your organisation has KM fully embedded, the standard will have many more uses than seeking certification or as a yardstick - you can use it more creatively.


