Monday, 30 November 2015

How Poor KM cost Boeing $1.6 billion

Poor knowledge management and a lack of Knowledge Retention can cost organisations a huge amount of money; witness this example from Boeing.


According to Energy Voice

When Boeing offered early retirement to 9,000 senior employees during a business downturn, an unexpected rush of new commercial airplane orders left the company critically short of skilled production workers.
The knowledge lost from veteran employees, combined with the inexperience of their replacements, threw the firm’s 737 and 747 assembly lines into chaos. Overtime skyrocketed and workers were chasing planes along the line to finish assembly.

Management finally had to shut down production for more than three weeks to straighten out the assembly process, which forced Boeing to take a $1.6 billion charge against earnings and contributed to an eventual management shake-up.

Friday, 27 November 2015

Does knowledge look different, looking back to looking forward?

I am recently returned from a busy few days with a manufacturing client, which has given me some food for thought.

We conducted a number of KM activities, including a Retrospect to identify and capture lessons from the last version of the product, and some knowledge gap analysis (KM planning) of future work. And it was only afterwards that I realised that all the backward looking conversation had been about Process, and all the forward looking conversation had been about Product.

Any manufacturing client looks at knowledge of product and of process - the ternary diagram from our 2014 KM Survey shows Manufacturing as half-way up the left hand side of the picture, midway between the Process and Product corners.

However the conversations we have about these two knowledge focus areas are different.

Looking back, the standard processes of Retrospect and After Action Review tend to identify the successes and failures in Process - the processes of design, manufacture, testing, technical decision making, and so on. If we want to review a Product, we need to address this specifically, maybe using something like the A3 process or Knowledge Briefs.

Looking forward, the Knowledge Gap Analysis (part of the KM Planning methodology) tends to look at gaps in Product knowledge rather than gaps in Process knowledge, unless we deliberately focus the conversation on process.

Both Knowledge Management methodologies produce a list of improvement actions, and the combined list covers both Product and Process, so all learning areas are eventually covered. But reflecting on this experience leaves me with a clear lesson.

When an organisation needs knowledge of more than one type (eg knowledge of Product and Process, or Product and Customer) we need to deliberately incorporate discussion of both of these knowledge types in our KM processes.

If we don't, then our knowledge focus may seem different depending on the KM processes we use, either looking backwards in a Retrospect, or forwards in a KM plan.

Thursday, 26 November 2015

HG Wells on Knowledge Management

An immense and ever-increasing wealth of knowledge is scattered about the world today; knowledge that would probably suffice to solve all the mighty difficulties of our age, but it is dispersed and unorganized. We need a sort of mental clearing house for the mind: a depot where knowledge and ideas are received, sorted, summarized, digested, clarified and compared.


H.G. Wells
Author of "War of the Worlds"
1940

Wednesday, 25 November 2015

3 levels of lesson learning

In a large organisation, lesson-learning can happen at multiple levels. 


In global oil and gas companies, for example, there are often three levels:

  • Firstly, team learning uses the After Action Review to learn within project teams. Discussion takes place within the team, and any changes are to team processes. Lessons are documented in simple form, perhaps in Excel spreadsheets.
  • A more complex form of learning is learning from one project to another, using facilitated lessons-identification meetings. Lessons are collected, actions are assigned, and changes are made to organisational processes. Lessons are documented in lessons management systems.
  • Finally, in analyses of major incidents, an investigation team is tasked with collecting observations, insights, lessons and recommendations for action. Lessons are documented in investigation reports.
In each case, and at each scale, the learning cycle is similar.

Lessons are identified through discussion and investigation, and pass through a series of stages: from context, to observation, to insight, to lesson. If a lesson is to be truly learned (in other words, embedded into new ways of working), there are further stages: the lesson must be documented and validated, it must lead to assigned actions, those actions must be disseminated to the correct people through a lessons management system, the appropriate actions must be taken, and the lesson closed.

However the scale varies - from one project team, to multiple project teams, to the whole organisation, and as the scale changes, so does the rigour of, and the investment in, the learning process.
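As an illustration only, the staged lifecycle described above can be sketched as a simple state machine. This is a minimal sketch with names of my own choosing, not a model of any real lessons management system; the stage names follow the steps used in this post.

```python
# A minimal sketch of the staged lesson lifecycle described above.
# Illustrative only - not any vendor's lessons management system.

STAGES = [
    "context", "observation", "insight", "lesson",
    "documented", "assigned", "validated",
    "distributed", "actioned", "completed",
]

class Lesson:
    def __init__(self, title: str):
        self.title = title
        self._stage = 0  # index into STAGES

    @property
    def status(self) -> str:
        return STAGES[self._stage]

    def advance(self) -> str:
        """Move to the next stage; a lesson cannot skip stages."""
        if self.status == "completed":
            raise ValueError("lesson already completed")
        self._stage += 1
        return self.status

# Walk one lesson through every stage to closure.
lesson = Lesson("Pump shutdown not communicated between shifts")
while lesson.status != "completed":
    lesson.advance()
print(lesson.status)  # completed
```

The point of the sketch is that "learning" is not one event but an ordered pipeline, and a lesson that stalls at any stage is not yet learned.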

The table below shows how the learning stages and steps are applied at each of the three levels.

| Step/Stage | Within a team | Between projects | Major investigation |
| --- | --- | --- | --- |
| 1. Context | Discussed within the team After Action Review | Discussed within the Retrospect, led by the facilitator | Investigated by the incident investigation team |
| 2. Observation | Discussed within the team After Action Review | Discussed within the Retrospect, led by the facilitator | Collected by the incident investigation team |
| 3. Insight | Discussed within the team After Action Review | Discussed within the Retrospect, led by the facilitator | Analyzed by the incident investigation team |
| 4. Lesson | Discussed within the team After Action Review | Discussed within the Retrospect, led by the facilitator | Recommendations made by the incident investigation team |
| 5. Documented lesson | Either not documented, or documented within a lessons log | Documented within a lessons management system by the facilitator | Documented as the incident investigation report |
| 6. Assigned action | Discussed within the team After Action Review | Either discussed within the Retrospect led by the facilitator, or discussed after the Retrospect by senior staff | Discussed and agreed by senior management |
| 7. Validated lesson/action | Validated within the team and by the team leader at the After Action Review | Validated a) within the lessons management system, or b) by senior staff | Discussed and agreed by senior management |
| 8. Distributed/notified lesson/action | Action assigned using existing team processes such as the action log | Use of a lessons management system | Use of a system such as the 'strategic implementation planning' tool |
| 9. Taken action/change | Action taken by team member | Action taken a) by other teams, or b) by company authorities such as Subject Matter Experts | Action taken by company authorities |
| 10. Completed lesson | Action closed in action log | Actions tracked and lessons closed in the lessons management system | Actions tracked and lessons closed by central team |
| Monitor, review, report | Monitored through routine team action monitoring | Statistics created and reported from the lessons management system | Lesson closure reported by central team |

Tuesday, 24 November 2015

"What sort of Knowledge Management are you talking about"

We have already, on this blog, pointed out that there are "50 shades of Knowledge Management", and as a result, in any conversation about KM, you need to find out what the other person understands by the term.


There is no commonly accepted definition of KM, and as part of the KM hype wave in the early 2000s many parallel disciplines adopted the term "knowledge management" as somehow being more exciting than previously existing terms such as "content management", "document management", "librarianship" and so on.

As a result, we currently have many sub-groupings under the KM umbrella, each with their own understanding of the term.

This was brought home to me recently, when we received a request for KM services, which was clarified as

  • Records management/archiving 
  • search optimisation 
  • document tracking/user usage/version control 
  • user policies/protocols/permissions/protection 
  • bulk/batch scanning
All of this is a long way from the services we offer.

So far away, that it might almost be a different discipline with a different name. "Records Management" perhaps.

Knowledge Management is a big field, and a very loosely used term. 

Before having any conversation about KM, you need to ask "What sort of Knowledge Management are you talking about?"

Monday, 23 November 2015

How to use the Knowledge Management Pull Matrix

Here is a really interesting blog post from NASA entitled "how rocket scientists learn" by Yasmin Fodil

It contains much that is interesting, and draws three main conclusions:

  • Knowledge Management at Goddard (NASA) is all about people
  • Social Media Can Enhance Learning (but relationships matter)
  • Learning in Public is Hard, but Worth It.
It also contains the following matrix


Yasmin describes the table as follows

So as an individual trying to learn, I have my own experiences, which I can reflect on and share with others during pause-and-learns, through job rotations, case studies, and lessons learned documents. In turn, I can learn from case studies and lessons learned from other projects, which I can engage with by simply reading about, attending workshops, or engaging with my peers.

I like the concept, and particularly like the fact that the whole matrix is pull-driven - "Who can I learn from", "What can I learn", "How can I learn it" - all driven by knowledge-seeking.

This is refreshingly different from the more normal "who can I share with", "where shall I store this" conversation. It's like a personal Knowledge Management Plan - all that's missing is the "What do I need to learn" element.

I would like to extend the table a bit, because there is a big jump between "my friends" and "the whole organisation". We could include, for example, "my project team" and "my community of practice", both of which are organisational constructs that don't necessarily map onto "my friends". Also, "company experts" are a source of learning too.

Here's my extended version


I hope this is useful.

Friday, 20 November 2015

Another step in the direction of KM standards

A knowledge management standard or guideline is a good thing, but it is even better if it fully appreciates what knowledge management really is.

Thanks to Paige Kane for alerting me to the pharmaceutical guidance document ICH PHARMACEUTICAL QUALITY SYSTEM Q10.

This guideline applies to the quality systems supporting the development and manufacture of pharmaceutical drug substances (i.e., API) and drug products, including biotechnology and biological products, throughout the product lifecycle.

One excellent element of this guideline is the inclusion of KM as part of the quality system, as follows:

1.6.1 Knowledge Management.    Product and process knowledge should be managed from development through the commercial life of the product up to and including product discontinuation. For example, development activities using scientific approaches provide knowledge for product and process understanding. Knowledge management is a systematic approach to acquiring, analysing, storing and disseminating information related to products, manufacturing processes and components. Sources of knowledge include, but are not limited to prior knowledge (public domain or internally documented); pharmaceutical development studies; technology transfer activities; process validation studies over the product lifecycle; manufacturing experience; innovation; continual improvement; and change management activities.
The text above already contains a warning of potential confusion (did you spot it?) and this confusion is confirmed in the glossary to the document, as follows:

Knowledge Management: Systematic approach to acquiring, analysing, storing, and disseminating information related to products, manufacturing processes and components. 

So - great - the document acknowledges that knowledge is a crucial element in delivering quality.  Not so great - it defines KM as the management of information - a definition of Information Management, not of Knowledge Management.

Overall I believe the inclusion of KM in this document is a positive step for the Pharma industry, which now just needs to take the next step and recognise that KM is not about the management of information, but about the management of knowledge, both documented and tacit.

Not all information is knowledge, and not all knowledge is documented in information form. Confusion between the management of information and the management of knowledge will, I hope, be resolved in future issues of this guideline, providing the next step forward for the use of KM in the Pharma industry.

Thursday, 19 November 2015

8 approaches to Knowledge Transfer

Imagine you have some learnings within a team. How do you pass them on?

Image from wikimedia
The approach you take to Knowledge Transfer depends on the circumstances, and the answers to the three questions below will define 8 possible approaches.
  • Who needs the knowledge? The same team as generated it, or a different team?
  • When do they need it? Now, or at some undetermined time in the future?
  • Where are they? Near enough to sit down with, or somewhere remote?
With three variables rather than two, we can't draw a Boston Square with four divisions - we have to draw a Boston Cube with eight. And in each of those divisions, we take a different approach to transferring the knowledge.

  1. Same team, same place, same time - hold an AAR. Discuss the learning, and help everyone internalise it.
  2. Same team, different place, same time (virtual team) - hold a virtual AAR. Discuss the learning, and help everyone internalise it. Checking for internalisation will be harder without access to body language.
  3. Same team, same place, different time - hold a Retrospect and update and improve your team processes, procedures and practices. Then if you follow those next time, performance will improve.  
  4. Same team, different place (virtual team), different time - conduct a Learning History and update and improve your team processes, procedures and practices. Then if you follow those next time, performance will improve.
  5. Different team, same place, same time - hold a Peer Assist. Discuss the learning, and help the other team internalise it. Or host a site knowledge visit
  6. Different team, different place, same time - hold a virtual Peer Assist. Discuss the learning, and help everyone internalise it. Checking for internalisation will be harder without access to body language. Set up a community of practice, or virtual coaching group.
  7. Different team, same place, different time - hold a Retrospect and document a Knowledge Asset. When the knowledge is needed, find someone from the original team to talk through the Knowledge Asset.
  8. Different team, different place, different time - hold a Retrospect and document a stand-alone Knowledge Asset.
Different contexts require different approaches - in this case, the eight approaches defined above.
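The three questions can be sketched as a simple lookup table: three yes/no answers give 2³ = 8 combinations, each mapped to one of the approaches listed in the post. This is purely an illustrative mapping; the function and key names are my own.

```python
# A sketch of the "Boston Cube": three yes/no questions -> 8 approaches.
# Keys: (same_team, same_place, same_time). Illustrative names only.

APPROACH = {
    (True,  True,  True ): "After Action Review",
    (True,  False, True ): "virtual After Action Review",
    (True,  True,  False): "Retrospect, then update team processes",
    (True,  False, False): "Learning History, then update team processes",
    (False, True,  True ): "Peer Assist (or site knowledge visit)",
    (False, False, True ): "virtual Peer Assist / community of practice",
    (False, True,  False): "Retrospect plus Knowledge Asset, talked through later",
    (False, False, False): "Retrospect plus stand-alone Knowledge Asset",
}

def recommend(same_team: bool, same_place: bool, same_time: bool) -> str:
    """Return the knowledge transfer approach for one corner of the cube."""
    return APPROACH[(same_team, same_place, same_time)]

# e.g. a different team, somewhere remote, needing the knowledge in future:
print(recommend(False, False, False))  # Retrospect plus stand-alone Knowledge Asset
```

Answering the three questions first, then reading off the approach, is the whole decision procedure.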

Wednesday, 18 November 2015

How KM and continuous improvement work together

Somebody asked me today what the link was between Knowledge Management and Continuous Improvement. I think the answer lies in the diagram shown here.

Diagram from a paper I co-authored
for the Society of Petroleum Engineers
"Implementing a Framework for Knowledge Management"
Continuous improvement (CI) is an attitude and discipline for always seeking to make work better. The Plan Do Check Act cycle is at the heart of CI, sometimes shown as a Plan Do Measure Learn cycle.

Knowledge Management (KM) is an attitude and discipline for organisational learning.  The KM cycle is sometimes expressed as "learn before, during and after"

KM and CI intersect at the Learn step, as shown in the diagram.

The link here between KM and CI emphasises three things.


  •  Learning comes after doing. You need experience from which to learn, and that needs to be either your Doing, or someone else's. 
  • Learning comes after measurement. It is through measurement (of whatever indicators you use to determine success) that you know whether the Doing was successful, and you know what needs to be improved or repeated in future. 
  • Learning comes before planning. Although the Deming cycle is usually described as "Plan-Do-Measure-Learn", it can equally well be described as "Learn-Plan-Do-Measure". Start with the Learn, and Learn before you Plan before you Do.
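Because the Deming cycle is a loop, "Plan-Do-Measure-Learn" and "Learn-Plan-Do-Measure" are the same cycle read from different starting points. A trivial sketch makes the point:

```python
from collections import deque

# The cycle is a loop, so starting at Learn is just a rotation
# of the same four steps, not a different process.
cycle = deque(["Plan", "Do", "Measure", "Learn"])
cycle.rotate(1)  # begin one step earlier, at Learn
print(list(cycle))  # ['Learn', 'Plan', 'Do', 'Measure']
```

The choice of starting point matters in practice, though: beginning at Learn means looking for existing knowledge before you plan, rather than learning only after the fact.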

Tuesday, 17 November 2015

"Over the hedge" - knowledge sharing the long way round

There are many organisations where knowledge takes a very indirect route from knowledge provider to knowledge user. Instead of peer to peer knowledge sharing, knowledge has to go "over the hedge" from one department to another, transferred at management level.


This sort of "hedgehopper" KM operates in global siloed organisations, where only a certain level of manager or senior expert is allowed to travel to other units. These are the people who are allowed to hop over the hedges between the silos.

In a Hedgehopper company, if someone at an operational level has a pressing knowledge need, they ask their manager, who asks their manager, until the request reaches someone senior enough to travel. At the next global managers meeting (often called a global network, or even a knowledge sharing network), they can raise this question. Maybe someone knows about someone in their own organisation silo who can help, so they pass the question down the levels until it finds someone at operational level with an answer. In the better hedgehoppers, the asker and the answerer are put in touch with each other.

In the worst hedgehoppers, the answer also travels up the hierarchy to the travelling experts - then back down again on the other side.

Do you know the term "Chinese whispers"? This is Chinese Whispers with a vengeance. Not only is the transfer of knowledge delayed until the hedgehoppers meet, the knowledge is filtered as it travels up and down the hierarchical levels, until by the time the answer arrives it may be too late, completely garbled, and largely irrelevant.

What's the alternative to hedgehopper KM?


The alternative is to allow peers all over the organisation to communicate directly, without having to go through managers. You set up communities of practice to allow peer networking, you encourage and promote Peer Assists, and you empower people to seek answers wherever they may be found.

The hedgehopper concept comes partly from applying the idea of T-shaped management without realising that anyone in the organisation can operate in a T-shaped space - looking both vertically at the hierarchy, and horizontally at their peer group.

Monday, 16 November 2015

When (and when not) to use bullet points in Knowledge Management

It is common practice for meeting facilitators to record discussion on a flip chart as bullet points.  This does not work for many knowledge management meetings.

Bullet points are short phrases or keywords recorded as a list, rather like the example to the right.  They are not sentences, they do not explain, they contain little or no context, and are succinct summaries.

We record bullet points in meetings as a reminder of what we have discussed, and as a way of keeping track of the progress of the meeting.  The lack of context is not a problem, because everyone at the meeting was part of the discussion, and knows the context. Therefore the bullet points act like signposts or markers which remind participants of key points within a shared context.

When it comes to transferring knowledge to people who were not present, who did not take part in the discussion, and who do not have the shared context, bullet points usually do not work. You cannot be "reminded" by a bullet point if you were not part of the conversation, and providing someone with a list of bullet points and expecting them to derive knowledge from them is futile.

 Therefore in knowledge transfer exercises where there is also a wish to capture the knowledge for others, or for future use, bullet points alone do not work.

In Retrospect (lesson learned) meetings, learning histories, or when capturing knowledge from individuals, you need a full transcript of what was said, in order to document the knowledge in full, with the necessary context.  You have to be able to speed type, or you have to record the meeting for later transcription.

On the other hand, in a meeting where knowledge transfer through discussion is the sole aim, with no need to record the knowledge for others, then bullet points are sufficient. In an After Action review or a Peer Assist, where the users of the knowledge took an active part in the discussion, and the knowledge will be used immediately, then it may well be OK to record bullet points, and to rely on the participants' memories and personal notes of the conversation to fill in the gaps.




Friday, 13 November 2015

Longford Refinery Disaster - operator error or KM failure?

This book, about the explosion at Esso's Longford refinery in Australia, is a sobering report of a fatal disaster, and can also be interpreted as a story of an ineffective Knowledge Supply Chain. It illustrates the potentially appalling consequences of failing to supply operators with the knowledge they need to operate in a high-risk environment.

Image from amazon

"Operator error" might be a first pass conclusion when something goes wrong, but very often you need to look deeper, and understand why the operator made an error.

Did they have all the knowledge they needed to make decisions? Did they have training? Did they have access to expertise?

The story of the Longford refinery disaster explores this question. The disaster is described in Wikipedia as follows:
During the morning of Friday 25 September 1998, a pump supplying heated lean oil to heat exchanger GP905 in Gas Plant No. 1 went offline for four hours, due to an increase in flow from the Marlin Gas Field which caused an overflow of condensate in the absorber. A heat exchanger is a vessel that allows the transfer of heat from a hot stream to a cold stream, and so does not operate at a single temperature, but experiences a range of temperatures throughout the vessel. Temperatures throughout GP905 normally ranged from 60 °C to 230 °C (140 °F to 446 °F). Investigators estimated that, due to the failure of the lean oil pump, parts of GP905 experienced temperatures as low as −48 °C (−54 °F). Ice had formed on the unit, and it was decided to resume pumping heated lean oil in to thaw it.
When the lean oil pump resumed operation, it pumped oil into the heat exchanger at 230 °C (446 °F) - the temperature differential caused a brittle fracture in the exchanger (GP905) at 12.26pm. About 10 metric tonnes of hydrocarbon vapour were immediately vented from the rupture. A vapour cloud formed and drifted downwind. When it reached a set of heaters 170 metres away, it ignited. This caused a deflagration (a burning vapour cloud). The flame front burnt its way through the vapour cloud, without causing an explosion. When the flamefront reached the rupture in the heat exchanger, a fierce jet fire developed that lasted for two days ......
Peter Wilson and John Lowery were killed in the accident and eight others were injured.....Esso blamed the accident on worker negligence, in particular Jim Ward, one of the panel workers on duty on the day of the explosion.  The findings of the Royal Commission, however, cleared Ward of any negligence or wrong-doing. Instead, the Commission found Esso fully responsible for the accident:
So what might cause apparent "worker negligence" (aka operator error) in cases like this?

The disaster happened when hot oil was pumped into the cold exchanger, which was the wrong thing to do, but why did the operators do this? The book mentions what it calls "latent conditions" which can cause operators to make poor decisions, such as "poor design, gaps in supervision, undetected manufacturing defects or maintenance failures, unworkable procedures, clumsy automation, shortfalls in training, less than adequate tools and equipment (which) may be present for many years before they combine with local circumstances and activate failures to penetrate the system's many layers of defences".

If an operator does not have the correct training, or the correct procedures, then you could argue that they do not have the knowledge to make the correct decision, and so may end up making mistakes not through error or negligence, but through ignorance. 


If they do not have the knowledge they need to make an effective decision, then this could be seen to be a failure of the knowledge management system, for not providing the operators with the knowledge they need to avoid the error, to make the correct decision, or to take the necessary preventative action when things go wrong.

In knowledge management terms, the investigative commission found (again, according to Wikipedia) three contributory factors, which point to a lack of knowledge on the part of the operators, a lack of access to more skilled knowledge, and a lack of communication of knowledge - all of them potential KM failures:
  • inadequate training of personnel in normal operating procedures of a hazardous process;
  • the relocation of plant engineers to Melbourne had reduced the quality of supervision at the plant;
  • poor communication between shifts meant that the pump shutdown was not communicated to the following shift.

The following quote from the book is a statement from the operator himself, and you can hear from the language he uses that this was way outside his experience and knowledge base.
"Things happened on that day that no one had seen at Longford before. A steel cylinder sprang a leak that let liquid hydrocarbon spill onto the ground. A dribble at first, but then, over the course of the morning it developed into a cascade ... Ice formed on pipework that normally was too hot to touch. Pumps that never stopped, ceased flowing and refused to start. Storage tank liquid levels that were normally stable plummeted ... I was in Control Room One when the first explosion ripped apart a 14-tonne steel vessel, 25 metres from where I was standing. It sent shards of steel, dust, debris and liquid hydrocarbon into the atmosphere".
In a situation like this, where the wrong operational decision can be lethal and operator error through ignorance cannot be allowed, effective knowledge management and an effective knowledge supply chain (in the sense of ensuring that people have access to the knowledge they need, at the time they need it, in order to make correct decisions) is not just a nice-to-have; it's a life saver.


Thursday, 12 November 2015

Deferring judgement vs the innovation funnel

One of the most important elements in innovation is the deferral of judgement. That's why innovation funnels are such poor innovation tools.


In the video below, the creativity guru Min Basadur talks about how deferral of judgement (your own judgement, and that of others) is a key principle behind the innovative/creative process. 




New ideas are both fragile and incomplete. They can be easily killed prematurely by judgement.  Deferral of judgement allows them to be explored, combined and modified, so that ideas converge into robust innovations.

The problem with innovation funnels is that they are predicated on judgment.

An idea enters the funnel, and is immediately judged.

The purpose of the funnel is to weed out the ideas that don't work, rather than to explore and modify them until they do. As this article points out:

The entire focus of the funnel and stage-gate process is to ‘whittle down’ a large number of ideas to a smaller number by ‘killing off’ weaker ones and ‘picking’ the winners, rather than being constructive and finding solutions to the problems each idea poses.
The funnel focuses on judgement, administration and evaluation, assuming that ideas are easy to come by, that the majority are bad, and that the good ideas emerge fully formed.

Here is another article with some scary figures:

In their 1997 article "3,000 raw ideas = 1 commercial success!", published in Research-Technology Management, Stevens and Burley summarized a study based on project literature, patent literature, and venture capitalist experience, and concluded that "across most industries, it appears to require 3,000 raw ideas to produce one substantially new, commercially successful industrial product". The funnel they described acts as follows:
  • 3,000 raw ideas turn into 300 ideas for which minimal action is taken (such as simple experiments, patent filing, or management discussion); 
  • 125 of the 300 ideas become small projects; 
  • 9 out of the 125 become significant projects with a significant development effort; 
  • 4 out of the 9 become major development efforts; 
  • 1.7 out of the 4 is commercially launched; and 
  • 1 out of the 1.7 launched (59%) becomes commercially successful (this last success rate varied from 40% to 67%, depending on the source of information, industry, and geography).
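As a quick sketch, the attrition the funnel implies can be checked with simple arithmetic from the figures quoted above:

```python
# Survival rate at each stage of the Stevens and Burley funnel,
# using the counts quoted above, and the overall odds for one raw idea.
stages = [3000, 300, 125, 9, 4, 1.7, 1]

for before, after in zip(stages, stages[1:]):
    print(f"{before:>6} -> {after:<5} ({after / before:.1%} survive)")

# 1 / 1.7 is about 58.8% - roughly the 59% final success rate quoted.
print(f"overall: 1 in {stages[0] / stages[-1]:.0f}")  # overall: 1 in 3000
```

Seen this way, the funnel is almost entirely a rejection machine: every stage except the last discards most of what enters it.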

This is a mechanical approach to innovation - a ruthless culling of ideas before they have had any gestation time, and a relatively low level of attention paid to any individual idea (you can't spend long on each idea if there are 3,000 of them). It counter-incentivises the contribution of ideas: why bother to contribute an idea if there is only a one in 3,000 chance of it being taken through to commercial success?

In Knoco we recommend a different approach.


  • The funnel approach should be applied not to ideas but to problems, to allow you to find the problems most in need of an innovative solution.
  • Each problem then becomes the subject of an innovation Deep Dive process, involving a cycle of fact finding, problem definition, idea generation, idea combination, solution finding, solution testing and acceptance winning. 
  • The success rate for each problem should be 100%, not one in 3,000

Don't ask your staff to contribute ideas, just so you can kill them. Do away with the judgement-based innovation funnel, and be creative instead.

Wednesday, 11 November 2015

People, Process, Technology, What?

There was a time when we thought that the three enablers for Knowledge Management were People, Process and Technology. We now realise (if presentations at KM World are an indication) that there is a fourth enabler. But what is it?



It has been obvious for a while that people, process and technology are not enough - that there are many examples of KM where roles are in place, processes defined and technology acquired, but knowledge still does not flow.

At Knoco, we have been advocating four enablers for a while now (see "the 4 legs on the KM table"), and many presentations at KM world also showed a fourth enabler - a fourth circle on the diagram.

In quite a few cases, the fourth circle was "Content".

I think the addition of Content is a mistake, for the following reasons.


  • Content is a form of knowledge, and knowledge is what is being managed. Effective management of content requires roles, processes and accountabilities, but content cannot be managed using content. Roles, processes and technologies are enablers - they are input factors. Content is an output. Adding content to the list of enablers is like saying "the components for making babies are a man, a woman, a procreative process, and a baby". A baby is in fact the outcome of the man, the woman and the procreative process, just as content is the output of roles, processes and technology. 
  • If content is added as the fourth circle, then the assumption is made from the outset that KM equates to content management.  However we know that KM is about tacit as well as explicit, and about conversation as well as content. To add content as the fourth circle dooms you to focusing on the explicit content, and ignoring the conversations by which tacit knowledge is created and shared.
Some of the people I have spoken to on this matter say that "by content, we mean the rules and regulations around content - the taxonomy, metadata and so on". However, this is not content itself; it is the governance you place on content. 

Which brings me to the second approach to that fourth circle.

In the second approach, that fourth circle is represented by governance.

This includes the governance around content, but also the governance around behaviours and conversations, and the governance elements of expectations, incentives, performance management and support.

At Knoco, this is also how we see the fourth circle. Governance is an enabler of knowledge flow - an enabler of conversations and content - and is the weakest element in many KM programs. Governance is the way that behaviours are embedded and supported; it will evolve as KM implementation progresses, and can be described in 5 letters.

Adding governance as the fourth circle prompts you to focus on this often-missing element, which may make the difference between success and failure of your KM program, whether you focus on content alone or on all types of knowledge.

Tuesday, 10 November 2015

Measuring the waste in the KM supply chain

If KM is a lean supply chain of knowledge, how can you measure the waste in order to eliminate it?


Shredded waste, image from wikimedia commons
The concept of Knowledge Management as a supply chain is one I have been incubating for a few years (see here, here, here, here for example). I presented the idea at KM World last week, and got some very good feedback.

I presented the idea of KM as a supply chain, providing knowledge to the knowledge worker, and used the concept of the lean supply chain to suggest that we could eliminate "the 7 wastes", and make the transfer of knowledge more efficient.

Then one person asked, "How can we measure that waste?"

I didn't know the answer, but said I would think about it and blog an answer.

Here it is.


  • Waste #1. Over-production—producing more knowledge than we need.

We might measure this by measuring how much of what is published is actually useful. We could, for example, look at the read-rates of content (how much content never gets read), or the duplication of content. We could look at the push/pull ratio in communities of practice, balancing the number of question-led discussions against the number of publication-based discussions (see this analysis of LinkedIn discussions, for example).


  • Waste #2. Waiting. 

Here we measure the clock-speed of knowledge, such as the time it takes for a community question to be answered, the time it takes to find relevant synthesised knowledge, or the time it takes for lessons to be a) collected and b) embedded into guidance.


  • Waste #3. Unnecessary transport of materials. 

In our knowledge management world, this really refers to hand-offs, and we might measure the number of links or steps between knowledge supplier and user. Communities of practice, for example, where "ask the audience"-type questions can be asked and answered directly by the knowledge holder, will minimise the number of hand-offs. With a large community of practice, everyone is at One Degree of Separation. 

  • Waste #4. Non-value-added processing—doing more work than is necessary. 

We might measure this by looking at the degree of processing the end user has to do to get an answer to their question, and how much synthesising is done by the user, which could be done further up the supply chain.  For example, does a user have to read and understand all lessons in a database on a particular topic, or have these already been synthesised into guidance?


  • Waste #5. Unnecessary motion. 

We measure this by counting the number of places the knowledge user needs to go to in order to find relevant knowledge. Do they have to visit every project file to find lessons, or are the lessons collected in one place? Is there one community of practice to go to, or many? LinkedIn, for example, has (or had at one time) 422 discussion groups covering the topic of Knowledge Management rather than only one. That is a waste of 421 groups (99.8% waste).


  • Waste #6. Excess inventory

Like waste #1, we look at the unnecessary, duplicate or unread knowledge in the knowledge bases and lessons learned systems.


  • Waste #7. Defects

Here we measure how much knowledge is out of date, and how much is of poor quality. Some organisations, for example, measure the quality of lessons within their lessons database, and often find that much of the content is of very poor quality.
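As a purely illustrative sketch (the function names and sample figures below are hypothetical, not taken from any real analytics system), two of these measures - over-production waste and unnecessary-motion waste - could be computed like this:

```python
def overproduction_waste(read_counts):
    """Waste #1: fraction of published items that were never read."""
    unread = sum(1 for count in read_counts if count == 0)
    return unread / len(read_counts)

def motion_waste(num_places, needed=1):
    """Waste #5: fraction of duplicate places a user must search."""
    return (num_places - needed) / num_places

# 422 LinkedIn groups on KM where one would do: 421 wasted groups
print(f"{motion_waste(422):.1%}")  # -> 99.8%

# 10 published documents of which 7 were never read
print(f"{overproduction_waste([0, 0, 0, 0, 0, 0, 0, 3, 12, 1]):.1%}")  # -> 70.0%
```

The same pattern extends to the other wastes: each becomes a ratio of unnecessary effort (or unused output) to the total.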

Monday, 9 November 2015

10-step creative process (video)

Here's a fantastic video about the creative process - really inspiring!


Friday, 6 November 2015

Is Knowledge Management certification counter-productive?

Last month I posted about the planned ISO standards for KM, and received some positive feedback, and some negative feedback.

Christian De Neef, for example, posted this comment:
If we look at the history of ISO 900x quality standards, it appears that some organizations complied with the standard because they needed to (to be able to get business from government for example) whilst others did the effort because they actually “believe” in quality. The first category has never achieved anything more than administrative compliance. On the other hand, the companies that worked on quality management initiatives because it was part of their values, have achieved remarkable success. Possibly, they would have been successful even without the ISO standard. I'm afraid that a KM standard would go the same path: it's not about the standard, it's about believing KM is at the heart of managing your organization!

Certainly a standard can, in some cases, be treated as a compliance exercise. This is true of the ISO Quality standards - some companies "tick the box", others take them seriously.  If a company was immediately told "you must be compliant against KM standards" (for example in order to qualify for a big contract) there might be a tendency to just tick the boxes.

But not always.

To explore this issue, let us divide companies along the following two vectors, to create a Boston Square:

  • Whether or not the company planned to do KM anyway before a requirement for certification was identified, and
  • Whether or not the company will take KM seriously as a core management discipline

These two vectors give us four quadrants.


The four quadrants are as follows:

Where the company plans to do KM and do it seriously, the KM Standard should be of great help in avoiding common pitfalls and delivering an effective KM program. This is the primary purpose of the standard - to help people who want to do KM to get it right by applying the basic principles. (It is surprising how few companies even get the basic principles in place.)

Where the company is not already planning to do KM, but will take it seriously, the standard should be of great help in introducing the company to good-quality KM.

Where the company was planning to do KM but was not going to take it seriously, the standard will at least give them some guidance on avoiding the common pitfalls (though there should be few companies in this quadrant - why want to do KM at all, if you don't want to do it well?).

Where the company was not already planning to do KM, is forced into it (perhaps by a contract requirement), and will not take it seriously, the standard will add no value and will be treated as a box-ticking exercise. Even then, box-ticking compliance may surprise them if and when KM starts to add value.

In three out of the four cases above, the standard adds value.
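The four quadrants can be sketched as a simple lookup (a hypothetical illustration; the outcome wording paraphrases the paragraphs above):

```python
# Boston Square: (planned KM anyway?, will take it seriously?) -> outcome.
OUTCOMES = {
    (True, True): "helps avoid pitfalls and deliver an effective KM program",
    (False, True): "introduces the company to good-quality KM",
    (True, False): "still gives guidance on avoiding common pitfalls",
    (False, False): "box-ticking only; adds no value",
}

def standard_adds_value(planned_km: bool, takes_seriously: bool) -> bool:
    """The standard adds value in every quadrant except the last."""
    return (planned_km, takes_seriously) != (False, False)
```

Running `standard_adds_value` over all four quadrants confirms the three-out-of-four result.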

A standard is not a substitute for a "wish to take KM seriously", it is an aid, if that wish exists, to getting it right and avoiding the common mistakes we see so frequently.

Thursday, 5 November 2015

The role of Warnings in Knowledge Management

When we look at horizontal peer-to-peer knowledge transfer in an organisation, the knowledge which is transferred tends to be knowledge of practice, knowledge of product, or knowledge of customer. With vertical knowledge transfer - transfer of knowledge between workers and management - a crucial component of the knowledge which needs to be transferred is warnings.



Warning!
Originally uploaded by Håkan Dahlström
Nancy Dixon identified the third age of KM as being the integrated flow of knowledge up and down the hierarchy. This is still a very difficult thing to get right, as is profoundly illustrated by Christopher Burns in his book "Deadly Decisions - how false knowledge sank the Titanic, blew up the shuttle, and led America into war". Burns mentions several cases where warnings transmitted from workers to management were ignored, downplayed or rationalised away completely. He cites many high-profile examples:
  • Multiple warnings, often very detailed, that Al Qaeda was planning a major assault, using aircraft, within the USA
  • Repeated warnings that the O-rings on the Challenger shuttle were at risk at low temperatures (the same O-rings that failed at low temperature, with catastrophic loss of the shuttle and all crew)
  • Warnings that the Titanic was steaming into an ice field
  • Warnings that the cooling water system on the Three Mile Island plant was faulty, and might lead plant engineers to make decisions that could lead to melt-down
In his book, Burns talks about the psychology of information and knowledge processing, and reinforces how we form mental models which can be difficult to shift. A strong mental model can reject facts that don't fit, and companies and organisations can create structures that actually make this worse. The Bush Administration, he argues, was particularly bad at this, surrounding the president with like minded people, and producing a hierarchical knowledge supply chain which filtered out news that didn't fit the preferred model. The knowledge supply chain was fed at the base with warnings that might have averted 9/11, and might have avoided the Iraq war, but these warnings became weaker as they moved up the hierarchy, or were filtered out completely. The knowledge that was supplied to the top, was the knowledge that The Top wanted to hear.

However, if an organisation is to avoid disaster, it must be very sensitive to warnings. Warnings cannot be filtered out or ignored if we want to avoid our own versions of the Titanic, 9/11, the Enron collapse, the Challenger disaster, or Three Mile Island. The knowledge supply chain must carry these warnings faithfully and accurately. Burns says that

"Warnings are a special class of dissonant information and they are difficult to heed for three reasons. First warnings .... often come from people deep within the organisation who have few credentials and are often hard to understand. Secondly, they contain a prediction about the future based on facts, values and concepts which might be different from those of the listener. It is important for the person giving the warning to remove as many of these obstacles as possible. And third, there's a pathology of giving and receiving warnings that needs to be overcome".
He describes this pathology as the warner, anxious to get the message across and worried that the "warnee" will not listen, having a tendency to overstate the danger. The warnee gets used to these overstatements, and discounts the significance of the message, which prompts the warner to even greater exaggeration (the "cry Wolf" effect). He says that the only way around this is to lay out the facts for the warnee, and let them connect the dots themselves. The end result is that warners find warning to be exhausting, confrontational and career-threatening. Many of the people Burns identifies as having tried to deliver warnings, either lost their jobs or retired soon afterwards.

So to allow warnings to reach the decision making layer, we need
  • an openness at senior level to dissonant voices and to the "weak signals" of warnings (perhaps using an analysis function specifically to look for these)
  • a knowledge supply chain that is as short as possible, either through a flat information hierarchy, or the sort of cross-hierarchy knowledge sharing events that Nancy Dixon describes
  • to reward warners rather than punish them, much as people are now encouraged and rewarded in safety-conscious cultures for identifying near misses or unsafe conditions. In a safety context, people are encouraged to warn, and a lack of warnings is seen as a sign that something has gone wrong with the system. We need a similar approach to warnings in all areas - not just safety warnings, but warnings of changes in the market, warnings of inefficient processes, warnings of complacency and of obsolete thinking.
Making the vertical knowledge supply chain work efficiently and effectively may just be the biggest challenge that will face Knowledge Management going forward.

Wednesday, 4 November 2015

The Grandparent effect

A couple of presentations chimed for me at KM World yesterday.

Firstly, a statement by David Snowden about how older staff and younger staff are more receptive to knowledge and ideas than mid-career staff, and how the retention and transfer of knowledge between these two age groups may therefore be easier (he called it the grandparent effect).

Secondly, a range of stories from Colin Cadas and Katarzyna Cichomska on the KAMP (knowledge acquisition and modelling process) at Rolls Royce, where young staff were trained to capture knowledge from people due to retire. In one case, a group of engineers with 30 years' experience were retiring, and the KAMP project was run by a group of 19-year-old new hires who had been given training in knowledge elicitation and packaging.

There are several advantages to pairing up very experienced and very inexperienced staff in knowledge capture programs, rather than expecting the successor to the experienced person to capture the knowledge:
  • There will be no rivalries between the experienced person and the junior. The junior is "no threat"
  • The experienced staff will explain all the details, rather than making assumptions about what the juniors already know
  • The juniors take the documentation load from the seniors
  • The need to document means that the juniors are active participants rather than passive listeners. They learn better when they need to create the official record.
  • When the experts check the documentation, this provides a second learning opportunity for the youngsters
  • They leave the program with a training in KM techniques
  • They leave the program with an "ownership" stake in the knowledge record, and are much more likely to refer to it and re-use it


Monday, 2 November 2015

How to introduce After Action Review

Here's a story about introducing After Action Review at an industrial plant. The story is told in quotes from the KM people involved. There's some good learning here!



factory-worker
Originally uploaded by Kalinago English
"It was important that the management group here and all the people would be aligned, including the contractor. Our CKO came in and put on his presentation to the management group, as well as to the contractor, and after the presentation nobody had any doubt that this was intuitively the way to go, though there was scepticism about the AARs themselves. To a person they supported the concept, but I am not sure they felt comfortable that it would work out in the field".
"We got the senior people in the plant committed to the idea, and some good people seconded onto the project. That was the fundamental key to success"
"I think the biggest disappointment was at the stage in the training where we get to the bit of ‘now how do you apply it’. We've gone through the chalk and talk, we've done AAR's in Bird Island, building the tower, and now - what is going to happen in the real world? We have the desire that we come out with a specific statement, and it just doesn’t work - people need time to let the ideas they've been presented with sink it"
"After the 2nd-3rd day of meetings with the supervisors, talking about general things, you could tell they were losing interest, so I said ‘let me work with one of your craft teams’; and it took off from there. That’s what I mean by getting your foot in the door; they didn’t want me bothering the workers, but it became apparent to the Front Line Supervisors that this was they way to go. So we got it to the field; where the rubber meets the road. That’s where you get most from an AAR, and you can start cascading it up. Unless you get in at ground level you miss a lot of resources".
"We took what they gave us. Because there was scepticism, the closer to the date we got the more sceptical and worried people got. They backed me up to where they gave me only 2 support teams and no crafts people. I was discouraged, but took what they gave me and worked with that. I thought, well if that’s what they will give me I will work from there. So I got my foot in the door and expanded it from there. We were willing to be flexible and take what they gave us. It would have been real easy to drop the thing. My mindset was that if you start working with the supervisors, very quickly they will get into their own area of work, and will not have that much in common".
"These guys would be craftsmen themselves and become Front Line Supervisors during turnaround. Good workers, in the plant all year, with existing skills and knowledge, and some leadership ability. They are not used to conducting meetings or facilitating, so it was important to give the support. Get out there, facilitate a couple, bridge it over to them and be there with them, then back away and let them go".
"We went from a formal workshop approach, to a very informal workshop round a table, and field support afterwards. My advice to others would be to know your customer, know who you’re dealing with, know the people, and know what they are more apt to expect. If you’ve got front office people, more technically oriented, and they have been to workshops in the past, then the workshop training is nice; good hands on stuff. But if you have people who are used to moving all day, don't sit in offices, don't go to workshops, hands-on people, then I would use this informal approach, with field support. I’m not saying that the front people wouldn’t enjoy the workshop at some time, but where we were, they were wanting to get on with the job".
"Another point was that there was some action taken almost immediately after things were brought up. We were not just generating Lessons Learnt, but things were improved the next day. That helped to sell the crafts people and the foremen on the use of AARs".
"I generated the lessons learnt on a day to day basis, and I highlighted the day before in bold. I gave 2 lots of feedback. I gave the 1 pager back to the team the next day, and talked from that a bit. They saw how it was being used, and how action was coming out. I also took the lessons and I had 2-3 minutes in the 3pm turn-around meeting. I spoke about 1 or 2 of the lessons and that helped to build some credibility in the teams, and hopefully planted the seed for the next exercise. The report of lessons got to be 5-6 pages, and typically these would be read during the meeting and they took them with them - you never saw them left on the table at the end of the meeting".
