Friday, 16 February 2018

Why beginners and experts behave differently in KM

Experts and beginners behave differently in Knowledge Management systems. Here's why.


Image: Great Meadows Fishing Day by U.S. Fish and Wildlife Service, on Flickr
Confucius said "Shall I tell you what true knowledge is? When you know, to know that you know, and when you do not know, to know that you do not know—that is true knowledge". So the true expert is the person who knows what they know, and what they don't know.

Before you get that true knowledge, before you become an expert, you don't know what you know and you don't know what you don't know.  That's a description of a beginner, or a novice; beginners do not yet know the level of their ignorance.

In knowledge management terms, that means you have to treat the beginners and the experts differently. Experts, and budding experts, will become fully involved in Communities of Practice, gaining knowledge through discussions and through questions. When they need knowledge, they will ask for it and receive answers. Their questions are clear and focused, because they know what they don't know. They will tend not to use the Knowledge Base so much, as they already know what they know, and are looking to "fill in the gaps" in their knowledge - gaps that probably aren't covered in the knowledge base.

The novices, on the other hand, will not take part in the community discussions, although they may "lurk" - read the conversations but without taking part. Because they don't know what they don't know, they don't know what questions to ask. When they do ask questions, they tend to be general and basic rather than specific and targeted. However the beginner will find the knowledge base - the wikis, the FAQs - very useful, because it gives them the full picture. It shows them all aspects of the knowledge, so they can understand the full range of the things they don't know.

The demographics of the workforce determine which knowledge management tools to focus on. A company with large numbers of experienced staff will get huge value from communities of practice, and from question-led discussions among the experts and budding experts. A company with large numbers of new and inexperienced staff may get more benefit from building FAQs, knowledge bases and wikis.

Thursday, 15 February 2018

What to do when knowledge is a core product deliverable

For some projects, knowledge is their most important deliverable, but how is that deliverable defined?


We are used to thinking of knowledge as an input for a project, but it is often an output as well. Projects can learn new things, and can create new knowledge for an organisation. Often we assume that this new knowledge will be transferred through the lesson learned system, but is that really enough?

Usually it isn't, and instead a different approach might be better.

Lessons are increments of knowledge, usually identified after the event, at the end of an activity or a project stage. In an ideal world every lesson would be associated with an action, and each action would lead to the update of a best practice, a doctrine or a corporate standard. Lessons are usually captured in a team meeting such as a Retrospect.

However if a project is doing something new - something which has never been done before - then the standard lesson approach is insufficient. Rather than identifying and capturing the learning after the event, the organisation should identify the potential for learning at the start of the project, make sure resources are assigned to learning, and require the project to create a guideline, best practice or standard as a project deliverable. 

Imagine an organisational project to set up the company's first manufacturing facility in Asia - the first of many such plants in an expansion program. The project is expected to deliver a manufacturing plant with a certain production capability, and the success of the project will usually be measured by whether the plant is delivered on time, to quality and to budget. However the success of the program will be influenced by how much knowledge is passed from the first plant to the others, and the value of this knowledge may be higher than the value of the plant itself.

Therefore the project can be given a second deliverable - to create best practice guidance documents, doctrine or first-pass standards on topics such as
  • Doing business in that particular Asian country
  • How to negotiate the bureaucracy
  • How to obtain permits
  • How to construct the plant efficiently and effectively
  • How to recruit a skilled workforce
and many other topics. These deliverables should be managed through the project KM plan, and reported to management in the same way as other deliverables.

This set of knowledge deliverables could be given its own resources and its own workstream, in order to make sure that the knowledge is captured. Without this focus on knowledge, it is quite possible to get to the end of the project and find that no knowledge has been captured at all. 


Wednesday, 14 February 2018

How many KM pilots do you run at once?

KM Pilots are a key step in agile KM implementation, but how many pilots do you run?


Knowledge Management pilot projects are a core component of KM implementation. As we explained last month, a pilot project uses KM to solve a business problem in order to test and demonstrate that KM can do what it is supposed to do, and so that you can learn enough to improve and enhance the framework using experience from the pilot.

But how many pilots do you need, and how many can you run at once?

The answer is - you need enough pilots with enough positive results and enough learning that

  • you have fully tested your Knowledge Management Framework, and
  • you have enough evidence to convince both senior management and the knowledge workers that KM adds enough value to be adopted.

As for how many you can run at once, that depends on the level of coaching, mentoring and support resources you have available. Do as many pilots as you can handle, and no more. The process for deciding which pilots to undertake is as follows:

  1. Canvass the business to draw up a list of business issues which KM can help solve. This blog post gives you some pointers on which business issues to look at, and you should be able to come up with a long list. 
  2. Rank the pilots against the 4 criteria of potential measurable impact, management support, doability and ability to upscale (see blog post for more guidance).
  3. Starting with the top ranking ones, select as many as you can support given the time and resources.
  4. Also try to select a portfolio of pilots that will test all elements of the KM framework.
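The ranking step above can be sketched in code. This is a minimal illustration, not a tool from the post: the four criterion names follow the post, but the 1-5 scoring scale, the example pilot names and scores, and the capacity limit are all invented for illustration.

```python
# Sketch of step 2-3: score each candidate pilot against the four
# criteria, rank by total score, then take only as many as the
# coaching/support capacity allows.

CRITERIA = ["impact", "management_support", "doability", "upscalability"]

def rank_pilots(candidates, capacity):
    """Sort candidate pilots by total criteria score (highest first),
    then keep as many as the support resources can handle."""
    ranked = sorted(candidates,
                    key=lambda p: sum(p[c] for c in CRITERIA),
                    reverse=True)
    return ranked[:capacity]

# Hypothetical candidates, scored 1-5 per criterion.
candidates = [
    {"name": "Sales bid lessons", "impact": 5, "management_support": 4,
     "doability": 3, "upscalability": 4},
    {"name": "Engineering wiki", "impact": 3, "management_support": 2,
     "doability": 5, "upscalability": 3},
    {"name": "Ops community", "impact": 4, "management_support": 5,
     "doability": 4, "upscalability": 5},
]

selected = rank_pilots(candidates, capacity=2)
print([p["name"] for p in selected])  # ['Ops community', 'Sales bid lessons']
```

In practice the fourth step matters too: after ranking, you would adjust the selection so the portfolio as a whole exercises every element of the KM framework, which a pure score-sort does not guarantee.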

In BP, for example, with our central team of 12 full-time KM staff, we ran 4 pilots at once. In Mars, with a smaller team, they ran 2 a year.

Choose your pilots wisely, and run as many as you can handle until KM is tested, refined and proven.

Tuesday, 13 February 2018

Good, cheap, fast - choose all three

There is a well known saying; "Good, Fast, Cheap - pick any two." It's wrong.



The idea behind this saying is that there is a certain amount of work to be done to deliver a task, service or product, and that work is bounded by the limits of cost, time and quality.

If you try to improve any two of these factors, the third will be impacted. If you want something fast and cheap, for example, it won't be any good. If you want it good and fast, it won't come cheap.

It's like seeing work as an incompressible cube - if you press on two axes, the third will expand.

This is only true if the work really is incompressible. There is in fact a way to make things faster, better and cheaper, and that is the removal of waste.

If there is waste in the body of work, then removal of the waste will allow all three axes to contract.

This is the principle behind Lean approaches to manufacturing, supply chain and product design, and is also an area that Knowledge Management can help with. Through effective KM, we can reduce waste from activity, and compress the "work cube".

Good, cheap, fast - if you use KM to reduce waste from work, you can pick all three.

Monday, 12 February 2018

Connect and Collect - the left leg and right leg of KM

The Connect and Collect approaches in KM are like the left leg and the right leg - you need to use both.

Image from Public Domain Pictures

I was working last week with a client who is very enthusiastic about the use of Knowledge Management processes to drive conversations between staff, as an antidote to previous attempts to collect knowledge. Those previous attempts had resulted in a huge lessons database which people viewed as a chore and a waste of time.

Much as I applauded their new focus on conversation and Connection, I urged them not to neglect the Collect part of the knowledge cycle, as these two aspects of KM go hand in hand.

In fact, they are like the left leg and the right leg. A focus on Connecting can help you make a great stride forward, but you need the other leg to catch up if you want to make real progress. 

I told them this story:

We were working with a KM team who had asked us to come into their organisation and run some Retrospects on major successful bids. They wanted to develop and deploy knowledge of how to bid successfully. 
We held a series of Retrospects, and they worked very well. We had some fantastic dialogue within the bid team, and with the internal client, and identified a series of learning points. We found some really good success factors which should be repeated in future, and a whole set of opportunities for improving the bid process, including some things that were really frustrating the bid teams (mostly related to inappropriate company policies), and we communicated these to other projects. Everyone was very enthused by the process.
A few months later the client called, and said "That Retrospect process is rubbish". That took me aback, as I know from experience that it is a very powerful and robust process, so I asked him why he said this. He replied - "those issues that were frustrating the team when we started, are still there. They have come up again in the latest Retrospects. Nothing has been changed". 
Well - nothing would be changed, if all they did was hold Retrospects. Retrospects are great for identifying team learning, but there needs to be a follow-on process to take action on the issues, and for this particular company those actions needed to be taken at a high level in the organisation. They had not implemented a process or workflow for documenting the lessons and addressing the actions, and had no engagement from senior managers in the learning process. Retrospects, like so many KM tools, need to be part of a system, and no tool in isolation will stand in for the system as a whole.

Connect and Collect - Conversation and Content - need to work together. Conversation is where content is born, and content is something to talk about together. Retrospects need to work together with the lesson management system, not in isolation.

Connect and Collect are like the left leg and right leg of KM - there is only so far you can travel using one and not the other.

Friday, 9 February 2018

Knowledge drinking fountain, or knowledge firehose

Too much knowledge is a bad thing. It's better to drink from the fountain, than be hit by the firehose.


Larry Prusak and Tom Davenport wrote in their classic book "Working Knowledge" that "Knowledge can move down the value chain, returning to information and data. The most common reason for what we call 'de-knowledging' is too much volume".

We see this happening in many organisations, which seem to create a deluge of internal emails, blogs, microblogs and so on that can become overwhelming. You can no longer drink from this Knowledge Firehose - you become overwhelmed and drown in the torrent of (what has become) information and data. Knowledge can move so far down the value chain that it becomes Noise.

The Knowledge Firehose is counter-productive, and can destroy exactly the value you are trying to create. Instead, aim for the Knowledge Faucet - "Knowledge on tap".

You do this in three ways.

Don't broadcast - narrowcast. There is no need to send people everything - set up defined channels so that new knowledge can reach the people who need to see it, and will not clog up the inboxes of people who do not need to see it.
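One way to picture narrowcasting is as topic channels: an item of new knowledge is tagged by topic and delivered only to the people subscribed to that topic. The sketch below is purely illustrative - the class, topic names and messages are invented, not part of any product discussed in the post.

```python
# Minimal sketch of "narrowcast" channels: knowledge items tagged by
# topic reach only the subscribers of that topic, instead of everyone.
from collections import defaultdict

class KnowledgeChannels:
    def __init__(self):
        self.subscribers = defaultdict(set)   # topic -> set of people
        self.inboxes = defaultdict(list)      # person -> delivered items

    def subscribe(self, person, topic):
        self.subscribers[topic].add(person)

    def publish(self, topic, item):
        # Deliver only to people who have opted into this channel.
        for person in self.subscribers[topic]:
            self.inboxes[person].append(item)

channels = KnowledgeChannels()
channels.subscribe("alice", "bidding")
channels.subscribe("bob", "manufacturing")
channels.publish("bidding", "New lesson: pre-qualify clients early")

print(channels.inboxes["alice"])  # ['New lesson: pre-qualify clients early']
print(channels.inboxes["bob"])    # [] - bob's inbox is not clogged
```

The design point is simply that delivery is driven by the receiver's declared need, not by the publisher's reach.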

Work through Pull, not Push. Rather than publishing, work through seeking and searching. Make knowledge visible, findable and available, and develop a culture of learning before doing - looking for knowledge when it is needed.

Aim for "Just In Time" knowledge supply. Ideally knowledge should be transferred to people when they need it, not "in case they need it". If you can automate this supply, even better! The main point here is that people don't pay attention to knowledge until they actually need it. Up until that point, it's Noise. At that point, it becomes of huge value to the receiver.

Thursday, 8 February 2018

Wishful thinking - the inevitable outcome of "not knowing"?

The almost inevitable outcome of "not knowing what you don't know" is wishful thinking. Even the use of benchmarks may not help.



Wishful thinking is one of the curses of project management. Any project team without perfect knowledge of the challenges they will face tends to underestimate them. They assume things will work well, they assume the "best case scenario", and they end up with an over-optimistic view of the project, an over-optimistic view of costs, and an over-optimistic view of schedule.

Daniel Kahneman gives a great example of this in his book “Thinking, Fast and Slow”.

He describes a project, many years ago, where he convened a team to design a new high-school curriculum and to write a textbook for it (ironically, it was about judgement and decision making). They had held several team meetings to construct an outline of the syllabus, had written a couple of chapters, and had even run a few sample lessons. They decided to do some planning, and to estimate how long it would take to finish and submit a draft of the textbook. Kahneman knew that one of the most effective ways of estimating is not to start with discussion, but to get everybody to submit their judgment individually, so he asked everybody to write down their estimates, and then collected these in. Estimates ranged from 1½ years to 2½ years, with a median of two years, to finish and submit the first draft.

Then he had another bright idea. He asked one of the curriculum experts on the team, Seymour, whether he could think of any examples of similar projects in the past, and how long they had taken.

“He fell silent” Kahneman writes. “When he finally spoke, it seemed to me that he was blushing - embarrassed by his own answer: ‘You know, I never realized this before, but in fact not all the teams at a stage comparable to ours ever did complete their task. A substantial fraction of the teams ended up failing to finish the job’”.  That fraction was 40%.

Kahneman then asked how long it took those who actually had finished the job.
“I cannot think of any group that finished in less than seven years” he replied, “nor any that took more than 10”. 
(Note that this guy himself had, shortly before, estimated it would take about two years!).  Then Kahneman asked how the current team ranked compared to the others (perhaps they were much better and could finish much faster!).  “We are below average,” he replied, “but not by much”.

So now the team had new knowledge: that comparable or better teams often fail at this job, and that those which succeeded took four times longer than the group's estimate.

So what do you think the group did?

Kahneman tells us

“Our state of mind when we heard Seymour is not well described by stating what we “knew”.  Surely all of us now “knew” that a minimum of seven years and a 40 per cent chance of failure was a more plausible forecast than the numbers we had written on our slips of paper.  But we did not acknowledge what we knew.  The new forecast still seemed unreal, because we could not imagine how it could take so long to finish a project that looked so manageable.  All we could see was a reasonable plan that should produce a book in about two years…  The statistics that Seymour provided were treated as base rates normally are – noted and promptly set aside.  We should have quit that day.  None of us were willing to invest six more years of work in a project with a 40 per cent chance of failure…  After a few minutes of desultory debate, we gathered ourselves together and carried on as if nothing had happened”.

The book was eventually finished eight years later.  The initial enthusiasm for the idea in the ministry of education had waned by the time the text was delivered, and it was never used.

This is a very interesting story.  Not only did the group not know what they didn’t know, they were unable or unwilling to accept the new knowledge when it was presented.  They ignored it, and continued with the wishful hope that they would be finished in two years.

Kahneman concludes from this that there are two different approaches to forecasting: the inside view and the outside view.  The inside view is based on “what you know that you know”, and what the team knew was that they had made some good progress already (albeit completing some of the easiest chapters at a time when enthusiasm was at its peak), and they extrapolated from this good progress.  But they didn't know what they didn't know, and they didn't foresee the bureaucracy, the distractions, and the conflicts that would eventually arrive.  Because they were anchored to their inside view, they would not accept the outside view, even though the outside view was based on reliable baseline statistics.
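The two views can be contrasted with a toy calculation built from the numbers in Kahneman's story. The individual estimates below are an illustrative spread around the reported range; the reference-class figures (seven to ten years, 40% failure) come from the story itself.

```python
# Inside view vs outside view, using the figures from Kahneman's story.
from statistics import median

# Inside view: aggregate the team's individual guesses (Kahneman took
# the median of independently written estimates).
inside_estimates = [1.5, 1.75, 2.0, 2.25, 2.5]   # years; illustrative spread
inside_view = median(inside_estimates)

# Outside view: the reference class of comparable curriculum projects.
completed_durations = [7, 8, 9, 10]              # years - "none in less than seven, none more than ten"
failure_rate = 0.40                              # fraction of teams that never finished

outside_view = min(completed_durations)          # even the best case was seven years

print(inside_view)    # 2.0
print(outside_view)   # 7
```

The gap between the two numbers, plus the 40% base rate of outright failure, is exactly the information the team "noted and promptly set aside".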

The rest of Kahneman’s book, which I highly recommend, explores other aspects of the psychology of decision-making, and gives many examples of how people make wrong decisions as a result of overconfidence based on limited data.

There are many implications here for knowledge management: the need to access outside knowledge through activities such as Peer Assists; the need to collect baseline performance data to act as a “reality check” for optimistic teams; and the need for continuous project learning, to recognise when predictions are optimistic and to renegotiate them.

Without effective knowledge management, and without effective knowledge-based decision making, project predictions will be, as Kahneman’s project was, based largely on wishful thinking.
