Wednesday, 12 December 2018

The risks when an algorithm takes your job

An interesting Forrester blog highlights some of the risks of process automation


NAO Robot
image from wikimedia commons
We live in a world where automation is beginning to impact knowledge work, in the same way that it impacted manual work in the last century.  On the one hand this is great news for organisations, as it can potentially revolutionize the productivity of the knowledge worker. On the other hand it brings risk.

One attractive opportunity is process automation, where a process that a human used to operate can become automated. The rules, heuristics and knowledge applied by the human can be extracted, using various knowledge management techniques, and turned into an algorithm which a computer or robot can use.

So a job like drafting a will, or cooking a meal, or monitoring a refinery, can be automated. Human knowledge is converted into algorithms, and the know-how that a human used to employ is passed to a machine which will reproduce the logic faithfully, tirelessly, without error, and for a fraction of the lifetime cost.

The problem of course is that know-how is not enough, and we also need know-why.  The know-how is great in a predictable environment, but the know-why is needed once you move into uncharted territory.

That's one of the messages given in this Forrester blog entitled "Ghost In The Machine? No, It’s A Robot, And It Is Here To Help". The author, Daniel Morneau, is an advisor on the Technology council, and writes in the blog about robotic process automation, the benefits it will bring, and the governance it will need.

He also quotes one industry leader who identified a risk he had not anticipated:

"The hard lesson I learned is that once that knowledge is built into the bot and the employee goes out the door, it’s gone forever,” he said...“We captured the process in the code, right? I mean, the bot knows how to follow the process; we’ve just lost the business logic behind it”

When Morneau asked him what he would do differently, having learned this lesson, he replied:

"I’d document the business logic where I can (and) I’d find a way to keep the best employees whose roles are being replaced so their deep understanding of the business logic can be available as we continue to support our businesses. I mean, the business function that that system is used for is not going away, and having employees who have a deep understanding of our business is the hardest thing to hire for".

So that's an interesting conclusion about the need for human knowledge of business logic.

We may increasingly outsource some of the know-how to the robots, but we still need to retain humans with the know-why. 
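
To make that lesson concrete, here is a purely illustrative sketch (the rule, the threshold and every name are invented, not taken from the article) of the difference between encoding only the know-how and recording the know-why alongside it:

```python
# Purely illustrative: an invented invoice-approval rule for a hypothetical bot.
# The function body is the "know-how" the bot follows; the docstring records
# the "know-why" that would otherwise walk out of the door with the employee.

def approve_invoice(amount: float, supplier_risk: str) -> bool:
    """Know-how: auto-approve small invoices from low-risk suppliers.

    Know-why (the business logic worth documenting):
    - the 10,000 threshold matches the finance team's audit-sampling policy;
    - high-risk suppliers always go to a human because of a past fraud case.
    """
    if supplier_risk == "high":
        return False            # escalate to a human reviewer
    return amount <= 10_000     # below the audit-sampling threshold


if __name__ == "__main__":
    print(approve_invoice(4_500, "low"))    # True  - auto-approved
    print(approve_invoice(4_500, "high"))   # False - escalated
```

The code still only carries the rule; it is the documented rationale, and the people who understand it, that preserve the know-why.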

Tuesday, 11 December 2018

A timeline of KM at McKinsey

McKinsey is one of the leading Knowledge Management organisations in the world. Here is how they got there.


Image from wikimedia commons
I have referred to McKinsey a few times on this blog, describing their approach to knowledge centers, and some of the KM roles they have in place. McKinsey are one of the leading firms in KM terms, and a many-time winner of the MAKE awards.

Below, from this case study, I have distilled a roadmap of how they got to their leading position. The case study was written in 2006, but seems to take the story up to the end of the millennium.

  • 1970s - a period of slow growth and increased competition for McKinsey, and a decision to invest in expertise and strategy. "One Firm" policy. Various informal networks, but no attempt to reuse knowledge from one assignment to the next.
  • 1976 - new MD and increased focus on skill development, training and functional expertise. Creation of a standard classification of key tasks.
  • 1980 - Fred Gluck (later to become MD) starts a group focusing on knowledge building. Practice bulletins introduced.
  • 1982 - Creation of 15 "centres of competence" covering business areas such as finance, logistics, and strategic management. Experts appointed as practice leaders, each area with a full-time practice coordinator. 
  • 1980s - still no mechanism or process to capture knowledge. Consultants encouraged to publish and share, but contributions are rare.
  • 1987 - formal KM project set up, with 3 objectives: a database of project details (Practice Information System - PIS), a knowledge base covering the main areas of practice (Practice Development Network - PDNet), and an index of specialists and core documents (Knowledge Resource Directory - KRD).
  • Late 1980s - KRD evolves into the McKinsey yellow pages; PIS and PDNet are populated.
  • 1990 - Introduction of client service teams focused on working long term with clients and developing deep understanding of client issues. Staff encouraged to publish books and articles.
  • 1993 - McKinsey spending $50 million annually on knowledge building
  • 1994 - Rajat Gupta becomes MD, strong supporter of KM
  • 1995 - "Practice Olympics" created in order to promote development of practice knowledge. 
  • 1995 - establishment of first Knowledge Center in India, focused on provision of research and knowledge to client-facing consultants. 
  • Late 1990s - McKinsey develop a "personalisation" strategy for KM, with a focus on dialogue and knowledge sharing between individuals, and the development of knowledge sharing networks. Editors employed to convert client PowerPoint decks into reference documents with quality ratings.
That's where the case study stops. However, through this 20-year history we can see the genesis of the McKinsey KM organisation, with its 2000 knowledge professionals, its 6 global knowledge centers, and its practice-focused roles and networks.


"Our work is founded on a rigorous understanding of every client’s institutional context, sector dynamics, and macroeconomic environment. For this reason, we invest more than $600 million of our firm’s resources annually in knowledge development, learning and capability building. We study markets, trends, and emerging best practices, in every industry and region, locally and globally. Our investment in knowledge also helps advance the practice of management". 



Monday, 10 December 2018

KM - managing the container, or managing the content?

KM can be addressed in two ways - managing the container in which knowledge is carried (the people or the documents) or managing the contents held in that container.


Image from wikimedia
I blogged last week about "fuzzy statements" and how these need to be avoided if knowledge is to be transferred effectively from one person to another. One of the replies I received on LinkedIn asked whether taxonomy could help with fuzzy statements.

My answer was No - the fuzzy statement was an issue with the content of the knowledge, whether this was a piece of advice, a message on a forum, or a lesson in a database. The taxonomy did not affect the content, but categorised the container - the post, document or article.

I would like to explore that idea of content and container a little more.


Undocumented Knowledge

If we look at the knowledge held in people's heads, then managing the container equates to managing the heads; hiring the people, moving the people to tasks where they are needed, and categorising the people based on what they know.  Managing the content equates to setting up and facilitating the conversations through which people learn, and through which their knowledge - the content of their head - evolves.

Managing this content is the province of Knowledge Management, or at least that part of KM that covers Conversation.

Managing the heads themselves, and assuming these heads carry useful knowledge, comes more under the province of HR. This is old-style prehistoric KM - the idea that if you move knowledgeable people about you are managing knowledge.  This is true, but only to a very limited extent.

Managing the container:
  • Hiring
  • Manpower allocation
  • Succession planning
  • Expertise directory
  • Competence mapping

Managing the knowledge content:
  • Mentoring
  • Coaching
  • Community of practice discussions
  • Knowledge transfer conversations (Peer assist, knowledge exchange)
  • Training


Documented Knowledge

If we look at the knowledge held in documents, then managing the container equates to managing the documents themselves, and managing the content equates to the processes you put in place to ensure the content of those documents is useful, valuable and correct knowledge, written in such a way that it will be understandable to the reader.

Managing this content is the province of Knowledge Management, focusing on effective capture and update of documented knowledge, ensuring the contents of the document are valid and useful.


Managing the documents themselves, ensuring they are categorised and findable while assuming these documents carry useful knowledge, comes more under the province of Information Management.

Managing the container:
  • Taxonomy
  • Metadata
  • Search
  • Portals
  • Intranets

Managing the knowledge content:
  • Collaborative authoring (wikis, team knowledge capture, community knowledge bases)
  • Facilitated capture (interviews, lessons)
  • Validation
  • Correlation and comparison
  • Feedback
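
As a purely illustrative sketch (all field names and values below are invented), the same split for documented knowledge can be pictured as a simple record: container attributes that Information Management looks after, and content-quality attributes that Knowledge Management looks after.

```python
from dataclasses import dataclass, field

@dataclass
class KnowledgeDocument:
    # Container: categorisation and findability - Information Management's concern
    title: str
    taxonomy_tags: list[str] = field(default_factory=list)
    metadata: dict[str, str] = field(default_factory=dict)

    # Content: validity and usefulness of the knowledge itself - KM's concern
    author_contact: str = ""                 # so the reader can follow up with the source
    validated_by: str = ""                   # who checked the knowledge is correct and current
    fuzzy_statements_removed: bool = False   # quality-controlled for the reader

doc = KnowledgeDocument(
    title="Lessons from a refinery turnaround",
    taxonomy_tags=["operations", "turnaround"],
    metadata={"region": "EMEA"},
    author_contact="jane.doe@example.com",
    validated_by="Operations practice owner",
    fuzzy_statements_removed=True,
)
print(doc.title, doc.taxonomy_tags, doc.validated_by)
```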

Of course both the knowledge content and the knowledge containers need to be addressed in any complete and holistic KM endeavour, and KM needs to work with both IM and HR to this effect, but KM alone is responsible for ensuring the quality of content. If all we do in KM is worry about taxonomy and portals, then we are concentrating only on the containers and neglecting our primary role of looking after the content.

If we take shipping container traffic as an example, managing the containers is the job of the docks, the shipping authorities and the hauliers. Managing the content is the job of the exporters and importers. Together both parties ensure that quality goods reach the right market.  

Similarly, KM, IM and HR combine to ensure that quality knowledge reaches the people who need it, and for that to happen, KM's primary role is looking after the quality of the content in the heads and in the documents.


Friday, 7 December 2018

How to talk to the business about KM

Communicating KM to the business requires using business terms, not KM terms.



Talking to the manager of Publix in Naples

Knowledge Management is not an end in itself; it is a means to an end, and the end is a more efficient, effective and productive organisation.

The senior and middle managers in your organisation are not interested in Knowledge Management - only in what it can do for their part of the business.

Therefore when we talk with the business stakeholders, we need to talk in their terms, and address the things they are interested in. We need to use words they are familiar with, rather than KM jargon.

One approach is to rebadge KM in words they already know, like innovation, collaboration and learning. So instead of talking to them about Knowledge Management, we talk to them about the following:

  • Innovative products - bringing together the knowledge of our people, as well as external knowledge, to build new ways of doing things, new products, and new lines of business. Here you use KM processes such as business driven action learning.
  • Collaboration solutions - bringing together knowledge from different parts of the business to develop better ways of working - using the knowledge we already have, but which is scattered and siloed. Here you use KM processes such as communities of practice.
  • Empowering the front-line with knowledge - arming our customer-facing staff with the knowledge they need to close the deal, or delight the customer. Again communities of practice are important here, and effective knowledge bases.
  • Harmonising the way we work - comparing and learning from the disparate practices across the organisation, to find the ones that work best in given circumstances. Here you use KM processes such as Knowledge Exchange.
  • Learning from Experience - ensuring our projects and business activities do not repeat the mistakes of the past, but build on the successes. This is the whole area of project-based learning.
  • Stopping the brain-drain - addressing the risk of losing capability as knowledgeable people retire. Here you use KM processes such as Knowledge Retention.
  • Speeding up the learning curve - either for new-hires coming into an expanding business, or for new areas of the business (new markets, new products, new geographies). This will require a combination of many of the KM approaches above.

Address these business issues one by one, starting with the most urgent and bringing in KM solutions as you go, and pretty soon you will have a complete Knowledge Management Framework in place!

Thursday, 6 December 2018

Should you allow people to be anonymous in company online forums?

Is anonymity a good thing in online organisational (in-company) knowledge sharing forums? I suggest it is not, and my reasoning is below.


Public domain image from SVG
When you first set up knowledge sharing forums, it can be tempting to allow people to contribute anonymously, to reduce their fear of exposure. But is this a good idea?

Please note I am not talking about public forums, where people may want to talk about personal problems - relationship problems, abuse, addiction - which they do not necessarily want their family and neighbours to know about. Nor am I talking about anonymous activism, or Wikileaks. I am talking about knowledge-sharing communities of practice as part of an organizational Knowledge Management framework.

There are arguments both for and against anonymity, so let's look at those first.

Arguments for anonymity

  • In a toxic culture, where knowledge is power, it can be a risky act to challenge the status quo. To ban anonymous comments is to remove the possibility of honesty. An anonymous forum creates a safe space for knowledge sharing.
  • In a non-Western culture, where admitting mistakes is not acceptable, it can be very difficult for people to admit they don't know, and to ask for help. Anonymity again gives a safe space for asking.
Arguments against anonymity
  • People are more likely to share positive knowledge if they get credit for it (see my blog post on keeping the name with the knowledge).
  • People are more likely to use the knowledge if they trust it, and if they trust the source. I remember, when testing an anonymous knowledge asset in an organisation, how people responded "Why should we trust this, if we don't know where it comes from?"
  • It is very difficult to learn from the written word. Most effective knowledge systems allow you to find the contributor of a lesson, a good practice or a document, and to speak with them to learn more. With anonymity, this is not possible.
  • If the culture is difficult, toxic, or intolerant of mistakes, then an anonymous forum acknowledges publicly that you have to be anonymous to share knowledge, and so to an extent perpetuates the culture. Conversely, if people can see knowledge being shared openly by brave souls, and those brave souls being praised and rewarded for it, then you have the potential to change the culture.

That last one is the clincher for me.

If you need to be anonymous to share knowledge in your organisation, something is badly wrong. Work with the culture, sure - for example by providing named individuals who can share your knowledge for you if you are not brave enough, or by offering alternative safe spaces where knowledge can be discussed and shared without anonymity - but don't reinforce a bad culture.

Instead, seek to influence it; seek to change it.



Wednesday, 5 December 2018

The curse of knowledge and the danger of fuzzy statements

Fuzzy statements in lessons learned are very common, and are the result of "the curse of knowledge".


Fuzzy Monster
Clip art courtesy of DailyClipArt.net

I blogged yesterday about Statements of the Blindingly Obvious, and how you often find these in explicit knowledge bases and lessons learned systems, as a by-product of the "curse of knowledge".

There is a second way in which this curse strikes, and that is what I call "fuzzy statements".

It's another example of somebody writing something down as a way of passing on what they have learned, and writing it in such a way that it is obvious to them what it means, but carries very little information for the reader.

A fuzzy statement is one built around an unqualified adjective, for example:
  • Set up a small, well qualified team...(How small? 2 people? 20 people? How well qualified? University professors? Company experts? Graduates?)
  • Start the study early....(How early? Day 1 of the project? Day 10? After the scope has been defined?)
  • A tighter approach to quality is needed.... (Tighter than what? How tight should it be?)
You can see, in each case, the writer has something to say about team size, schedule or quality, but hasn't really said enough for the reader to understand what to do, other than in a generic "fuzzy" way, using adjectives like "small, well, early, tighter" which need to be quantified.

In each case, the facilitator of the session or the validator of the knowledge base needs to ask additional questions. How small? How well qualified? How early? How tight?

Imagine if I tried to teach you how to bake a particular cake, and told you "Select the right ingredients, put them in a large enough bowl. Make sure the oven is hotter". You would need to ask more questions in order to be able to understand this recipe.

Again, it comes back to Quality Control.

Any lessons management system or knowledge base suffers from Garbage In, Garbage Out, and the unfortunate effect of the Curse of Knowledge is that people's first attempt to communicate knowledge is often, as far as the reader is concerned, useless garbage.
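
One small way to support that quality control, sketched below purely as an illustration (the word list and the sample lesson are invented), is to screen lesson text automatically for unqualified adjectives, so the facilitator or validator knows which statements to challenge:

```python
import re

# Illustrative only: a crude screen for "fuzzy" unqualified adjectives in
# lesson text, prompting the facilitator to ask "how small? how early?".
FUZZY_WORDS = {"small", "large", "early", "late", "tight", "tighter",
               "adequate", "appropriate", "sufficient", "careful", "right"}

def find_fuzzy_words(lesson_text: str) -> list[str]:
    """Return the fuzzy words found, so a validator can challenge each one."""
    words = re.findall(r"[a-z]+", lesson_text.lower())
    return sorted(set(words) & FUZZY_WORDS)

lesson = "Set up a small, well qualified team and start the study early."
print(find_fuzzy_words(lesson))   # ['early', 'small']
```

A screen like this does not fix the lesson; it simply flags where the human follow-up questions are needed.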

Apply quality control to your lessons, and de-fuzz the statements.

Tuesday, 4 December 2018

The curse of knowledge, and stating the obvious

The curse of knowledge is the cognitive bias that leads to your Lesson Database being full of "statements of the obvious".



Obvious sign is obvious.

There is an interesting exercise you can do to show how difficult it is to transfer knowledge.

This is Elizabeth Newton's tapper-listener exercise from 1990.

Form participants into pairs. One member is the tapper; the other is the listener. The tapper picks a song from a list of well-known songs and taps out its rhythm to the listener. The tapper then predicts how likely it is that the listener will correctly guess the song from the tapping. Finally, the listener guesses the song.

Although tappers predicted that listeners would be right 50% of the time, listeners were actually right less than 3% of the time.

The reason for the gap between the two figures (50% and 3%) is that, to the tapper, the answer is obvious. To the listener, it isn't.

This is the "curse of knowledge".

Once we know something—say, the melody of a song—we find it hard to imagine not knowing it. Our knowledge has “cursed” us. We have difficulty sharing it with others, because we can’t readily re-create their state of mind, and we assume that what is clear to us, is clear to them.

Transferring knowledge through the written word (for example in lessons learned, or in online knowledge bases) suffers from the same problem as transferring a song by tapping. People THINK that what they have written conveys knowledge, because they can't put themselves in the mind of people who don't already have that knowledge.

Just because they understand their own explanations, that does not mean those explanations are clear to the reader.

This effect can be seen in written knowledge bases and lessons databases, and often appears as Statements of the Blindingly Obvious (SOTBOs).

These are statements that nobody will disagree with, but which obviously carry some more subtle import for the writer which the reader cannot discern. They include statements like:
  • "It takes time to build a relationship with the client" (Really? I thought it was instantaneous). 
  • "A task like this will require careful planning". (Really? I thought careless planning would suffice)
  • "Make sure you have the right people on the team." (Really? I thought we could get away with having the wrong people)
  • "Ensure that communication and distribution of information is conducted effectively". (Really? I thought we would do it ineffectively instead)

The writer meant to convey something important through these messages, but failed completely. Why is this? Often because the writer had no help, no facilitation, and was not challenged on the emptiness of their statements.

In each case, any facilitator who had been involved in the capture of the knowledge, or any validator of the knowledge base, would ask supplementary questions:

  • How much time does it take? 
  • What would you need to do to make the planning careful enough? 
  • What are the right people for a job like this? 
  • What would ensure effective communication?
This further questioning is all part of knowledge quality assurance: filtering unhelpful material out of the knowledge base or lessons management system, and turning an unintelligible set of taps into a full tune.

Without this, people rapidly give up on the knowledge base as being "unhelpful", and full of SOTBOs.



