Thursday, 6 October 2011
In knowledge audits, we seek to measure the status of knowledge within an organisation, to see whether it is at risk and, if so, to what degree, so that we can prioritise our KM efforts.
The problem begins with the “measurement”. How can you “measure” the state of knowledge? Knowledge is not something that can be counted, or weighed, or measured.
All our efforts so far have had to fall back on a qualitative analysis of knowledge. We ask for marks out of 10, or marks out of 5, for things like the in-house level of knowledge, the level of documentation, the spread of knowledge within the company, the maturity of the topic, and so on.
It is all very subjective, even when you give guidance in the form of descriptions of what levels 1 to 5 “look like”, but at this stage I think this is the best we can do. We bolster it by using the same person to run multiple audits and to sense-check the results, but the underlying subjectivity remains.
You can argue that if all we are looking to do is to rank knowledge topics for attention, and to provide a relative baseline to show improvement and to create a dashboard, then subjectivity is OK so long as it is consistent.
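To make the ranking idea concrete, here is a minimal sketch in Python of how such subjective scores might be combined and ranked. The dimension names, weights, and topic names are illustrative assumptions of mine, not part of any audit method described above; the only point is that consistent subjective scores are enough to produce a relative ordering.

```python
# Sketch: combine subjective 1-5 audit scores per knowledge topic
# into a single score, then rank topics for KM attention.
# Dimensions and weights below are hypothetical examples.

DIMENSIONS = ["in_house_level", "documentation", "spread", "maturity"]


def priority_score(scores, weights=None):
    """Weighted average of the 1-5 dimension scores (lower = more at risk)."""
    weights = weights or {d: 1.0 for d in DIMENSIONS}
    total = sum(scores[d] * weights[d] for d in DIMENSIONS)
    return total / sum(weights[d] for d in DIMENSIONS)


def rank_topics(audit):
    """Return topic names ordered so the most at-risk topics come first."""
    return sorted(audit, key=lambda topic: priority_score(audit[topic]))


# Hypothetical audit data for two made-up topics:
audit = {
    "Well integrity": {"in_house_level": 2, "documentation": 1,
                       "spread": 2, "maturity": 4},
    "Project planning": {"in_house_level": 4, "documentation": 3,
                         "spread": 4, "maturity": 5},
}

ranked = rank_topics(audit)  # "Well integrity" ranks first (score 2.25 vs 4.0)
```

Even though each individual mark is subjective, the ranking is stable as long as the same assessor applies the same yardstick across topics, which is exactly the consistency argument above.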
But does anyone else have a more objective approach to the measurement of knowledge, as part of an audit? Suggestions welcome.