Monday, 8 November 2010

The role of analysis in knowledge management

The more I work with public sector organisations in knowledge management, the more I come across things that are unusual, if not unknown, within the private sector. In industrial knowledge management, the project teams tend to do their own analysis of lessons learned. They look at performance, they look at outcome against expectations, they identify the learning points, and they decide the actions needed to embed the learning. In the public sector, the analysis is often done by a different group.

Take for example the NATO Joint Analysis and Lessons Learned Centre (note how they link analysis and lessons learned into a single organisation). Their learning cycle is based on the Observe, Orient, Decide, Act (OODA) loop. Here individuals involved in an activity, maybe an exercise or maybe a military operation, will make a number of observations about things that went well, things that did not go so well, and ways in which performance mismatched expectation. The analysis of these observations will often be done by a separate team of analysts. The purpose of this analysis is to understand the root causes of the observations, to investigate the value of potential action plans, and to appreciate where lessons are being identified that could benefit the wider organisation (see the JALLC Analysis Handbook). The analysts can then make recommendations to senior people, who can make the decisions and decide the actions which complete the learning loop.

Similarly, many government organisations have analytical teams. DFID and the Foreign Office both have them, and their remit is at least partly to look at lessons from the application of overseas policy. They base these analyses on external assessments, sometimes on data they have gathered themselves, and on interviews with their own staff and with external people. In fact one of the key cornerstones of evidence-based policy is that the effects of policy should be evaluated through evidence, and there need to be one or more groups to analyse that evidence and to recommend lessons to be learned. As in the NATO example, the analysts can make recommendations to senior staff, who can make decisions about future policy.

So why doesn’t this happen in industry? Should there be analysts who look at the observations from the teams, pick out the generic learnings, and make recommendations for action?

I think it is partly because the situation is simpler in industry. Seldom are you looking at anything with the complexity of a military operation or a government policy. Instead you are looking at decisions made within a project, within a factory, or within a marketing and sales team. The situation is less complex, and the observations can usually be analysed by the team themselves, with the help of a facilitator.

Secondly, when the situation does become more complex, that’s when analysis is introduced. If there is a major accident, a big incident, or a serious environmental breach, then an investigation team is convened and analysis happens (see chapter 13 of the Lessons Learned Handbook).

However, I think there is an area where industry could learn from the public sector, and that would be to have a small analysis group trying to pick up the “weak signals”: looking at the lessons from across the organisation, looking at trends, looking at common reasons for failure or common repeat mistakes. By taking a high-level view, they may well pick up things that individual project teams would miss. Sometimes this happens; sometimes industry will undertake a portfolio review, or a “train wreck analysis”, looking at common reasons for failure, but this tends to be ad hoc and one-off. There is certainly a case for having a small group with this as their full-time remit.

Similarly, the public sector can learn from industry. High-level analysis and evaluation is not the only way of learning. It can form part of a learning loop, but it is a very long-cycle learning loop, which may take years for the actions to be identified and closed out. It is also a learning loop which involves many parties: the people who make the observations, the analysts themselves, the people who make the decisions, and the people who take the actions. That is four parties, and every time you pass accountability from one party to another, there is a chance that the ball is dropped. The public sector could also look at introducing a shorter learning loop, where lessons and recommendations are picked up from the implementation teams and the field staff and routed through to quick action. Here NATO has perhaps the best structure, through combining analysis and lessons learning into a single team.
