
I was recently reading a publication from some old colleagues at BP, entitled "Development of an Active Global Lessons Learned Database". You can order this paper from here - its reference number is SPE 64529, and it was published in 2000. It's a great overview of the sort of technology you need to support a Lessons Learned program, and it also contains a set of success stories of knowledge transfer, plus one case where knowledge transfer would have been possible, but didn't happen. There are some technical terms in there, but you can follow the principles.
"The West of Shetland (WoS) team received a pushed
lesson from the Southern North Sea (SNS), team concerning the incompatibility
of a double shoulder connection. This identified a “new” WoS risk since they
planned to run similar connections. Mitigating actions were put in place to
significantly reduce the risk of twist off and the operation was performed with
no lost time. The power in this example is that the WoS team would not normally
consider SNS a relevant analogue due to the different type of operations,
deepwater drilling from semi submersibles versus jack up drilling in shallow
water. Based on the WoS profile stating what equipment they were interested in,
they were in fact comparable, and the relevant SNS lesson was pushed to the WoS
team".
"The SNS team have validated
over $3,500,000 worth of lessons in their ongoing operations and are using them
to demonstrate an auditable continuous improvement process to add value to
future planning. An unexpected bonus that they have observed is that through
using LINK they have built stronger lines of communication between rigsite and
office based staff. In their team the wellsite supervisors and engineers can
now see that their ideas for improvements are being used to do things better in
future and not being lost in the system".
"An Alaskan Shared Services
Drilling operation suffered a High Potential Incident where a 30lb derrick pin
dropped 85feet to the rig floor during a rig move. It narrowly missed a
roughneck tearing the pocket of his coveralls. The team involved entered a
safety bulletin into LINK and put in place mitigating actions to ensure repeat
occurrence could not happen in any other BP Amoco Alaskan operation. The safety
bulletin was pushed out to all teams globally to ensure that mitigating actions
could be put in place at all relevant operations".
"The Valhall team in Norway
drilled a long horizontal chalk section using a rotary steerable tool, which
was a new approach compared to their normal practice of using a PDM. The
section was drilled significantly quicker than normal. A wiper trip with a hole
opener was deemed necessary prior to running the production liner. The same
stiff BHA that had been utilised on previous wells was used because of the
requirement to get the 1600 metre horizontal liner to bottom. However, the BHA
became stuck in the chalk section. The BHA was pulled free after acidising, but
the stuck pipe incident made the wiper trip 24 hours longer than necessary and
cost $120,000 in lost time. After the incident, the team accessed other teams’
lessons using the same rotary steerable system through the search engine of
LINK. They found 11 recent lessons with two directly relevant. With hindsight
the team realised that if they had known about the other teams recent lessons
they would have modified their risk picture related to the BHA design. This
would have significantly reduced the risk of stuck pipe occurring".
4 comments:
Hi Nick, like the post. I agree with your views... My PhD lessons learned research has taken me down the safety lessons learned connection (safety culture, just culture etc). On another well known incident, I recall the review of the BP Deepwater Horizon accident investigation revealed how lessons learned from previous “well control event incidents” and “lines of communication” were not acknowledged or addressed, and that this was a contributing cause of the failure (BP 2010, Deepwater Horizon Accident Investigation Report; Cleveland, C 2011, Macondo: The Gulf Oil Disaster, Encyclopedia of Earth, http://www.eoearth.org/article/Macondo:_The_Gulf_Oil_Disaster?topic=64403).
I understand that NASA uses the BP Deepwater Horizon incident as a lessons learned case study paying particular attention to communication deficiencies around government oversight, disregard of data, testing, changes to process, safety culture and lessons learned from previous incidents (NASA 2011, The Deepwater Horizon Accident: Lessons for NASA, Academy of Program/Project & Engineering Leadership, http://www.nasa.gov/pdf/592629main_BP_Case_Study_29AUG2011_FINAL.pdf).
I would appreciate your views on how high reliability organisations operate and their connection with organisational learning through addressing safety problems. They seem to have flexible and informed reporting systems with a strong commitment to a just safety culture. How do we transfer this approach to safety into operational and project management lessons learned?
Regards, Stephen
http://www.pmlessonslearned.info/
Hi Stephen
that's a big question, and I don't really have time to answer it just now, I'm afraid, as I am working overseas with a client and a deadline! Can I park the question until I can give it the thought it deserves?
Thanks
Thanks Nick, look forward to an opportunity to catch up with you.
Regards, Stephen
Stephen, sorry for the delay. There are two good books I would recommend for a discussion of high reliability organisations and safety - one is Flirting with Disaster - http://www.amazon.co.uk/Flirting-Disaster-Accidents-Rarely-Accidental/dp/1402761082 - and the other is Managing the Unexpected - http://www.amazon.co.uk/Managing-Unexpected-Resilient-Performance-Uncertainty/dp/0787996491.
There is too much to summarise here, but some things to consider are:
a) Building a resilient system to start with (for example, the loss of the Herald of Free Enterprise may have been partly down to a system which relied on a crew member warning the captain if the bow doors were open, rather than confirming that they were shut. The crewman fell asleep, the captain heard no reason not to set sail, and the ship sank)
b) Continual awareness and discussion of risks
c) Continual, reinforced willingness to stop operations the moment things do not go to plan (for example the aircraft carrier crewman working on the flight deck who found a spanner missing from his toolbelt, and who closed down the entire operation until it was found. And was praised and rewarded for doing so).
A high reliability organisation seeks to learn from every near miss or potential incident, and to avoid the creeping complacency that allows people to live with errors and make-dos.