A Safety Case LO2746

Sun, 10 Sep 1995 22:36:08 EDT

Replying to LO2304 --

[...This msg forwarded to learning-org by your host at Geof's

I am finally finding the time to respond to those who commented on my
original post "A Safety Case" - LO2304. Rather than send this back
through the entire LO list, I am sending to those who responded. (Rick
- if you want to send it to the LO list, feel free).

Responders were Rick Karash - LO2314 & LO2333, Barry Mallis - LO2320,
Andrew Moreno - LO2328 & LO2417, Jack Hirschfeld - LO2336, Carol Anne
Ogdin - LO2337, and Clyde Howell - LO2402.

Thanks to all of you for sharing your thoughts on this complex issue.

I have read/studied your comments, talked more with fellow colleagues
at work, and offer the following summary of new and modified thoughts.

My best understanding of organizational safety performance is
described by the speeding ticket analogy - as awareness goes down,
safety behavior degenerates (speeding) and the potential for accidents
goes up (the speeding ticket). When an accident occurs, management gets
excited, new safety programs are created, and all-hands meetings are
held to emphasize the need to improve safety performance. After a time
delay (during which no additional accidents occur - either because
people are acting more safely, or accidents get swept under the rug, or
the statistical phenomenon of "random accident clustering" has ended),
a perception develops that management has "solved" the safety problem
(which reinforces the traditional management response). Over time,
management attention and safety awareness decline, until the next
accident occurs. A balancing loop.
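The balancing loop above can be sketched as a toy simulation. This is only an illustration of the loop's shape - the decay rate, risk coefficient, and reset behavior are all illustrative assumptions, not measured values:

```python
import random

def simulate(months=60, seed=1):
    """Toy balancing loop: awareness fades, risk rises, accident resets awareness."""
    random.seed(seed)
    awareness = 1.0              # management/worker safety awareness, 0..1 (assumed scale)
    accidents = []
    for month in range(months):
        awareness *= 0.95        # attention fades while nothing happens
        p_accident = 0.3 * (1.0 - awareness)   # lower awareness -> higher accident risk
        if random.random() < p_accident:
            accidents.append(month)
            awareness = 1.0      # accident triggers new programs, all-hands meetings
    return accidents

print(simulate())  # months in which an "accident" occurred
```

Run repeatedly with different seeds and the same pattern appears: quiet stretches while awareness decays, then an accident, then a reset - the oscillation the analogy describes.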

With the better understanding this systems story provides, what are the
systemic leverage points for improving our management practices?

I mentioned that the previous contractor was almost obsessive about
safety. All meetings began with a discussion of safety. Everyone kept
to the crosswalks, even if it meant walking extra steps, etc. These
practices seemed to keep the ongoing safety awareness level up, perhaps
to the point that the potential for accidents was reduced. How do these
practices fit in the balancing loop model above? They don't seem to be
variables influenced by other variables in the system. Perhaps they
could be called "pulse generators" that are activated automatically on
a periodic basis whether they are needed or not. An analogy might be a
pressure supply that automatically opens and closes to top off a
compressed air tank used for inflating tires, whether the tank needs
the air or not.

Using the speeding ticket analogy, another example might be a "voice
reminder" about maintaining proper speed that activates every hour
while the car is on. So rather than being a variable in a feedback loop
(since it is not influenced by the system), these pulse generators are
fixed, exogenous input signals that influence the system variables.
Other organizational examples might be work standards, principles, etc.
Rick - if this makes sense (rather than just being senseless babble),
how would you reflect it in a system model? Could this "automatic pulse
generator" concept be used to identify system changes that would
sustain a level of performance, or keep it above a threshold?
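One way to picture the pulse generator is to add it to the toy loop above: a fixed periodic boost to awareness (say, every meeting opening with safety) that fires whether or not an accident has occurred. Again, every parameter here is an illustrative assumption:

```python
import random

def simulate_with_pulse(months=60, pulse_every=1, boost=0.3, seed=1):
    """Toy balancing loop plus an exogenous periodic 'pulse' of awareness."""
    random.seed(seed)
    awareness = 1.0
    accidents = []
    for month in range(months):
        awareness *= 0.95                      # attention fades
        if month % pulse_every == 0:           # pulse: exogenous, not feedback-driven
            awareness = min(1.0, awareness + boost)
        if random.random() < 0.3 * (1.0 - awareness):
            accidents.append(month)
            awareness = 1.0                    # accident resets attention
    return accidents
```

With a monthly pulse and these assumed numbers, awareness never decays far enough for the accident risk to build up - the pulse sustains performance above a threshold instead of waiting for an accident to restore it. Set `boost=0.0` and the original oscillating loop returns.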

Our discussions seemed to reflect a consensus that tabulating accidents
is the wrong means of measuring real safety performance, and that
there might be precursors that would provide more timely, effective
feedback on which to act. Examples discussed were periodic monitoring
of unsafe acts (or safe acts) and housekeeping. Perhaps ongoing
monitoring of safe/unsafe acts forms a balancing feedback loop similar
to a speedometer. With a speedometer you know where you stand in real
time relative to the point of exceeding the speed limit.
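A minimal sketch of such a "speedometer": instead of counting accidents (a lagging measure), track the ratio of safe acts seen in periodic field observations. The data, function name, and threshold below are all hypothetical, chosen only to show the idea:

```python
def safe_act_ratio(observations):
    """observations: list of (safe_count, unsafe_count) per observation round."""
    safe = sum(s for s, _ in observations)
    total = sum(s + u for s, u in observations)
    return safe / total if total else None

# Hypothetical weekly observation rounds: (safe acts, unsafe acts)
rounds = [(18, 2), (17, 3), (15, 5)]
ratio = safe_act_ratio(rounds)
if ratio is not None and ratio < 0.9:   # illustrative target threshold
    print(f"Warning: safe-act ratio {ratio:.0%} is below target")
```

The point is that the signal arrives continuously and before an accident, so the loop can balance on observed behavior rather than on injuries.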

As far as applying reinforcing loops to safety performance, perhaps
monitoring unsafe acts is a means of improving safety performance. I
agree with Clyde that focusing on unsafe acts rather than safe acts can
create negative results, especially with traditional approaches. But
perhaps focusing exclusively on safe acts (based on the defined safety
standards) limits the ability to improve. To repeat Rick's question in
his LO2333 - "What can we do in our organizations to improve the flow
of feedback about unsafe or just ineffective things so our systems can
be improved?" I am inclined to believe that focusing on unsafe acts
(and ineffective behaviors) is a key to a reinforcing, continuously
improving (learning) feedback loop. The key here is using an effective
approach that avoids the negative aspects. That's where the "behavior-
based safety program" Clyde mentioned may be of value. The program is
based on using trained safety observers (peer workers, similar to
Barry's suggestion of "safety leaders") for the ongoing safety
monitoring program. They observe work in the field, providing instant
peer feedback and reinforcement of good safety behavior and correcting
observed unsafe behavior. Using their field observations, over time
they also look at how to improve the safety standards on an ongoing
basis.

Regarding Jack's LO note on achieving a safe work environment, perhaps
the goal that allows for a self-amplifying (learning) loop could be
described in three dimensions:

* knowledge - knowing how to act safely
* attitude - they know how to act safely, but they must "want" to do it
* awareness - they know how, they want to, but they must remember and
pay attention to the details

Perhaps some means of measuring these dimensions periodically would
provide a feedback loop for ongoing improvements.

Regarding the discussion on checklists and the use of computers, I have
some first-hand experience on this one as the self-assessment lead for
the maintenance function. Our organization has four facilities,
physically separated and organizationally separated into facility
teams. Assessment "lines of inquiry" are specified to provide some
structure to ensure certain regulatory requirements are assessed.
Because our site has an E-Mail/network infrastructure, we have an easy
means of sharing information. Assessment checklists can be revised
(read that "improved") instantly, based on field feedback. Because
these checklists are available through the network (we have public
folders where we share information), checklist improvements are
instantly available to everyone. The primary value the public folder
assessment checklist brings is the ability to facilitate shared
learning quickly and painlessly - a real plus in my book. All said, the
checklists are still just plain text files in a checklist format - a
simple tool, as Andrew said in his LO2304. I guess the additional
technology of being able to share the checklist information (and
improve it on the spot) through the public folder network has been the
real learning key for an organization physically dispersed and
organizationally separated into facility-based teams. So, having a
centralized dbase (if that is what we can call this checklist) is OK
with me, as long as the users, improvers, and analyzers of the dbase
are the implementers (the ones assessing and being assessed). Perhaps
another way to say it is that shared learning implies some
centralization or focal point of information. Again, the key to me is
that there is no central staff doing the analyzing, improving, etc.
Rather, the centralized effort is the facilitation of shared learning
among the implementers.

In response to Clyde's assessment of our site in his LO2402, I would
summarize his description as being full of traits common to large,
hierarchical, bureaucratic organizations. That's why I believe better
government means less of it (has anyone ever considered redefining the
role of government as a facilitator of the states?)

Regarding Andrew's discussion of the CEO being accountable to the
customer and the board, etc., I believe that approach would be
ineffective in the bureaucratic relationship this site has with its
customer, a federal department (as mentioned above, I have formed an
opinion that government - at least our current government - can only be
improved by reducing it - but that's another story). Any attempts to do
so would get lost in the fuzziness of contractor-customer communication
(i.e., time delays, numerous organizational layers, incompetence or
lack of knowledge over assigned areas of responsibility).

Thanks again for your enlightening comments. I hope this summary was of
some value to you.