Role of Evaluation in LO LO9775

Eric Bohlman (ebohlman@netcom.com)
Thu, 5 Sep 1996 17:18:40 -0700 (PDT)

Replying to LO9752 --

On 5 Sep 1996, Rol Fessenden wrote:

> Just to be pragmatic for a minute, as a manager, the whole point to
> measuring is with an intent to alter. In fact, as the saw goes, what gets
> measured gets done, so what we choose to measure in a very real way
> impacts how people will behave, and what they will emphasize. Measurement
> leading to change in this case is a positive.

But in many cases, the change isn't necessarily the change that was
intended. Suppose we have a technical support operation at a software
firm. If management chooses to measure the length of calls, the
representatives will get the message that they should minimize the
time they spend on the phone with customers, and they will in fact do
so. The problem is that they'll do it by such means as telling the
customer "try this, and if it doesn't work, call back," where "this"
is the first thing that comes to mind, rather than discussing several
possibilities with the customer. The effect is that even though each
individual call becomes shorter, it takes more calls to resolve a
problem, and since each time the customer calls back he reaches a new
rep and has to explain the problem all over again, the total time the
staff must spend on the phone to resolve a problem actually
*increases*.
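
As a rough back-of-the-envelope sketch (the figures here are made up
purely for illustration, not taken from any real support operation),
the arithmetic works out something like this:

  # Hypothetical figures: compare total staff phone time per resolved
  # problem under a "thorough call" approach versus calls rushed to
  # look good on a time-per-call metric.

  thorough_minutes_per_call = 15
  thorough_calls_per_problem = 1.2   # occasional follow-up call

  rushed_minutes_per_call = 8        # shorter calls...
  rushed_calls_per_problem = 3.0     # ...but more callbacks, each one
                                     # re-explained to a new rep

  thorough_total = thorough_minutes_per_call * thorough_calls_per_problem
  rushed_total = rushed_minutes_per_call * rushed_calls_per_problem

  print(f"Thorough: {thorough_total:.0f} staff-minutes per problem")  # 18
  print(f"Rushed:   {rushed_total:.0f} staff-minutes per problem")    # 24

Each individual call looks better on the metric, yet the total cost
per resolved problem goes up.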

The problem here is that the staff have optimized their behavior for a
specific characteristic of the measurement process (in this case, the
fact that time per call is being used as a surrogate marker for the
time to resolve a problem) rather than for the intended outcome.
Similarly, if you evaluate programmers on how many lines of code they
generate per day, they'll be disinclined to use any technique that
reduces the number of lines needed to perform a particular function (a
small illustration follows below). I call this "optimizing to the
instrument" or the "40-yard fallacy": professional football teams
measure prospects' "speed" by timing them in the 40-yard dash, and
there are plenty of specialized coaches who help potential rookies
decrease their times by working on their sprint-start technique, which
is not transferable to game situations.
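
To make the lines-of-code point concrete, here is a small sketch in
Python (hypothetical code written for this note, not drawn from any
actual project): both functions do exactly the same job, but a
lines-per-day metric rewards the first and penalizes the second.

  # Two equivalent ways to sum the squares of the even numbers in a
  # list. A lines-of-code count "scores" the verbose version higher,
  # even though the concise one is just as correct and arguably clearer.

  def sum_even_squares_verbose(numbers):
      total = 0
      for n in numbers:
          if n % 2 == 0:
              square = n * n
              total = total + square
      return total

  def sum_even_squares_concise(numbers):
      return sum(n * n for n in numbers if n % 2 == 0)

  # Both give the same answer: 2*2 + 4*4 = 20
  assert sum_even_squares_verbose([1, 2, 3, 4]) == 20
  assert sum_even_squares_concise([1, 2, 3, 4]) == 20

A metric that only counts lines steers programmers toward the first
style, which is exactly the "optimizing to the instrument" pattern.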

Sometimes the problem is that the measurements are made periodically even
though the process being measured has no natural periodicity. This leads
to phenomena like shipping partially finished output at the end of the
quarter and other less extreme, but nonetheless non-value-adding, behavior
intended to synchronize outcomes with the measurement process. In
_Relevance Regained_, H. Thomas Johnson points out that *continuous*
measurement of outcomes, the most "short-term" measurement of all,
frequently doesn't have the negative effects that we usually associate
with a "short-term" mentality.


-- 

Eric Bohlman <ebohlman@netcom.com>

Learning-org -- An Internet Dialog on Learning Organizations
For info: <rkarash@karash.com> -or- <http://world.std.com/~lo/>