On 21 Jan 97 at 1:27, JC Howell wrote:
> In my mind (and experience) indicators are not necessarily "objective,"
> nor are they always "pointers to a desired level of performance." In
> theory they COULD be. But what usually happens is that indicators simply
> ARE. And that's okay.
Noted. However, I'm not sure how a bad experience can so quickly dismiss the
importance of the relationship between indicators and desired levels of
performance.
> What gets measured is what gets attention. Indicators and measures help
> direct attention toward certain areas. It, therefore, follows that these
> areas should improve, or at least show some significant change.
> Indicators and measures rarely are drawn from underlying systems, though.
> The underlying systems are most often revealed or determined based on the
> data resulting from indicators. Often these systems are generally unknown
> prior to this. Designing a measurement and indicator system based on a
> number of underlying systems is a nice idea, but this presumes that those
> systems are the ones which actually influence performance. Often this is
> not the case.
Again, I think you are making a series of assumptions from a straightforward
comment on the relationship between indicators and a system.
> If this is true, then intentionally designing a measurement system in this
> manner is an arrogant approach that can lead to other unhealthy conditions
> as performance does not match expectations.
Again, major leap.
> I firmly hold that poor management is poor management and no amount of
> tools, tricks, fads, or whatever can help that. Only change on the part
> of management (managers) can influence this. If bad managers put together
> bad measurement systems that measure and reward the wrong things,
> performance will go downhill. The indicators are not bad indicators. They
> may be very good indicators ... of the wrong things.
Again, the assumption is that poor management has a relationship with bad
measurement systems.
> What has proven to be a more useful approach is the development of a
> measurement system after in-depth analysis of and reflection on the
> organizational purpose, direction, and strategy. Once this system has
> been developed, the most probable underlying systems are identified and
> interventions to influence these systems in desired directions are
> formulated. How these systems are responding is the subject of
> sub-measures.
This last paragraph is surprising. It reads counter to your original
argument.
I'm really surprised, for two reasons. First, my original comments came
out of some interesting comments on measurement. Second, while I respect
experience, I also understand the importance of looking at issues from a
variety of angles. I really get the feeling from your notes that your past
experience with indicators has been anything but helpful. In my past
experience, indicators have been very helpful and the process healthy.
How organizations use measurement as part of the learning process is very
important. The key is to make the process and outcomes meaningful,
constructive, and useful for everyone. At the end of the day, everyone,
including management, can really benefit from the experience.
So, those are some additional comments.
Cheers,
=============================================
Ethan J. Mings
Write to me at "thedesk@idirect.com" or visit our WWW Page at
"http://ourworld.compuserve.com/homepages/ethan_mings/emingsh.htm"
"Where organizational economics is about life, not theory"
--"Ethan J. Mings" <thedesk@idirect.com>
Learning-org -- An Internet Dialog on Learning Organizations For info: <rkarash@karash.com> -or- <http://world.std.com/~lo/>