Johanna,
Thank you for so many ideas and suggestions. In substance we're in
agreement. A couple of comments that thicken the plot...
> >The input we need to do our job is "knowledge" plain and simple. Our
> >engineers must know all types of stuff, from the basics of computer
> >architecture to the details of wide-area connectivity. In many ways, it's
> >almost like being a doctor, and understanding the human body. So how do
> >you supply engineers with knowledge? How do you ensure that the knowledge
> >they get is accurate?
>
> 1. Co-location. I need to find the study, but I think one of the big
> consulting firms did a study of software product development productivity,
> and they found that sitting near, where near is on the same floor within
> 20 yards (?) of the people you needed information from was the largest
> differentiator of productivity. If you were less than a "coffee break" or
> "bathroom break" from the people you needed to work with, you would be
> successful.
I like this, and feel it is actually quite critical. In NTS (Novell
Technical Services) we have offices all around the world. Unfortunately
not everyone can be close together. This really complicates the issue.
The other problem, which I actually think is more important, is that our
performance measurements actually disincentivize learning. When you're on
the phone with a customer, and you've got 20 customers in the queue, and
they've been waiting 50 minutes to talk to someone, you're much more
likely to simply find a "plausible answer" and get on to the next
customer. This tactic worked well for a while, but now customers won't
hang up until they've verified that the solution works. This has increased
the length of our queues, which perpetuates the problem. It is easy for
me to see the "cause and effect" over time, but the managers see something
totally different. They feel like they need to focus more on "getting
through the queue faster," when in reality the long-term solution, I
believe, is actually getting through the queue slower. If we valued
learning, then the people would internalize the stuff they learn, and
integrate into their daily work, which, over time, would increase the
speed at which they operate. Right now we're in the "memorization"
mindset: Memorize a problem/solution set, and hope that it occurs often
enough to help you move through the queue quickly.
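The feedback loop I'm describing can be sketched as a toy queue model. The numbers below are invented purely for illustration (they're not actual NTS figures): a "plausible answer" policy clears calls quickly but sends a large fraction of customers back into the queue, while a thorough policy is slower per call but the learning compounds into speed.

```python
def simulate(days, handle_min, callback_rate, learning=0.0,
             arrivals_per_day=40, minutes_per_day=270):
    """Toy model of one support engineer's queue.

    handle_min    -- minutes spent per call
    callback_rate -- fraction of "answered" customers who call back
                     because the plausible answer didn't really work
    learning      -- per-day fractional cut in handle time as thorough
                     work gets internalized
    Returns the backlog (customers still waiting) after `days` days.
    """
    backlog, t = 0, float(handle_min)
    for _ in range(days):
        backlog += arrivals_per_day                 # new calls arrive
        handled = min(backlog, int(minutes_per_day / t))
        backlog -= handled
        backlog += int(handled * callback_rate)     # unsolved customers re-queue
        t = max(1.0, t * (1 - learning))            # learning compounds into speed
    return backlog

# Fast but sloppy: 5 min/call, half the "answers" bounce back, no learning.
sloppy = simulate(days=120, handle_min=5, callback_rate=0.5)

# Slow but thorough: 10 min/call, few callbacks, 2%/day learning.
thorough = simulate(days=120, handle_min=10, callback_rate=0.05, learning=0.02)
```

Under these made-up numbers the sloppy policy's backlog grows without bound, while the thorough policy falls behind at first and then drains its queue as the engineer speeds up, which is exactly the "getting through the queue slower" point.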
I've raised this issue with Senior Management a number of times, in a
number of ways, and each time I get my head handed to me on a silver
platter. There's a lot of scar tissue around my neck...
> 2. Making communications public. When specs and discussions happen in a
> tool like email (to public mailing lists, not chosen groups of people) or
> like Notes, where people have easy access to it, the general level of
> understanding about the product goes up.
Now this is a touchy issue for a lot of companies. Managers often say
something like, "You mean you want me to widely distribute this
information? We'll lose control if we do." Duh? Management shouldn't be
about control, as that's more descriptive of tyranny. Management should be
about facilitation and empowerment, and information seems to be pretty
important to both those concepts. As a quality manager I *want* to lose
control of the quality system; continuous improvement is a participative
activity; no one should *control* it.
> 3. Lunch room. I am convinced that when people eat lunch together on a
> regular basis, they improve the level of communications about the product.
This is true up to a point, but once the self-esteem of an organization
reaches a certain level, people stop talking about work at lunch and
focus on their weekend, sports, or news. I've watched this happen over and
over again. I think the sign of whether an organization is still
"breathing" or not is lunchtime conversation.
> You'll notice that all of these are informal communications vehicles. I
> agree that formal specs are the Right Way to do product development, but I
> also know how infrequently they get written and updated. I don't think
> it's reasonable to expect people to update specs on a continual basis if
> the product is driven by time to market.
And yet there's a huge assumption in the industry that competitiveness is
largely determined by time to market. There's a nice catch-22. I read in
Fast Company about the folks who write the software for the Space
Shuttle. They're able to achieve a phenomenal level of error-free code. But
it costs a fortune to develop, and the normal time-to-market pressures
aren't as compelling.
Speed is a big issue today. And getting new software out the door is a
pressure-filled activity.
> >How do you build processes that allow everyone to
> >produce similar output when the output is dependent on the knowledge
> >possessed by the person doing the work?
>
> Rethink what you want out of the process. Do you want to measure the
> process or the result? What makes business sense to you? Either I don't
> understand your questions, or you're trying to measure people's output in
> a way that can be very damaging. I'm in the middle of reading Austin's
> _Measuring and Managing Performance in Organizations_ (Dorset House), and
> it's really opened my eyes to thinking about *what* I want to measure.
Here's my point: As a support engineer I may be able to take 20 calls in a
4 1/2 hour period, while my neighbor may only be able to take 4 calls in
the same period of time. The difference isn't in the process; it's in the
knowledge each person possesses. In other words, I don't see how
changing "processes" improves my business. Learning is the only way we can
really increase performance across the entire organization. (After all,
there are only so many ways to take a phone call.)
> >How can customers provide
> >meaningful feedback when they don't always understand the issues they're
> >involved with, or whether the solution to a problem provided by one of our
> >engineers really solves the problem?
>
> Customers always provide _meaningful to them_ feedback. Maybe a better
> question is what about that feedback does not provide information to you?
> Can you provide them a tool to give you better feedback from them, both in
> the problem description and problem resolution ends of things?
>
> >How do you measure performance in a knowledge-centered environment? How do
> >you define quality? Is quality the speed of service? Or is the accuracy of
> >the service? Or is it both? How do you measure the accuracy of the
> >service? We tend to measure speed because it is much more
> >quantifiable...and quality, well, shit, that's just too hard to measure.
>
> Speed is one axis that you should measure -- it's important to your
> customers. What else is important to them? Accuracy is important, so is
> knowing where things are in the service chain. Naomi Karten,
> http://www.nkarten.com, has done some work in this area, and I highly
> recommend the Austin book if measurement is important to you.
I'll check out your references. They look very interesting.
Thanks for the thought-provoking comments.
--
Ben Compton
The Accidental Learning Group
Improving Business through Science and Art
Work: (801) 222-6178
bcompton@geocities.com
http://www.e-ad.com/ben/BEN.HTM

Learning-org -- An Internet Dialog on Learning Organizations
For info: <rkarash@karash.com> -or- <http://world.std.com/~lo/>