Michael:
Interesting premise. However, when we use the words "study" and
"research," it is important to hold the findings pretty close to the
candle, and in some cases, unfortunately, the candle burns the findings.
I did download your paper and have given it some thought. I like the
excellent use of qualitative research tools. Unfortunately, in its
current form, you still have an untested hypothesis. Your hypothesis was:
>"Is there something about the language practitioners use that is
>different from the language used in academic papers and journals, and
>if so, what might it be?"
Current White Paper Weaknesses
1) You take three cross-sectional slices and make comparisons between
them. Unfortunately, you looked at three different slices of time. In
the field of health information, where I work, one could never get away
with making comparisons between data from three different time frames.
For example, what comparisons could be made if I took smoking prevalence
rates from the early 1960s and compared them to smoking attitudes from
data in the 1990s? I would likely conclude that people all believe
smoking is bad for them but, holy cow, half of them smoke! (certainly
not a valid conclusion). In the same way, it is difficult to compare
language in January 1996 from one source with language used 12 months
later, from another source.
2) As best I can tell, you make no statistical comparisons of the
occurrences between data sources. Differences in frequencies tell us
little about significance. For example, in a recent research project we
discovered a huge difference between the binge drinking rates of
California college students and students in the rest of the nation. We
thought, wow, this is remarkable. Then we did the statistics (a multiple
regression) and found that all of the difference in the rates was
attributable to age and commuter status. That is, California college
kids living in dorms binge drank as much as the rest of the nation, but
older students, married and living off campus, basically did not binge.
Therefore, the differences you found don't mean anything unless you put
them in context.
Perhaps the difference is not "academic versus practitioner" but
academic versus business sector (all technology LOs use different
language but all textile and petroleum LOs use the same language), or
varies by education (all Master's-trained LO practitioners use the same
language and all self-proclaimed LO consultants use different language),
or there are probably a hundred other combinations that explain the
differences.
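To make the point in 2) concrete, here is a minimal sketch of the kind
of test meant, written in Python with invented counts (these are not
figures from the paper): a 2x2 chi-square comparison of how often a
term appears in documents from two sources. A raw gap in frequencies
only matters if a test like this says it is unlikely to be chance.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table:
        [[a, b],    rows: source 1 / source 2
         [c, d]]    cols: term present / term absent
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Hypothetical counts, invented for illustration only:
# source 1 (say, journals): term appears in 30 of 1000 documents;
# source 2 (say, the listserv): term appears in 55 of 1000 documents.
stat = chi_square_2x2(30, 970, 55, 945)

# The critical value for 1 degree of freedom at p = 0.05 is 3.84;
# a statistic above it suggests the frequency difference is not chance.
significant = stat > 3.84
```

Note that even a "significant" result here only says the frequencies
differ; it does not rule out the confounders described above (sector,
education, and so on), which is what the regression step is for.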
3) Your hypothesis assumes that there is some inherent relationship
between the journals, papers, and listserv you identified. That
assumption might be nothing more than that. First, do the journals you
identified fairly represent the academic spawning ground for new
practices? As you know, there are hundreds of journals, and when you add
quasi-academic trade journals, the numbers go even higher. Are the
journals you selected representative? How do you know? Ditto for the MIT
white papers. Next, is this list representative of those actually
practicing the tenets of LO, or does it represent a very biased subset
of LO practitioners? And finally, is there some overlap between the
journals and papers and this list? Failure to establish these
relationships would be like me asking a person in Los Angeles about the
language and concepts of an innovation that was introduced in Memphis,
Tenn. What are the chances that the language would be the same?
If we are about making systemic improvements in business, evaluation is
a critical skill for us to possess. Otherwise we are all snake oil
vendors. This just happens to be a good example of claiming the
effectiveness of snake oil using an impressive methodology but having
very little foundation for the assertions.
Mark Fulop, MPH
fulop@mail.sdsu.edu
San Diego, CA
Learning-org -- An Internet Dialog on Learning Organizations For info: <rkarash@karash.com> -or- <http://world.std.com/~lo/>