Learning and Conversing LO8882

Hays, Joe (HAYS@volpe1.dot.gov)
Mon, 05 Aug 96 10:57:00 EDT

Replying to LO8860 --

I thoroughly enjoyed John Snyder's piece on Effective Conversational
Practice and found it interesting. When I find a piece particularly
useful and informative, I frequently underline the important parts.
(Important parts are those I might want to respond to or to include in
some way in my own consulting and teaching.)

I underlined most of LO8860.

Most would probably consider me "a lurker": I seldom contribute to the
list, and have even found it difficult to do more than skim most traffic.
I have tried to keep up with the topics that are of most interest and
relevance to "where I am at the moment." Effective Conversational
Practice is one of those areas, as I have been exploring tools for
improving communication and learning for some time.

This piece suggested "the ladder of inference" as a useful tool, though
the ladder itself was not mentioned by name. I expect much has already
been exchanged on the tool and its use. The ladder, however, is a
powerful metaphor for helping individuals understand how communications
can go awry with little outward manifestation of a problem (except,
perhaps, an inexplicable escalation of symptoms, which further
exacerbates the problem).

While John's piece is full of distinct topics and ideas worthy of further
exploration--and I'm sure a number of you will respond--what I want to
write about is an observation I made regarding my own learning behavior,
which came about as a direct result of reading John's writing. The reason
I choose to write right now, taking up my time and, potentially, yours, is
that my personal observation suggests something about how people
learn--and fail to--from information and opportunities in the environment.

Now, (1) back to my underlining, (2) considering the fact that I barely
skim many contributions, and (3) building on John's notion of "inside the
frame" thinking, my tendency is to notice, apprehend, and ingest
information that is on topics I understand and am interested in. This
natural, self-organizing tendency takes me further along paths of
interest, which is fine, but leads me further and further from alternative
paths.

There are at least two negative consequences of my learning behavior.
First, I exclude many potentially valuable topics, individuals, and
opportunities. Second, I confirm what I already know (or think I know)
and close myself to what John calls "counterexamples, disconfirming data,
and reasoned argument." The price of the first is abstaining from
learning widely, which could be interesting in its own right and have
future benefit. Learning widely could also, feasibly, help me understand
my initially preferred subject better.

The second case is just as bad, or worse. By ignoring--consciously or
unconsciously--data that call my "frame" (paradigm?) into question, my
beliefs and understanding crystallize even further, preventing me from
learning. But this gets back to John's initial point: "why should I
bother checking anything out if I believe it already to be true?"

Self-evident? Nothing new, here?

Perhaps I need a "beliefs test." A beliefs test would not be a tool to
test truths, per se, or to call someone's veracity into question; rather,
it would be a learning accountability tool. My beliefs test would have
criteria and reminders suggesting that I may be behaving consistently
with a set of beliefs--beliefs I may or may not be aware of--that need
testing out, or at least surfacing for consideration. The test could be
prefaced with instruction about the cues or clues that signal one even
needs to pull it out. For example, I "feel" when I am closing down or
shutting out in a conversation. Knowing this about myself, whenever I
feel this way I could require myself to subject my behavior and thinking
"to the test."

Simply admitting to others that I want to take a look at my own thinking
might open lines of communication. Perhaps, they would also be willing.
Organizationally, a "beliefs test" or even a familiar procedure enacted
under certain circumstances could really enhance learning.

Concerns about this whole business include: (1) am I willing to admit
being less than completely and absolutely correct? and (2) am I placing
myself in a vulnerable position by admitting the possibility of
uncertainty? This all comes down to trust...unfortunately. In
environments of trust, you have higher levels of open exchange anyway.
In low-trust situations, attempts to BECOME open are precarious. Maybe
there an institutionalized procedure (a beliefs test) would be even more
useful. Trust might just grow out of open communication, risk-taking,
and learning.

J. Martin Hays, Ed.D.
Change Management Division
USDOT/RSPA The Volpe Center
55 Broadway, Kendall Square
Cambridge, Massachusetts 02142
(617) 494-2095
hays@volpe1.dot.gov


Learning-org -- An Internet Dialog on Learning Organizations For info: <rkarash@karash.com> -or- <http://world.std.com/~lo/>