Normal Accidents LO12137 -Comments

JOE_PODOLSKY@HP-PaloAlto-om4.om.hp.com
Wed, 22 Jan 97 16:36:20 -0800

Here's a long comment on Jottings #68, but it's well worth the read.

Joe
______________________________ Forward Header __________________________________
Subject: Simplicity and Complexity
Author: BENJAMIN LLOYD at HP-MountainView,om1
Date: 1/21/97 2:51 PM

Joe,

Your jotting brought to mind the work of several scientists who deal
with simplicity, complexity, and their applicability to life. I
apologize for the length and abstruse nature of this discussion: I just
got on a roll, and couldn't stop (I need an editor).

The first is Murray Gell-Mann, who won the Nobel prize in physics 25
or 30 years ago, and helped to start the Santa Fe Institute in the
mid-80's. He won the prize and created his reputation for
theorizing the existence of quarks as the fundamental building blocks
of matter. As such, his efforts were reductionist to an extreme. In
the early-to-mid 70's, however, he expressed the opinion that the
study of complexity represents the future of physics: we will (he
believes) eventually develop the small set of mathematical principles
that describe the "laws" of physical behavior in our universe (though
they may not apply in any other universe). However, those laws will
not be sufficient to describe all events in the universe (kind of a
Gödelian result). In fact, they will be unable to approximate any of
the most interesting events.

Gell-Mann feels, therefore, that there is need to study both the
simple (exemplified by the physical laws, such as general relativity,
quantum gravity, etc.) and the complex. He calls this study
"plectics," a term with a rather forced etymology, but one, given the
success of his other invented term, the "quark," we might find
ourselves using in the future. A basic idea of plectics (as far as I
understand it) is that it is not sufficient to try to explain all
significant physical effects (including biological, psychological,
technological) starting only from a simple set of laws/principles
(from the ground up, if you will). One reason for this is that as
soon as some randomness is added to the system (which exists even at
the most basic quantum level), the result is unpredictable (complex).
At this point, it becomes necessary to apply chaos or complexity
theory.

As an aside, the point of balance between stability predicted by basic
laws and complete chaos is an example of homeostasis in complex
systems. Several prominent thinkers in a number of fields, not limited
to the sciences, believe that optimal "progress" is achieved at that
homeostatic point. In fact, the theoretical biologist Stuart Kauffman
postulates that biological (and human) evolution (in a neo-Darwinian
sense) works best when we define "fittest" (as in survival of the) as
those organisms that thrive at that point of homeostasis. In the
business world, David Whyte, in his book "The Heart Aroused," suggests
that success in business is achieved by organizations and individuals
walking the knife edge between stability and chaos.

With these views in mind, it's easy to see how we might arrive at the
unspoken expectation we have of an engineer: walk the path between
stability and chaos, and create views of parts of the chaos that
convert them to stability. In other words, humanity progresses
(evolves) by subduing the wild (another image popular over the last
thousand years, and particularly close to home here in the American
West). In so doing, however, we are (literally, in some cases)
playing with fire. Our attempt to control fire, to box it to make it
do our bidding, is one of the oldest examples both of humanity's
conceit and of our desire and willingness to live on the edge.

The risk of explosion, conflagration, etc. is apparently one that we
are willing to take in order to reap the benefits of controlling fire.
The Amish in your note, however, recognize that the risks of adopting
or using new technology are not limited to the physical, but must
include the societal. To apply (overly broadly, no doubt) the
balancing point to the Amish situation, they too work on that boundary
between stability and complexity, but understand that it is almost
impossible to transform the complex into the simple ("Simple Gifts,"
though not Amish, is literally a hymn or anthem to this belief; the
Copland setting, with very simple harmonic structure, exemplifies the
beauty of that simplicity, as opposed, perhaps, to more aleatoric
music of the last 30 years).

The Amish, then, exert control over the effects of complexity by
attempting to limit its introduction into their society. In business,
we attempt to "manage risk" in the ways Phillip Capper describes,
developing defensive postures (in the same way the Amish do) that
minimize the impact of risk on progress, but we can not eliminate it,
nor, as one might infer from the previous discussion, should we try to:
too much progress is achieved BECAUSE of failures, not in spite of
them.

It has been popular over the last thirty years to use the computer to
model events and systems. Neural networks and other such systems
composed of a large number of very simple components are used to
simulate "life." What we find when we perform these experiments is
that we can not, regardless of how many rules or laws we add to the
environment, emulate life, but we can produce some results that
provide us with knowledge about how things may work. At the same
time, the idea that a butterfly flapping its wings in Guatemala
affects the weather in Chicago can cause us to withdraw completely
from the process, throwing up our hands at the immensity of the task
of understanding such a system.
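To make that sensitivity concrete, here is a toy sketch in Python (my
own illustration, not one of the simulations described above): the
logistic map, x' = r*x*(1-x), is about as simple a "law" as one can
write down, yet at r = 4 two starting points that differ in the ninth
decimal place soon produce completely unrelated trajectories.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map x' = r*x*(1-x) and return the path."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two runs whose starting points differ by one part in a billion.
a = logistic_trajectory(0.400000000)
b = logistic_trajectory(0.400000001)

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}:  {a[step]:.6f}  vs  {b[step]:.6f}")

# By step 30 or so the two runs bear no resemblance to each other, even
# though the governing rule is fully known and perfectly deterministic.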

Gell-Mann says we must approach it from both directions, and in fact, to
take horizontal slices (at certain levels of complexity/simplicity) to
shed light on the problem. He uses the example of human behavior and
thought: the reductionist (simple) view is that we can describe the
action of quarks, which determine the action of atomic components, which
determine the actions of molecular components, etc. on up through body
parts, etc. which finally determine behavior. Attempting the complete
journey, he would argue, is impossible, and thus, non-productive.
Rather, we have different specialists who focus at different levels:
particle physicists, molecular biologists, biochemists, neurologists,
psychologists, behavioral psychologists... Each of these specialties
tries to advance in both directions: towards the simplicity of suspected
cause, and towards the complexity of presumed effect.

Now, in relation to information systems: however they are implemented
(through traditional development, implementing packages, or packaging
components, object or otherwise), they all fail eventually. They fail
for the same reason that species become extinct, societies fall, or
individuals die: they encounter a set of events, usually unforeseen,
that they were not programmed or designed to handle. It is important
to note that those events almost always come from outside the
"system," and are introduced as a side effect from the system's
existence in the context of a larger environment. Incorporating or
embracing the immediate environment as part of the system (à la Gaia),
factoring in external effects on and by the environment, helps, if only
by reducing the risk of unforeseen events. But it can not solve the
problem for two reasons:

1. This expansion increases the complexity of the system, thereby
increasing the opportunity for unforeseen internal failure.
2. We can never define a system that is large enough that there
is nothing outside of it (again, à la Gödel).

An example of a failure is the dinosaur. As a complex-adaptive
system, the dinosaur survived for millions of years (much longer than
humans have been around), but an event from outside the system to
which it had adapted caused its extinction. As humans, we are trying
really hard not to drive ourselves to extinction, but we have little
control outside of this world.

Information systems can be designed to be adaptive to a certain extent,
responding to external events by changing themselves, but that only
works at a reductionist level: for those events we can anticipate. As
soon as we introduce the level of complexity required to create a
complex-adaptive system, we no longer have control over the system, and
the results it provides us will incorporate a certain degree of
randomness. If we can say anything about our current use or
expectations of computers and information systems, it is that we can
not tolerate randomness.

Consider, on the other hand, a corporation such as HP: a good example
of a complex-adaptive system. There is no one person who understands
everything about how the company works, even if we leave out how each
person works. The corporation is successful even though this is true.
Even though we can not exactly predict either internally how a
particular project will work, or how an individual will perform
his/her job, or externally, how the market will respond to our product
introductions, everything averages out to produce a profit.

To survive longer, our information systems and our expectations for
them need to change: maybe we need to be satisfied if the information
system produces a good result some high percent of the time. As long
as the results are "competitive" or produce a positive "return"
overall, why should we care what the individual results are? This
partly derives from the "simple" laws that enable computing: in the
end, everything comes down to a binary yes or no. In that world, 1 +
1 always equals 2, and 9/3 = 3. In the real world, the non-digital,
non-integral world, nothing is black or white. When we humans have to
make a black or white decision from this complexity, we must guess,
over-simplify, and apply our own (complex) logic.

Quantum computing works this way: since we can not predict exactly the
behavior of the quantum components (Heisenberg) but have "trained"
them to act a certain way on average, we send a computation
to thousands or millions of components in parallel, survey them, and
take the preponderance of results (0, 1, or maybe) as the "correct"
answer. It's sort of like democratic computing.
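That democratic idea itself is easy to sketch, treating each component
as nothing more than an ordinary unreliable yes/no device (the names
and the 90%-reliable figure below are my own assumptions, and this
illustrates majority voting in general rather than the real mechanics
of quantum computation):

import random
from collections import Counter

def noisy_component(true_bit, p_correct=0.9):
    """One unreliable component: right 90% of the time, wrong 10%."""
    return true_bit if random.random() < p_correct else 1 - true_bit

def democratic_answer(true_bit, n_components=10001, p_correct=0.9):
    """Survey many components and take the preponderance of results."""
    votes = Counter(noisy_component(true_bit, p_correct)
                    for _ in range(n_components))
    return votes.most_common(1)[0][0]

# Each component errs 10% of the time, but the vote of ten thousand of
# them is, for all practical purposes, always right.
print(democratic_answer(true_bit=1))   # prints 1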

I can see that this uncertainty would wreak havoc in most of our
current systems, e.g. Financial Reporting ("our net profit for the
quarter was approximately $500M"), but insistence on exactness dooms
our information systems to obsolescence (and keeps IS professionals
employed).

Extrapolating from this (a big leap), those information systems that
solve problems whose solutions, even in the real world, can be deduced
directly from a simple set of laws have been successful: accounting,
mathematical modeling, etc. automate a clearly understood and
law-driven process. The calculator is an extreme example of this:
think how successful the calculator has been (as an application and as
a technology).

On the other end of the spectrum, virtually any other use of a
computer has achieved limited success at best: the lack of payback
from Office Automation, the lack of acceptance of expert systems. But
few businesses now are willing to take the Amish approach, and live
only with a calculator to perform and compete in their business: we
are willing to accept higher risk and a smaller payback in exchange
for the efficiencies and the expanded possibilities that these more
complex solutions give us. Before airplanes, no one died in plane
crashes, but...

Now, having reduced the information system problem to an overly
simplistic cause, and suggested the impossibility of getting out of
the complexity mess, let me step back and say that there may be light
at the end of the tunnel.

Kauffman believes that the nature of our universe in regards to
complex systems creates "order for free." The idea is that even in
the most complex of systems, there are forces that tend to produce a
type of order, to organize components of the system. He argues that
without this "order for free," the complex primordial soup of
chemicals could not possibly have produced life. This means that
there is some tendency to define a state of homeostasis in all complex
systems. Further, given some basic laws/rules that may include some
motivation or goal, complex components may join together to produce a
result that none could have done alone. This reflects back to the
discussion of a corporation's success. The point is, though, that
rather than producing stability, this process produces more, larger,
and more complex components that now interact in this new complex
system.
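Kauffman's own toy model of "order for free" is the random Boolean
network: N on/off elements, each wired to K randomly chosen inputs
through a randomly chosen rule. Here is a minimal sketch (the
parameters N=20, K=2 are my own choice for illustration): despite the
random wiring, the system quickly settles onto a short repeating cycle
instead of wandering through its roughly one million possible states.

import random

N, K = 20, 2
random.seed(1)

# Wire each element to K random inputs and give it a random lookup table.
inputs = [random.sample(range(N), K) for _ in range(N)]
tables = [{bits: random.randint(0, 1)
           for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]}
          for _ in range(N)]

def step(state):
    """Update every element from its two inputs via its own rule."""
    return tuple(tables[i][(state[inputs[i][0]], state[inputs[i][1]])]
                 for i in range(N))

# Iterate from a random state until a state repeats: the network has
# fallen onto an attractor (a short cycle), its "order for free."
state = tuple(random.randint(0, 1) for _ in range(N))
seen = {}
t = 0
while state not in seen:
    seen[state] = t
    state = step(state)
    t += 1
print("attractor reached after", seen[state],
      "steps; cycle length", t - seen[state])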

Information systems, therefore, can never simplify: they either
increase the complexity of a system, or change the configuration of
that complexity.

This, in combination with the recognition (thanks to the Amish
example) that the introduction of complexity always affects the
environment into which it is introduced, suggests that a major role of
information systems implementation is to define simple rules/laws that
will trigger the "order for free" within the now-more-complex
environment. As humans, we strive, generally, for order,
understanding, and simplicity: this is our comfort zone (see Socio-
technical systems theories). If, however, we are participating in a
competitive situation (such as HP), and need to advance in order to
survive, we must change our internal motivation to find happiness
either in the process of change, or in the moment itself.

In short, working in the high-tech industry, or in any industry for
that matter, we must constantly lean towards the complex and unsettled
at the expense of the simple and stable. This pressure will always
stress established systems, whether they are bridges, airplanes,
information systems, or (sadly) societal and ecological systems
(forget getting into discussions of our responsibilities in relation
to other people or the environment).

This could go on forever...
A final note: Because it is impossible for an individual to plan,
design, and construct a 747 or a highway bridge, the engineers
responsible for those projects have had to develop a very formal
method for communicating requirements and expectations to those who
will perform the various steps and construct the various pieces of
those projects. As soon as more than one person is involved in such a
project, the complexity of the interaction of project participants, as
well as of the project itself, renders it impossible to identify all the
potential failure points. The primary task then becomes the
management of risk of risk, as opposed to the management of risk
itself.

In Information Systems, we are only beginning to develop formal
methods to communicate and manage expectations. In addition to
continuing to develop these, maybe we need to devote some energy to
managing the risk of risk (mr^2 ?) in our projects.

Ben Lloyd
WCSO/WSS Technical Architecture

-- 

JOE_PODOLSKY@HP-PaloAlto-om4.om.hp.com

Learning-org -- An Internet Dialog on Learning Organizations For info: <rkarash@karash.com> -or- <http://world.std.com/~lo/>