4.3.2 Stability and Change
The Context32 courseware project was also deeply affected by the
adaptability of the Intermedia environment on which it was based. Despite
the immense amount of work invested in the project, and the sometimes
profound educational thought behind its design, it was not
immune to the dangers of inadequate adaptability. In the following passage,
Paul Kahn (personal interview, March 2, 1992), the current director of the
Institute for Research in Information and Scholarship, portrays the adaptability issues
that software and courseware developers face in the constantly changing field
of computing:
Kahn: The general problem of developing educational software and
getting from the idea stage, to the implementation stage, to the care and
feeding or longevity stage has a lot of different problems. This is true
even if you focus on a single platform, because it is a moving target.
From the time you start, the software is not the same software and the platform
is not the same platform. If you tried to use sound and video a few years ago,
it is not the way to do sound and video now. So that's a set of problems that
does not need multiple platforms to make it complicated. Multiple platforms is
the same set of problems times the number of platforms, plus the issues of
compatibility between more or less equivalent software.
The degree to which Kahn speaks from his own immediate experience is clear
in the following account of what happened at the conclusion of the Intermedia
project, of which he was personally a part, and upon which Landow's Context32
courseware was based:
Intermedia was intended as a research prototype. The delivery
platform was chosen because it provided support for the features we wanted
to offer. Beginning in 1985, we assumed a multi-user environment, a
multi-tasking operating system, and high-speed connections between machines
supporting a general network file system. Despite rapid growth in computing
over the past five years most of the computers in use on college campuses
still do not support these features.... The ambitiousness of the system itself
has been its most significant limitation. Since 1985, the impact of Intermedia
on education has been self-limited by the kind of computing environment needed
to support the software itself. We have made hard choices in order to create
a system that demonstrated what could be done on tomorrow's systems while
running on today's equipment.
The news has been both good and bad. The good news has been that this ambitious
design has been accomplished without special hardware or operating system software.
Pre-3.0 Intermedia had required a version of the UNIX operating system for the
IBM RT PC that was not generally available. It had also been built on a layer
of software and a database management system that presented formidable licensing
constraints for distribution outside of Brown University. All of these constraints
had been overcome in Intermedia 3.0, and as a result over 200 licensed copies were
distributed in 1989 and 1990.
However, the bad news has been that, despite its general availability since 1988,
Apple's UNIX has never been the operating system of choice for most Macintosh users.
The cost of a Macintosh operating system is included in the price of the computer.
A/UX must be purchased separately and has always required significantly more RAM
(at a time when memory was very expensive) and more hard disk than the Macintosh
operating system. Running A/UX added between $2,000 and $4,000 to the cost of a
Macintosh II workstation. In addition, the kind of high-speed, transparent
network file access that makes the collaborative features of Intermedia possible
requires Ethernet hardware. Up through the summer of 1990, Intermedia had been
one of the few applications that ran under A/UX and one of only a handful that
made use of the Macintosh Toolbox to present a graphical user interface.
In the summer of 1990, Apple Computer made changes to A/UX version 2.0 that
made it incompatible with Intermedia. By that time, support for further
development of Intermedia had ended. To run Intermedia, users have had
to maintain or locate a version of A/UX which is no longer supported by Apple.
(Launhardt & Kahn, 1991, pp. 13-14)
A number of forces converged to cause the demise of Intermedia.
These can roughly be divided into two broad categories: change in technology
and availability to users. Either one of these alone could have been
a problem for this or any project. If A/UX 2.0 and the high-end
Macintosh II had been more quickly and widely accepted, the Intermedia
software would still have needed updating at some point, because the
operating system would still have changed and produced incompatibilities
that programmers would have had to resolve to achieve upwards compatibility.
In addition, if Apple had continued to support the relatively unpopular A/UX 2.0,
the problem of limited availability for students would have remained. However,
at this point in time, the issues are moot, and the Context32 courseware is
now being ported to Storyspace. While this porting certainly entails a
definite loss of the networked collaboration functionality, it also makes
the courseware more widely available.
One powerful form of usability in software is its simple availability to students.
The Athena experiment also collided with adaptability issues during its attempts
to provide an easy-to-use system. While Athena was more successful
than Intermedia at surviving, its solutions were not reached without upheaval.
To understand Athena's (and its associated projects') experience with adaptability,
it is helpful to know about the concept of "coherence" upon which the Athena system
was based. "Coherence" generally refers to what is common to all users of a computer
system (Committee on Academic Computation for the 1990s and Beyond, 1990).
Originally, Athena's goals conceptualized this as the ability to run
common operating systems across different vendors' machines, along with the
services and programs that could then run on top of the common operating system.
The purpose was to provide more "usable" computer systems across the MIT community.
The stability of distributed computing environments emerged as a key
characteristic related to the successful creation and delivery of courseware.
Athena encountered technical problems with hardware and operating system
compatibility in its pioneering efforts in distributed systems, and these
problems were symptoms of the effort to achieve operating system "coherence"
across different platforms. The following account of the conditions surrounding
courseware development describes what this learning experience on the
"distributed computing frontier" was like for Athena's pioneers:
A major shortcoming of Project Athena, most observers also agree,
was an inevitable byproduct of its greatest achievement: its systems and
services evolved and changed so fast, at least during the first few years,
that faculty members and students alike perceived Project Athena to be an unstable,
unreliable, largely inscrutable environment for providing educational and academic
services. This perception persists today, even though the pace of evolution and
change has slowed and Project Athena is a relatively stable, reliable computing
environment.
(Committee on Academic Computation for the 1990s
and Beyond, 1990, p. 8)
This was a problem of adaptability, and specifically of accommodating change.
In some courseware projects funded by Athena, software either could not be
ported to a new version of the windowing system, or could be ported only with
a great deal of effort. This became a major issue over the course of the
Athena project, and was one of the issues most frequently raised by
participants at this research site. The overall problems that occurred in the
Athena environment during the transition from mainframe to distributed computing
were due to incompatibilities between X-Windows System 10 and X-Windows System 11,
and also to incompatibilities between different vendors' hardware
(Stewart, 1989).
Over time the Athena organization clearly found ways to deal with stability issues,
and will most likely serve as a role model for those who follow in its path.
By 1987 Athena had successfully achieved the ability for computers from different
vendors to run the same UNIX-based operating system and X-Windows System interface,
which could respond identically to system calls, file requests, and other basic
instructions from higher-level programs. At that point, the computing system
did not change as fast or as drastically as it once did. Yet the system still
changes on a regular and continual basis, and because it is a distributed environment,
all users must be kept informed. After the problems of moving from mainframe
to distributed computing, the staff of Athena became attuned to helping developers
deal with the issues that result from operating a distributed environment.
In the following conversation, Naomi Schmidt (faculty liaison)
(personal interview, March 4, 1992) describes these realities of
living in a forefront distributed computing environment:
Schmidt: One problem we had was that as our operating system matured from
X-Windows System, Version 10, to X-Windows System, Version 11, programs had
to be redone. And so developers had to anticipate what the future was going to be,
so that they knew what kind of abstraction to put into their programs, and they had
to write them so that the parts that they would pull out and replace were at the right
level of what was going to change. The problem is that in this business nothing lasts
for more than about 5 years. It is not like writing a book. When you write a book,
the paper doesn't decompose. But here, we have a few generations of hardware at
any given time. We bought the oldest machines 5 years ago, and we have machines
that were purchased a few months ago. And we hope to renew a quarter of the
environment each year, and pull out the lowest quarter of obsolete machines, so that
the environment improves. Our system will work on the oldest machines, but that
moves up as we replace machines, or else the system is never going to improve in
capability. As our system evolves, if we always made everything run on the oldest
machines we would never move. One of the things at MIT is that we really
believe that we want to keep our computing environment state of the art, leading edge.
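Schmidt's point about putting "the right level of abstraction" into programs can be sketched in code. The following is a minimal, hypothetical illustration (not actual Athena or X-Windows code; all class and method names are invented): courseware logic depends only on a small interface of its own, so when the window system changes from one version to the next, only a thin adapter is pulled out and replaced.

```python
# A hypothetical sketch of Schmidt's abstraction strategy. The courseware
# calls only the WindowSystem interface; version-specific details live in
# replaceable adapters. Neither X version's real API is represented here.

from abc import ABC, abstractmethod


class WindowSystem(ABC):
    """The level of abstraction the courseware depends on."""

    @abstractmethod
    def open_window(self, title: str) -> str: ...

    @abstractmethod
    def draw_text(self, window: str, text: str) -> str: ...


class X10Adapter(WindowSystem):
    """Stand-in for the version 10 platform layer."""

    def open_window(self, title: str) -> str:
        return f"[X10] window '{title}' opened"

    def draw_text(self, window: str, text: str) -> str:
        return f"[X10] drew '{text}'"


class X11Adapter(WindowSystem):
    """Stand-in for the version 11 platform layer: only this part is redone."""

    def open_window(self, title: str) -> str:
        return f"[X11] window '{title}' opened"

    def draw_text(self, window: str, text: str) -> str:
        return f"[X11] drew '{text}'"


def run_courseware(ws: WindowSystem) -> list[str]:
    """Courseware logic: unchanged when the platform adapter is swapped."""
    w = ws.open_window("Lesson 1")
    return [w, ws.draw_text(w, "Welcome")]
```

The same `run_courseware` function runs unmodified against either adapter; the windowing dependency is the part developers "pull out and replace."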
In the conversation between Hopper and Lerman (personal interview, March 4, 1992),
the forces that cause changes in computing environments and different approaches
to dealing with them were discussed:
Hopper: Eventually new generations will be needed; the question is
how much longevity can you get out of it?
Lerman: It is important. Between AthenaMuse 1 and AthenaMuse 2 there will
probably be a break in compatibility. AthenaMuse 1 was a research prototype.
It was not architected in any formal sense. It was built in pieces and we
wouldn't want to live with it forever. AthenaMuse 2 is intended to have much
greater longevity. Not infinite, but substantial longevity. Our development
plan calls for creating an initial version of AthenaMuse 2, and then having
an annual upwards-compatible upgrade for at least 2 successive years.
Lerman: The critical question is "Does the application from the earlier
version run on the current version?" Each time you carry another generation
forward, another step forward, and try to keep upwards compatibility,
you bring along additional baggage. And eventually it's a reasonable
decision just to scrap it altogether. But there are different models.
For example, there was an operating system called MULTICS which was
developed here at MIT, which is no longer commercially successful.
They had a compatibility requirement. You could take the earliest
MULTICS programs ever written and run them on the latest MULTICS
release and they would run. They had designed in that upwards
compatibility. And people would routinely take stuff that was
ten years old without recompiling and run these on the system.
Hopper: That's what's needed, but how?
Lerman: It was well designed to begin with, and that helps.
Everything was well defined. It was a huge project. A second issue is,
of course, how much baggage you're willing to bring forward each step.
I think at some point you're carrying around so much baggage you might
as well just break away. We don't know where that's going to be.
Hopper: That's a key question for how well the use of Athena as a
sample for the future will hold up. There's no real answer.
It's according to how things go, whether or not there will be
new generations, and the new generations could be completely
different if you make a break. But as long as you keep compatibility,
there's bound to be some similarities in the processes.
Lerman: Another interesting study is the X-Windows System.
We finished version 10 at MIT. We built several X applications on it.
We ultimately went to version 11, which broke version 10 applications.
We went to release version 11 and installed it. Applications that required
version 10 would no longer work under version 11. We made that decision
consciously. Version 10 had a number of design flaws which would have
made it impossible for us to promulgate it into an international standard.
In order to get standardization and acceptance in the commercial marketplace,
we needed to change from version 10 to 11. There's an intermediate ground,
which we did some of. We provided translators and compatibility switches
so for about a year Athena workstations could be rebooted in version 10 mode.
If you had a version 10 application, you could issue a command to turn
your workstation back a generation. You could run the application,
and then you could roll it forward. We also built a protocol translator.
It never worked perfectly, but there are a substantial number of version
10 applications that run on version 11. In fact that's a third major
approach to upwards compatibility that I know has been used by some
people at Carnegie Mellon University. The example is a language called cT
(Bruce Sherwood). When cT shifts versions, all cT files are labeled as
to which version they were created under, and the new version knows how
to take the old files, read them, translate them into the new version,
and then convert. So they provide a natural upgrade path. The end user
actually gets upgraded as their versions change almost invisibly.
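The upgrade path Lerman attributes to cT can be sketched as follows. This is a hypothetical illustration, not cT's actual file format or code: every saved document carries the version it was created under, and the current release chains single-step translators forward until the document reaches the current format, so old files are migrated invisibly on load.

```python
# A hypothetical sketch of cT-style version-tagged migration. The document
# structure, version numbers, and translation steps are all invented for
# illustration; only the general technique follows Lerman's description.

CURRENT_VERSION = 3

# Each translator upgrades a document by exactly one version.
TRANSLATORS = {
    1: lambda doc: {**doc, "version": 2, "body": doc["body"].upper()},
    2: lambda doc: {**doc, "version": 3, "encoding": "utf-8"},
}


def load(doc: dict) -> dict:
    """Read a document created under any older version, translating it
    step by step until it matches the current format."""
    while doc["version"] < CURRENT_VERSION:
        doc = TRANSLATORS[doc["version"]](doc)
    return doc
```

Because each translator bridges exactly one version, a release only has to ship the newest translator; the chain handles arbitrarily old files, which is what makes the upgrade "almost invisible" to the end user.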
In the above conversation, the value of cT's approach to adaptability was
explained. This attribute, along with cT's availability across the Mac, IBM,
and Sun environments with no modification, was one of the key reasons it was
used for the Mechanics 2.01 project. At this point in time, it appears
to be an acceptably adaptable programming tool.