1.3.3.1 Functionality in the Past
Past educational computing projects have emphasized a few types of software functions.
The most widespread emphasis has been on teaching about programming languages, databases, and spreadsheets
or using professionally authored courseware (Office of Technology Assessment, 1988).
Robert Taylor (Taylor, 1980) used the concepts of Tutor, Tool, and Tutee
to describe how these types of software functions tend to serve different educational roles.
As Tutor the computer takes the role of teacher (p. 3); as Tool, the computer takes the role
of assistant (p. 3); as Tutee, the computer takes the role of learner (p. 4).
This is an accurate historical description of the fragmented functions and uses of
microcomputer software through the 1970s. While Taylor's framework is no longer a
thorough description of today's more powerful software, it does provide a reminder
that different types of software functions support different educational goals.
A variety of additional software
functions are now available in today's advanced educational computing systems.
For instance, the traditional role of textbook proved to be one troublesome "T" that microcomputers
of the early 1980s could not adequately tackle, but that common systems of today can support.
Robert McClintock, in his article "Out of the Starting Gate," made a convincing argument that
educational software suffered from a basic constraint of quantity that was imposed by the limited
nature of computers in the 1980s:
Computers become the objects of study, not the tools of it, because there is
little substantive material available to be studied through the computers. Thus, the current
situation is the one so widely bemoaned: Good educational software is not at hand.
(McClintock, 1986, p. 192)
Computers did not generally serve the role of textbook during the 1980s because of the limited
computational and storage mechanisms of microcomputers. While most educational institutions could
not afford the technology needed to fill the traditional role of the curriculum, some researchers
were busy developing tools that have now made it feasible (Nelson, 1981;
Bush, 1945). Nelson is credited with coining the term hypertext.
He believed that the variety of individuals' backgrounds, experiences, knowledge structures,
and methods of accessing and interacting with information made the structure of knowledge in
linear texts arbitrary and consequently counterproductive. He also conceived of both computer
processors and storage mechanisms much more powerful than those available in microcomputers
during the early 1980s (Nelson, 1987). A basic description
of hypertext appears in a widely recognized article
from the late 1980s, "Hypertext: An Introduction and Survey," in IEEE Computer:
Windows on the screen are associated with objects in a database,
and links are provided between these objects, both graphically (as labeled tokens)
and in the database (as pointers). The database is a network of textual (and perhaps, graphical)
nodes which can be thought of as a kind of hyperdocument. Windows on the screen correspond to
nodes in the database on a one-to-one basis, and each has a name or title which is always
displayed in the window. (Conklin, 1987, p. 19)
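Conklin's description maps directly onto a simple data model: a database of titled nodes, labeled links stored as pointers between nodes, and windows that correspond one-to-one with nodes. The following sketch illustrates that model in Python; all class and method names here are illustrative assumptions for exposition, not drawn from any of the systems discussed in this section.

from __future__ import annotations

# A minimal sketch of the hypertext model Conklin describes: a database of
# nodes, labeled links stored as pointers between nodes, and a title for
# each node that its window would always display. Names are hypothetical,
# not taken from any actual first- or second-generation system.
from dataclasses import dataclass, field


@dataclass
class Node:
    """A textual (or graphical) node in the hyperdocument network."""
    title: str                          # always displayed in the node's window
    content: str = ""
    links: list[Link] = field(default_factory=list)


@dataclass
class Link:
    """A labeled pointer from one node to another."""
    label: str
    target: Node


class Hyperdocument:
    """The database: a network of nodes connected by links."""

    def __init__(self) -> None:
        self.nodes: dict[str, Node] = {}

    def add_node(self, title: str, content: str = "") -> Node:
        node = Node(title=title, content=content)
        self.nodes[title] = node
        return node

    def add_link(self, source: Node, label: str, target: Node) -> None:
        source.links.append(Link(label=label, target=target))


# Usage: build a two-node network and enumerate the link tokens a reader's
# window would display for the current node.
doc = Hyperdocument()
intro = doc.add_node("Introduction", "Hypertext is non-linear text.")
history = doc.add_node("History", "Bush and Nelson anticipated hypertext.")
doc.add_link(intro, "see also", history)

current = intro                         # each window shows exactly one node
for link in current.links:
    print(f"[{link.label}] -> {link.target.title}")

Following a link from the current node to its target, rather than turning a page, is the navigation step that distinguishes this structure from linear text.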
There have been a number of major stages in the development of hypertext tools
(Halasz, 1988). The first-generation systems were all mainframe-based
and used display technologies with little or no graphics capabilities. These systems included
NLS/Augment developed by Douglas Engelbart (Engelbart, 1963),
FRESS developed by van Dam at Brown (van Dam, 1988),
and ZOG developed at Carnegie Mellon (Akscyn, McCracken & Yoder, 1988).
First-generation systems included at least some support for medium to large teams of workers sharing
a common hypermedia network. The second generation of hypermedia began in the early 1980s with
the emergence of various workstation-based systems that were remarkably similar in concept to the
first-generation systems; these included NoteCards (Halasz, 1988),
Intermedia, and KMS. Similarities between first- and second-generation
systems were not surprising because KMS was actually a redesign of ZOG at Carnegie Mellon
(Akscyn, McCracken & Yoder, 1988), and Intermedia reflected its
designers' earlier experiences with FRESS at Brown
(Yankelovich, Meyrowitz & van Dam, 1985).
Workstations allowed for much more advanced user interfaces and supported graphics and animation
nodes as well as fully formatted text nodes. Some provided a more limited degree of
user collaboration than the mainframe systems but made heavy use of graphical overviews
of the network structure to aid navigation.
In the mid-1980s a number of products running on personal computers became common in education,
the most popular of which was HyperCard. While these systems do not support collaboration
as the mainframe- and workstation-based systems do (Halasz, 1988),
their development took place in tandem with a rapid amalgamation of modern media into a converging
digital source, as illustrated in the figure below. Negroponte, co-founder of the MIT Media Lab,
was the first to elegantly publicize this convergence. He declared "all communication technologies
are suffering a joint metamorphosis, which can only be understood properly if treated as a single
subject, and only advanced properly if treated as a single craft"
(Brand, 1988, p. 10). The convergence of media has resulted
in most media industries squabbling about the increasing overlap in their markets, as they
scramble to settle on formats to store and deliver various forms of digitized media.
The combination of converging digital media with hypertext resulted in the
third-generation microcomputer-based hypertext environments widely available today,
which have become known as interactive multimedia. Interactive multimedia denotes a
collection of computational technologies that give a user the capability to access and
manipulate text, music, sound effects, speech, still images, animations and movies
(Ambron & Hooper, 1990).
Figure: Negroponte's Converging Rings
These developments also show that Marshall McLuhan's speculative "Global Village" is materializing right
in the middle of Negroponte's primeval soup of developments in communications technologies.
Members of the US Congress are now recognizing and moving to ensure that massive fiber-optic
electronic highways are put into place to support and encourage a growing number of these media
and communications activities over networks in the future, including education
(Gore, 1992). While small single-user products like Apple's HyperCard
are popular in education today, a few universities have already developed educational computing
environments which support large databases of linked text, sound, graphics and video that can be
accessed over distributed networks. This conception is often referred to as hypermedia.
The main difference between interactive multimedia products like HyperCard, available on most
desktops today, and the implementation of hypermedia as described above is that
hypermedia implies a return to a major role for networks and the collaboration capabilities
they support (Paske, 1990). The critical role of the network
is the main reason why these systems are only beginning to become practical today.
As advanced systems like these become widely available, it will be critical to find out
what types of educational goals they can be used to support.