This is how one pictures the angel of history. His face is turned toward
the past. Where we perceive a chain of events, he sees one single
catastrophe which keeps piling wreckage upon wreckage and hurls it in front
of his feet. The angel would like to stay, awaken the dead, and make whole
what has been smashed. But a storm is blowing from Paradise; it has got
caught in his wings with such violence that the angel can no longer close
them. This storm irresistibly propels him into the future to which his
back is turned, while the pile of debris before him grows skyward. This
storm is what we call Progress.
Walter Benjamin, "Theses on the Philosophy of History", 1940
The Web as a cultural phenomenon
This talk, given to the UW
Computer Science Club by Prabhakar Ragde in the
summer of 1995, was an advertisement for CS 492 (Social Impact of
Computing), which I will teach in the winter of 1996. The talk was
given using PowerPoint slides shown on an Ascentia laptop hooked up to
an LCD display panel. The talk notes, written using Word, were
expanded and converted to HTML using rtftohtml 2.7.5, with final hand
editing via Emacs.
Index
- Ted Nelson's hypertext
- Today
- Brief description of the Web
- My position
- Is the Web a revolution?
- Usenet vs. WWW?
- Comparing Usenet and WWW
- Dominant metaphor
- Activation energies
- Memory
- Asynchrony
- Scalability
- Imposed linearities
- Responsibility
- Breaking external rules
- Breaking internal rules
- Opposing modalities
- Listening/reading versus speaking/writing
- Broadcasting versus publishing
- Professional versus amateur
- Commercial versus non-profit
- Content versus style/form
- Common problems
- Immediacy
- Nonlinearity
- Indexing
- Illusion of challenging authority
- Web problems
- Gratuitous graphics
- Gratuitous gadgetry
- Overstructuring
- Bandwidth and latency
- Imposed linearities
- Accessibility
- Urge for interaction
- Future of the Web
- Within society at large
- Effects on the university
- Educational hypertext
- Initial difficulties for teacher and student
- New disciplines required to cope with new distractions
- No canonical form
- Intuitive versus formal understanding?
- How to evaluate hypertext learning?
- Linearity of testing mechanisms
- What is considered original work?
- Extracurricular effects
- Position of the university
- Position of the student
- My fifteen minutes of fame
- Works cited
In 1974, Theodor Nelson wrote in his landmark book "Computer Lib/Dream
Machines": "Now that we have all these wonderful devices, it should be the
goal of society to put them in the service of truth and learning [...]
Obviously, putting man's entire heritage into a hypertext is going to take a
while. But it can and should be done."
Ted Nelson's book was my first exposure to the idea of hypertext. I read it
shortly after it came out; I was an impressionable 15-year-old. Nelson had
actually invented the word "hypertext" a decade earlier; it appeared in a paper
he delivered at an ACM conference in 1965. But Nelson says he got the idea from
an article by Vannevar Bush (science advisor to US President Franklin
Roosevelt), which appeared in the Atlantic Monthly in 1945.
Bush envisioned the extension and augmentation of human memory by a machine.
His device (the "memex"), more a thought experiment than a practical invention,
used the densest storage medium of the time: microfilm. A key idea was the
ability to link two points in two different documents so that a reader could
jump from one to the other easily.
Nearly fifty years after Bush's article, the Atlantic Monthly ran an article by
James Fallows on a great new application called Mosaic and how everyone who saw
it wanted it immediately. Nearly thirty years after Nelson invented the word
"hypertext", he was profiled in Wired, still trying to create a viable system.
And nearly twenty years after I first was seduced by the possibilities of
hypertext, we have the World-Wide Web, a distributed hypertext system.
(Back to index.)
Though the fact that you're reading this indicates you know something about the
Web, a brief description may be valuable to settle on some common terminology.
As a writer, you can use the Web in the following way. If your machine is an
information server (to accomplish this, your machine must be permanently
attached to the Internet, and you or your system operator must run and maintain
a fairly complex suite of software), you can create documents (stored as
computer files which are often called Web pages).
These documents are plain text with added tags (together, called HTML, or
"hypertext markup language") that indicate structure (like an indent indicates
the start of a paragraph in a book) or indicate that a section of text is
linked to another document which can be on this machine or another. If the
linked document is on another machine, an address called a URL (similar to an
electronic mail address) is provided. Documents can include in-line graphics in
addition to text; documents can also be audio or video files, or
high-resolution pictures.
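As a concrete illustration, here is a sketch of a small Web document (the tag
names are standard HTML, but the file names and the URL are invented for this
example):

    <html>
    <head><title>A sample page</title></head>
    <body>
    <h1>A sample page</h1>
    <p>This paragraph is marked as a structural unit. It contains a link
    to a <a href="http://www.example.edu/other.html">document on another
    machine</a>, identified by its URL.</p>
    <p>An in-line graphic: <img src="diagram.gif" alt="a diagram"></p>
    </body>
    </html>

A browser displays the text and the image but not the tags themselves, and
highlights the linked phrase so that a reader can follow it.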
As a reader, you use the Web in the following way. If your machine allows you
Internet access, you have access to a piece of software called a browser
(which, presumably, you are using right now to read this document) which
displays these documents "nicely". Links are highlighted (colour or underline);
if you click on them, the linked document appears. If that document is on
another machine, your machine negotiates the transfer with the remote machine,
using a communication protocol which is hidden from you.
The ease of use makes the Web strongly appealing even to (especially to)
novices. Mosaic and other Web browsers have been called the "killer
application" for networks, as spreadsheets were for personal computers.
I'm somewhat of a naive expert. I teach computer science at
a university, and I
have been using the Internet for close to 15 years. On the other hand, I don't
spend much time reading newsgroups or navigating the Web. I don't know any
details of HTTP (the communications protocol used by the Web), though I could
understand them if I bothered to investigate.
We tend to learn about every new technology from those with a vested interest
in promoting it; they are often not unbiased sources of information. The
attitude we need to adopt in dealing with new technologies is one of skeptical
optimism.
There is a marked tendency in computer science to regard every new device as a
paradigm shift, as a revolution. Joseph Weizenbaum, in his 1976 book "Computer
Power and Human Reason", noted the number of times that the "computer
revolution" had been falsely proclaimed, and we have had twenty years of
hyperbole since then. Usenet, the system of distributed bulletin boards or
newsgroups offered as an Internet service, has been touted as the best thing
since Gutenberg. Does this make the Web the best thing since Usenet?
(Back to index.)
There have been revolutions, or at least major breakthroughs, in the computer
field: one example is the personal computer. Web URLs are now to be found in
many advertisements in mass-circulation periodicals. It is worth examining how
the Web stacks up against other communication technologies, and to try to
understand what long-term effects it may have on our lives. There are as yet
few critical writings on the Web; in fact, critical examination of the Internet
in general is only now starting to appear.
"With all these transitions, the making and breaking of social links, people
are beginning to function as elements in a hypertextual network of
affiliations. Our whole society is taking on the provisional character of a
hypertext: it is rewriting itself for each individual member. We could say that
hypertext has become the social ideal." (Jay David Bolter, "Writing
Space: The Computer, Hypertext, and the History of Writing", 1991)
This quote is a good example of the kind of hyperbole that we must deal with in
making our judgements. There are two contrary tendencies that we must keep in
mind:
1. People in new situations start by transferring attitudes from old
situations that appear similar.
2. People in new situations may ignore deep similarities with old situations
because of superficial differences.
It is up to us to decide which of these two tendencies is dominant in a given
situation, whether the assumptions being made by people are justified, and if
not, what might happen when the assumptions break down.
(Back to index.)
A good point of comparison is with Usenet. At first glance, this might seem an
apples versus oranges comparison. Usenet is a system of computerized newsgroups
or bulletin boards; the Web is a system of hypertext/hypermedia documents. Both are
services on the Internet, but they look quite different.
Nevertheless, the two services intersect. Web browsers often offer newsgroup
access as an additional feature. A Web page requires publicity, and one
relatively cheap way for an individual to publicize a service is through
Usenet.
Beyond this, Usenet itself resembles a hypertext; threads of conversation fork
and diverge, and one message may contain explicit references or large quotes
from several other messages. Bolter, in "Writing Space", makes this explicit:
"At any one moment the network holds a vast text of interrelated writings --
the intersection of thousands of messages on hundreds of topics. It is a
hypertext that no one reader can hope to encompass, one that changes moment by
moment as messages are added and deleted."
One crucial difference is in the metaphorical view users have of the two
services. With the Web, a reader is "travelling" through text in other people's
space; with Usenet, a reader is participating in a conversation (or series of
conversations) in a public space belonging to no one (or everyone).
Much of the success of the Web is due to its simple and appealing interface: one
has only to be able to point with a mouse and click its button. In contrast,
Usenet remains awkward to use, despite many attempts at "user-friendly"
newsreaders.
One could say that the Web has local structure (at the level of a document,
organized by tags, and its immediately relevant links) but global chaos; Usenet
has global structure (hierarchical organization of discussion topics) but local
chaos in the discussions within each group and in "cross-postings" to multiple
groups.
The activation energy of a chemical reaction is the amount of energy required
to start the reaction, which is often recovered when the reaction is completed.
This is a metaphor for the effort needed to overcome inertia in a given
situation.
The activation energy for writing is lower for Usenet than for the Web. It is
very easy to post a message that will be seen by thousands of people. Creating
a Web document, on the other hand, takes some time; one has to be able to write
HTML or to learn how to use an HTML editor. As a result, the Web may encourage
more substance in writing.
On the other hand, the activation energy for reading is lower for the Web; it is
very easy (and quite compelling) to follow a link to a completely new document
and never return to the current document. The Web may encourage exploration of
information without assimilation.
Usenet has no real memory, though attempts are made to give it one, by means of
archive sites, or FAQs ("frequently asked questions" files, periodically
reposted). It is reminiscent of one of Oliver Sacks's brain-dysfunctional
patients, whose connection between her short-term and long-term memory was
disrupted, requiring her to recreate the world anew each morning. The Web
clearly has memory, and it is controlled by its writers.
Asynchrony works against Usenet. The basic idea, removing the need for a
meeting between speaker and listener, is sound; it is the original purpose of
text, and it's what makes telephone answering machines and electronic mail so
useful. But in the case of Usenet, the conversational metaphor dooms it as
conversations fork and redundancies multiply. As with books, asynchrony is not
a problem for the Web.
Usenet is not scalable. The number of messages flying around grows
more than linearly with the number of new connections, because each
one is both a destination for broadcast messages and a new source of
messages to be broadcast to all others. The Web is a point-to-point
service. As new users are added, the network load increases, but much
more slowly.
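A back-of-the-envelope calculation makes the contrast concrete (the variables
are mine, not part of the original argument): if Usenet has n participating
sites and each originates m postings, each posting is propagated to every
other site, so the network carries roughly

    n(n-1)m, on the order of n²m

copies, and doubling the number of sites quadruples the traffic. A Web
document, by contrast, is transmitted only when a particular reader requests
it, so the load grows with the number of requests actually made.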
Each of the services imposes linearities on the information it carries. With
Usenet, linearity is imposed by arrival order, or by conversation threads (the
latter not being strictly linear, but tree-structured); reader control is
difficult. With the Web, the linearity of the path through the information
space is imposed by the reader, under the constraints set by the writer (who
creates the links).
With Usenet, no one really has responsibility for a "transaction". The shared
conversational space does not really exist. A posted message leaves the control
of the writer and goes out to an unknown audience. Messages arrive on machines
even if no one there is reading them. With the Web, the lines of responsibility
are clearer. The writer has responsibility for creating and storing, and the
information stays in the writer's space until it is requested; the reader has
responsibility for fetching it by a voluntary act, and "owns" the information
once fetched.
The different dominant metaphors create differences in which societal rules may
be broken and what the consequences might be. Pornography, for example, is much
more accessible on the Web. On Usenet, images are posted to newsgroups by
text-encoding them and breaking the resulting large file into pieces which are
individually posted. A reader who does not have the benefit of an automatic
decoding system (such as that provided by the Computer Science Club of the
University of Waterloo) must check for these postings frequently (because of
the volume, postings on groups which carry images are often set to expire
within hours), collect all the pieces, trim off the headers added by Usenet,
put the pieces together, decode them, and view them. This takes a fair amount
of time and some sophistication.
On the Web, a reader who knows nothing can, exploring by just pointing and
clicking, quickly find pornographic images, displayed in full colour. It would
look terrible on the six o'clock news. But it seems to be less of a problem,
because the information is stored elsewhere, and the reader clearly has
responsibility for the act of access. Sites that are known to store
pornographic material invariably are flooded with requests which would overload
a small system. Thus only large commercial organizations can afford to provide
such material, and they already have social sanction to do so. Two of the most
popular sites on the Web are maintained by the mainstream magazines Playboy and
Penthouse.
Anonymity is another potential problem with Usenet that does not seem to
manifest itself on the Web. We tend to frown on the unjustified use of
anonymity (poison pen letters, graffiti). Remailers, machines that strip off
identifying information from electronic mail messages, can be used on Usenet to
provide the power of anonymous broadcasting. Much of the traffic on sex
newsgroups is posted in this fashion. This further weakens the assignment of
responsibility.
On the Web, the reader is currently anonymous, though provision exists in the
communications protocol for transmitting identifying information. Suppose a
site demanded such identification. Would this cause an uproar? Not likely: the
information space is "owned" by that site, and they are conceded the right to
set the rules. If the reader does not like it, they can go elsewhere. It is not
hard to see that commercial sites will soon make such demands, for the purposes
of collecting marketing information.
Does it make sense to be an anonymous provider of information? Perhaps, and it
is technically feasible to have a URL "redirector" work the same way as an
anonymous remailer, but there may not be sufficient demand. Web information is
long-lasting, and given the repeated, rapid interactions between reader and
provider, it may be easier to break secrecy.
Usenet, because of its high level of interaction, has an elaborate set of rules
of conduct ("netiquette"). With Web, they have either not evolved yet, or are
taken from other sources, such as the style rules for printed text.
(Back to index.)
We can place new technologies upon several spectra, though their places are not
fixed, but slide as they evolve or as their users mature.
For Usenet, the speaker is paramount; speakers control the flow of information.
The phenomenon of "lurkers", people who read groups but do not post, makes it
difficult for anyone to know who is in their audience or what the audience's
reaction might be to any given posting. On the Web, the reader is paramount. A
Web page is useless without readers who actively seek it out.
The Web is closer to traditional publishing: the reader must seek out a text.
Even its look, interpreted by browsers which use proportional fonts on
high-resolution displays, is closer to that of the printed page.
Usenet revels in its amateur status. All voices have equal status and
credentials are not in evidence, though over time some user IDs may be
recognized as being more knowledgeable. Experts are discouraged because of the
ephemerality of the medium (they must keep answering the same basic questions
over and over). The Web encourages experts; at least, it does not discourage
them. The fact that their work has some durability may be more attractive to
them.
Usenet strives to exclude commercial activity. It is discouraged by users who
react in a hostile fashion, often posting abusive messages or filling mailboxes
of those who attempt to use the medium for personal profit. In fact, Usenet is
not attractive to business, except in the scope of its coverage and its cost.
A Usenet posting is uglier than the simplest classified ad; almost none of
the advanced graphic design aspects of modern advertising can be brought into
play. Commercial activity on the Web is more acceptable. It does not pollute a
shared space (users go to where the information is). It looks more like the
advertising in newspapers and magazines which we either seek out or ignore as
we see fit. For that reason, more and more commercial activity is taking place
on the Web.
Usenet has no style, unless you count multiple levels of quotation and flaming
as style. Most postings look the same. The Web was initially designed to carry
mostly text, with perhaps a few graphics; it was considered important that
readers be allowed to customize their view. But almost immediately, a tension
between writer and reader arose, over who controls the look of the information.
The ability to control layout and font was especially important to commercial
information providers. HTML has both style-oriented tags ("this phrase in
bold") and content-oriented tags ("this phrase to be emphasized").
In the view of Richard Lanham, professor of rhetoric and author of "The
Electronic Word" (1993), this tension is not only unresolvable but desirable.
He writes, "The bi-stable decorum that supplies the premise of electronic text
has been the fundamental premise of rhetorical education from the Greeks
onward." For him, the flexibility of electronic text corrects an imbalance
created by the domination of the codex book. "Pixeled print destabilizes the
arts and letters in an essentially rhetorical way, returns them to that
characteristic oscillation between looking AT symbols and looking THROUGH them
which the rhetorical paideia instilled as a native address to the
world."
(Back to index.)
Usenet and the Web share some common problems, because of their newness and
because they share the same underlying communications medium.
The quick response of the medium, the speed with which one can interact with
it, makes it less likely that someone will pay as much attention to a given
item.
Both media force the reader to impose linearities on what can be a quite
unstructured base; in neither case are the linearities entirely satisfactory.
Finding a particular item can be quite a challenge.
It is clear why Usenet offers the illusion of challenging authority. Since the
individual has been given broadcast capabilities for the first time, and such
capabilities traditionally carry with them the power to influence actions, it
appears that the individual now has this power. In fact, impassioned talk on
Usenet is often just another substitute for action. But why should the Web be
a challenge to authority? This view was put forth by Stanley Aronowitz, in an
essay that appeared in the collection "Literacy Online". He wrote:
"The unfulfilled promise of hypertext is that it will abolish all forms of
authority, revealing in the process that standards are socially produced,
usually in behalf of the claims of the powerful to act as legatees of culture.
What hypertext promises to expose, in other words, is the authoritarian
character of taste; it is a weapon of the powerless in the struggle for control
over the signifiers of culture."
The degree to which this promise will be fulfilled can perhaps be seen by the
initial and likely continued use of the Web for relatively trivial or diverting
purposes. Much is made of the fact that the Web in effect gives a printing
press to everyone. But once you create a Web page, you must still convince
people to look at it. And, as we saw in the discussion of pornography,
popularity dooms smaller sites, with the result that large commercial,
government, or university sites will dominate information providing. The Web in
this respect is no different from other media. The photocopying machine
resulted in some increase in activity, but not in an explosion of pamphlets and
posters.
(Back to index.)
It is not too difficult to criticize a new and rapidly growing medium such as
the Web. Many of the criticisms listed here are transitional and will go away
(one hopes) as the medium matures. In contrast, the criticisms listed above for
Usenet are inherent in its structure and cannot be solved so easily.
I have seen lists with a separate little icon for each item, and maps with
words scattered around on which one clicks to retrieve items. Why not just a
simple list? Unnecessary graphic images increase the volume of network traffic
and increase latency (retrieval time) without significantly improving
information content.
HTML offers some advanced features for information display and interaction, and
some browsers extend HTML with non-standard tags. Text can be made to blink,
and the user can push buttons or fill out forms. There are even ways that
programs can be run on the information server to satisfy complex requests. All
of these features are useful in the right context. But too often I see them
used simply because the capability is available.
Many Web documents contain far too many links. These documents look like
anorexic porcupines, all pointing spines and no body. I have seen documents
where every paragraph and section has its own Web page, requiring endless
clicking and waits for retrieval instead of a simple scrolling of the screen in
front of me. Sometimes this is the result of using automatic converters from
other formats, and sometimes it is a misguided decision on the part of an
information provider.
A reader is encouraged to follow links without a clear notion of the costs
(time and volume). Better-designed documents say how large the file at the
other end of each link is, but they cannot predict how long it might take to
retrieve.
Currently, the main methods for navigating through the Web are a stack of
addresses representing the current depth of search (added to when a link is
followed, removed from when the browser is asked to back up) and a list of
addresses (the last twenty or hundred documents visited). One can easily
envision other aids to navigation: a dynamically-constructed map, for example.
But it is not so easy to create such a map or to display it effectively, and it
is not clear whether users are up to the task of using such a map
efficiently.
Vannevar Bush advanced the notion of "trails" through the documents stored in
his memex. A reader who followed a trail they thought enlightening could make
it persistent; they or someone else could follow it again years later. There
seems to be no good way of creating or storing such a trail with current Web
structure.
To use Usenet, one currently requires only a modem (which may be capable of
only a very slow transfer rate) and a terminal to display text. Although this
equipment suffices to navigate the Web in a limited fashion, to utilize its
full capabilities without becoming frustrated requires a high-speed connection
(Ethernet or a very fast modem) and colour graphics capability. It is as if the
promise of full information equality is being held tantalizingly just out of
reach of the average person, who is always behind in terms of how they can
interact with the current favourite information source.
The Web is sometimes criticized for being too static. It seems that many people
crave more interaction than can be had from just zigzagging through documents.
In part, this attitude comes from experience with Usenet, but some of it is
simple human nature. The writer wants feedback; the reader wishes to make their
opinion known. Often, however, it leads to further gratuitous forms and
buttons. Many Web pages offer a form for comments, and received comments are
displayed in their own Web page. I have never seen an intelligent comment on
such pages. Usually they contain numerous variations of "Wow!" or
"Neat!".
(Back to index.)
It is always dangerous to make predictions about the future. One finds the same
phenomenon whether reading old predictions by futurologists in studies for
think-tanks or reading old science fiction: they tend to be too firmly rooted
in the assumptions and realities of their time, which makes them dated and
embarrassingly stale. Nevertheless, I will venture a few very modest
predictions.
I believe that as the Web grows, it will move further away from text, though
pure hypertext will always fill a niche. New Web writers will be primarily
entertainment-oriented and commercial; new Web readers will primarily explore
rather than construct. Though individuals will have the capability of
constructing their own Web pages, few will do so, apart from some personal Web
pages and perhaps little virtual spaces that will function in the same fashion
as front-yard gardens do in suburban neighbourhoods.
This prediction is also rooted in a current reality: namely, it mirrors the
treatment of books and other mass media today. The photocopier, the fax
machine, and the printer connected to the home computer all have had some
effect on interpersonal communication, but they have not turned vast numbers of
people into authors or improved their creativity. It is unlikely that the Web
will.
I feel a little more certain in discussing the effects of the Web on my local
environment, the university. We can divide such effects into two classes: "in
the classroom" (by which I do not mean the physical classroom, but the whole
educational experience) and "out of the classroom" (effects on extracurricular
interaction and campus atmosphere).
(Back to index.)
The first hypertext textbooks have started to appear on CD-ROM, but an
increasing number of instructors are using the Web. One half suspects that the
main reason the Web is so appealing to teachers is that the sheer novelty
may draw students to actually engage with the material. Good teachers I know will go
to great lengths if they think their efforts will inspire students to do their
readings.
The teacher cannot just put a mass of undifferentiated information on the Web;
they must add value to the information through structuring. In turn, the
student must be able to extract benefit from hypertext. It may be important to
have students construct hypertext instead of just reading it.
By disciplines, I do not mean "topics", but "strengths" or "fortitude". It is
too easy for the teacher to get lost in the gadgetry, to provide too much
information or to make it difficult to use. It is too easy for the student to
skip around a set of documents without absorbing much of them, or even to leave
the main text space entirely, caught by something more compelling. If that
results in learning, it is good; but if it is just the Web equivalent of
clicking the channel-changer idly, it is not good.
The Web offers increased expressive power through nonlinear hypertext, but this
also means an increased potential for miscommunication (or for poor
communication).
Traditional texts have a canonical linear form, starting at the beginning and
proceeding to the end. That may not be the way everyone reads them, but it is
there as a common standard. There may be no common standard for a hypertext;
everyone may have a different view, depending on their path through the
material, and these views may diverge upon revisiting the material, rather than
converging. If the increased understanding from a hypertext is due to my
freedom to choose my own path, then how do I convey that understanding to
another person who may have chosen a different path?
Not all texts are equal; we believe some to be clearer than others, and we
have developed methods of trying to ensure clarity in linear texts (essay form,
footnotes and endnotes). What about hypertexts?
I have repeatedly seen the phenomenon of students who read over the text or
listen to my lectures and think they understand, but they cannot act on that
information. It could be that they don't understand that they don't understand,
or that their understanding is on an intuitive level, rather than a formal one.
It is likely that hypertexts will increase the possibility of such
misunderstandings.
Are traditional testing mechanisms (essay, assignment, lab report, midterm and
final) appropriate to knowledge acquired through hypertext? Students in
computer science already know the frustration of learning programming
techniques through doing major projects but having to demonstrate their
expertise by writing on paper during a three-hour final exam. If these
traditional mechanisms are not appropriate, what mechanisms are? Should the
student create hypertext? This leads to more difficult questions.
As part of certifying a student's capabilities, we expect them to do their own
work, and we punish plagiarism harshly. We have developed guidelines for what
is considered the reasonable use of the work of others in creating work of
one's own: quotations should not be too long, precise citations should be
given, and so on. Many of these are violated by hypertext, by its very nature.
What would the new guidelines look like? This may bring the crisis in
educational testing out into the open. Aronowitz, in the essay cited earlier,
also engages this point. He writes, "Learning [...] is not exclusively, or even
principally, a matter of acquiring logically constructed, decontextualized
systems of knowledge; instead, it is a matter of the ability to test, on a
selective basis, the appropriateness of fixed knowledge in concrete situations."
(Back to index.)
Hypertext course materials, much more than course newsgroups, depend on
Internet access. Currently this is provided by the University in an
unrestricted fashion. At what point does the university decide that a service
is best obtained commercially? No one expects the university to pay for
long-distance phone calls or personal mail. But the Internet can provide
functions very similar to these. Some universities are already "outsourcing"
their Internet connections, requiring students to obtain commercial accounts.
As with other media (campus public space, office doors, newsgroups) there are
bound to be conflicts over differing views of Web space, whether it is public
or private, or both in a varying mix. For reasons of self-esteem and
expression, it is probably worth encouraging the development of personal Web
pages by students. But what if someone puts something on their Web page which
conflicts with the image that the university wishes to project? Battles over
the content of personal pages are likely to ensue. A reasonable compromise is
needed here: the university should take action only in extreme situations, and
do so consistently and with due process.
Students have roles as both information retrievers and providers. Navigating
the Web increases consumption of scarce resources (graphic terminals, fast
modem lines, high-speed links to outside) for uses not always directly related
to formal education. When a page in personal space becomes popular, some of the
resources are consumed for the benefit of outsiders. Who should pay for them?
File space was historically precious and so we accept restrictions on it; there
is not much fuss about quotas on file space as long as it is sufficient for
course work. I was assured by systems people at the talk I gave on this subject
that drastic improvements in bandwidth are just around the corner but that
processing power will be the new bottleneck. It may become inevitable that Web
access, like file space, will be subject to individual quotas.
In order to make these quotas large enough, students will have to come up with
compelling reasons why they should be provided with such access. The excuse
"it's part of our education" may be too vague, just as the reason "we're
teaching you to think" is no longer good enough to alleviate students' concern
about curricular relevance.
If students are charged for Web access, should it be part of tuition, or an
ancillary fee? In Ontario, previous governments have struck down attempts to levy
separate computing fees, but the new Tory government has a "relaxed attitude"
towards tuition fees.
(Back to index.)
Andy Warhol once said something like "In the future, everyone will be famous
for fifteen minutes." I would like to conclude this article by describing my
fifteen minutes of fame, courtesy of the Web. There are a few small lessons to
be drawn from this tale, but I will leave them to you rather than making them
explicit.
In February 1994, the Computer Science Club at the University of Waterloo held
a public forum in response to the banning of five Usenet newsgroups by the
administration. I was invited to be one of four panellists. I expanded my
remarks into a five-thousand-word article that was published in the UW Gazette
in March 1994.
Once the article had appeared and everyone had recycled their copies, I put an
HTML version of it into my personal Web space. It was two levels deep: one had
first to access my home page, then a page talking about my interests in the
social implications of computing, and only then would one find a link to the
article. I expected that only a few people would find it and bother to read
it.
The access logs on my server are browsable; I can easily pick out when, and
from where, any given page of mine is being accessed. To my
surprise, that page started getting a large number of accesses (more than 800 a
month) in April and May 1995. The hits were coming from all around the world,
from Korea, Chile, Czechoslovakia, Alaska. None of these people were accessing
my main home page; they were jumping directly to the article, and only to the
article.
My conclusion was that someone's Web page was pointing at my essay, with a
recommendation to read it. There seemed to be no way to find out who was making
this recommendation. I did two things in response. First, I added a link at the
beginning of the essay back to my home page (previously, I had thought this
unnecessary, since I couldn't imagine anyone getting there without going
through that page). Second, I looked at the access logs to see if I could
identify the individuals requesting the essay. Although their home machines
were identified, most people were listed with user ID "unknown". But a few were
not. I sent electronic mail to those few people, asking them how they had found
my essay.
I continued to monitor the access logs. Most people did not follow the back
link to my home page. Of those who did, most went to look at pictures of my
second child (born in March 1995) before leaving my Web page.
I finally received enough e-mail to figure out the reason for all those
accesses. Netscape is one of the most popular Web browsers, accounting for more
than half of all Web accesses in spring 1995. There was a row of buttons on the
version current at that time; one was labelled Net Search. Clicking it brought
up a page created by the developers of Netscape which talked about various
methods of searching the Web. The page contained one form which allowed a free
submission of a search request to InfoSeek, a commercial search service that
was offering a sample of ten hits in response to any search string, in order to
solicit business.
Using the word "newsgroups" gave one ten pages using that word, with a promise
of two hundred more when the appropriate fee was paid. My essay was page #2.
In mid-May 1995, InfoSeek did something (perhaps they updated their database),
and the hits stopped abruptly. I thought my fifteen minutes were over. But in
June 1995, they started again, and they continue.
I suspect that most of those accesses are due to people trying to
figure out how to read newsgroups through Netscape; I doubt that many of them
are reading my five-thousand-word essay. If you have reached this far, you have
done better than they, and I thank you for your attention.
(Back to index.)
- Walter Benjamin. Illuminations. Schocken, 1978.
- Jay David Bolter. Writing Space: The Computer, Hypertext, and the
History of Writing. Lawrence Erlbaum Associates, 1991.
- Richard A. Lanham. The Electronic Word: Democracy, Technology, and
the Arts. University of Chicago Press, 1993.
- Theodor H. Nelson. Computer Lib/Dream Machines. 1974, reprinted
Microsoft Press 1987.
- Myron C. Tuman (ed.). Literacy Online: The Promise (and Peril) of
Reading and Writing with Computers. University of Pittsburgh Press,
1991.
(Back to index.)