Panels and Perspectives

This portion of the conference is usually the least well represented in the proceedings, as it unfolds live and in real time and depends as much on audience participation as on the speakers' presentations. This section of the electronic proceedings attempts to capture some of what transpired during the panels and perspectives sessions. [Documentation is incomplete, as not all panelists submitted summaries.]

Table of Contents


Panel--Business and Organizational Issues in Hypertext Applications

Ian Ritchie, British Computer Society (moderator)
Robert J. Glushko, Passage Systems
Kaj Grønbæk, University of Aarhus, Denmark
Steve Poltrock, Boeing Information & Support Services

Abstract: The panel that had been scheduled for this slot was forced to cancel, so a willing group of pinch-hitters was asked to fill in. A tried and true hypertext panel is one with plenty of controversy and discussion, where the participants spend most of their time interacting with each other and the audience, and where views may be espoused that are a bit exaggerated in order to highlight different approaches, encourage discussion, and, most of all, entertain the audience! The four participants, all familiar with the world of industry and its intersection with the world of research and practical hypertext, each gave a brief introduction representing their point of view. A heated discussion then followed, involving both panelists and audience.

Glushko: I have worked for nearly twenty years in the "hypertext business" -- first as a researcher, then as an applications developer, and more recently as a consultant, a manager of consultants, and as a co-founder of a company that helps organizations make the transition to online publishing, hypertext, and SGML. I have seen many projects and organizations succeed (and a few not succeed), and have identified from this experience six factors that are good predictors of success in these applications.

No one of these factors is sufficient, and neither are all six factors strictly necessary, but taken together they define a kind of composite case study or template for a successful migration to online publishing, hypertext, and SGML.

The factors are as follows:

If there is one theme that cuts across all these factors, it is that successful projects have a broad end-to-end and organizational perspective that focuses on the people involved rather than the technology they use.

The ideal result of a project to adopt hypertext or SGML is a coherent end-to-end system: an integrated set of authoring, conversion, validation, indexing, delivery, and database management software that enables an organization to meet all its requirements for creating, managing, and disseminating information. It is natural to view this system in terms of the tools that comprise it, but this technological perspective misses much of what is essential to understanding and ensuring a successful adoption.

The perspective I advocate requires that the organization carefully assess the impact of the end-to-end system on the people who work with it. What are the costs and benefits? Are these costs one-time, incurred only during the transition to the new system, or are they recurring? Are the existing skills of the people involved sufficient for them to carry out their new tasks, or do they need additional training? Is there adequate management support to sustain the project during its start-up phase, before the promised benefits begin to appear? These kinds of questions have nothing to do with link types or databases or SGML syntax, but if they are not answered the project will fail.

Poltrock: Steve Poltrock discussed the effects on corporate organizations of both top-down and bottom-up introduction of hypermedia. Hypermedia has become nearly ubiquitous in large companies, and its spread has caused many organizational changes. Companies deliver many large documents (such as airplane maintenance manuals) in SGML format so they can be viewed as hypermedia documents. The links in these documents cannot be made by hand; they must be generated automatically by programs that build on a representation of the information and document structure. Simply developing the capability to produce such documents requires a careful analysis and redesign of the information production process, and corresponding changes in the organization supporting those processes. When hypermedia is an essential part of a business's deliverables, decisions made at high levels of an organization will drive the necessary technological and organizational changes required to implement it. There is no problem "convincing" managers and groups to use it.
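In miniature, such structure-driven link generation might look like the following Python sketch. The element names, the "partno" attribute, and the use of XML as a stand-in for SGML are assumptions for illustration; real maintenance-manual DTDs and link compilers are far more elaborate.

    # Sketch: derive links from document structure instead of authoring
    # them by hand. Every <partref> is linked to the <part> element that
    # defines the same part number. (Hypothetical markup, not a real DTD.)
    import xml.etree.ElementTree as ET

    MANUAL = """
    <manual>
      <task id="t1">Inspect <partref partno="P-100"/> for wear.</task>
      <part partno="P-100" id="p100">Hydraulic pump</part>
    </manual>
    """

    def generate_links(root):
        # Index link targets by part number, then resolve every reference.
        targets = {p.get("partno"): p.get("id") for p in root.iter("part")}
        return [(task.get("id"), targets[ref.get("partno")])
                for task in root.iter("task")
                for ref in task.iter("partref")
                if ref.get("partno") in targets]

    print(generate_links(ET.fromstring(MANUAL)))  # [('t1', 'p100')]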

In contrast, Web technology has entered large companies through the efforts and enthusiasm of technical people, often without the knowledge or consent of management. During its introduction, managers are often surprised to discover that information about their organizations is accessible throughout the company. They often embrace the technology enthusiastically, seeing it as an opportunity to advertise their organization's capabilities and accomplishments. The problem with this bottom-up approach to technology introduction is that it may result in too much diversity and inconsistent structures in different suborganizations. Once the technology has taken hold, planning starts moving up the organizational hierarchy again, for example to establish publication guidelines.

Grønbæk: As a university researcher I have several years' experience undertaking participatory-design-based projects, mainly together with people from engineering companies.

In one of the projects, a large EU-funded project, the aim was to investigate the potential of introducing hypermedia support for engineers in a company managing a huge bridge construction process. It was concluded from participatory design analyses and activities with the engineers and middle managers that:

Our solution to providing documentation support for these engineers is quite different from what was proposed by the other panelists. Glushko, and in part Poltrock, argued for top-down introduction of a company-wide publishing policy based on a well-defined documentation standard. However, our experience shows that we need to introduce hypermedia support such that those who have to start using new tools to create links and so on also get (a large part of) the benefits of the extra work.

In our project, it was very important for the engineers that they could continue working with their favourite tools: CAD systems, word processors, database interfaces, and the like. The heterogeneity of the documentation would make a conversion into a special hypermedia/multimedia format (e.g., HTML or SGML) unrealistic and far too inflexible for the engineers in their day-to-day work. They would simply not be motivated to move to different tools just because somebody else would get a more homogeneous representation of the documents afterwards. Their need was for dynamic support while in the middle of handling a task with critical deadlines.

A hypermedia solution that applies in this situation is an open hypermedia service such as Devise Hypermedia or similar systems, which can take advantage of existing (open) tools and enhance them with dynamic hypermedia linking capabilities. Moreover, open hypermedia systems can be introduced incrementally, bottom-up, in projects or departments starting new activities, without requiring a huge data conversion process to have taken place first. Old documents are simply linked to new material in the form in which they exist, whenever necessary for the working task at hand.
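As a rough illustration of this open-hypermedia idea (a toy sketch only; Devise Hypermedia's actual data model is considerably richer), links can be stored in an external service keyed by (document, anchor) pairs, so the documents themselves are never touched:

    # Links live outside the documents, in a service keyed by
    # (document, anchor). CAD files, word-processor documents, etc.
    # are linked without being converted or modified.
    from collections import defaultdict

    class LinkService:
        def __init__(self):
            self._links = defaultdict(list)

        def add_link(self, src, dst):
            # Register a bidirectional link between two (doc, anchor) pairs.
            self._links[src].append(dst)
            self._links[dst].append(src)

        def links_from(self, doc, anchor):
            return self._links[(doc, anchor)]

    # Link a detail in an old CAD drawing to a section of a new report,
    # without converting either document (names are hypothetical).
    svc = LinkService()
    svc.add_link(("bridge.dwg", "pylon-3"), ("report.doc", "section-2.1"))
    print(svc.links_from("bridge.dwg", "pylon-3"))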

In short, open hypermedia services provide immediately useful support for the primary day-to-day work of engineers. A lesson to draw from this for introducing documentation standards to engineers is that the accompanying publishing tools have to significantly improve the primary work practice, not create an overhead with no visible benefit.

Discussion: The discussion centered on strategies for introducing new technology into companies, ranging from top-down (Glushko) via mixed (Poltrock) to bottom-up (Grønbæk).

For example, Bob Glushko recommended that the best way to get management to buy into a hypertext solution is to involve customer needs and input. Although the panelists had different approaches to introducing sustainable technological improvements, they all agreed that the innovation had to occur at an appropriate moment for the enterprise. Grønbæk asserted that participatory design increases the motivation to use and improve the process. Glushko, on the other hand, said that without a top-down mandated vision statement, no system will ever get beyond the prototype phase. Poltrock, however, pointed out that management may not know much about technology, so the bottom-up approach is required to demonstrate the usefulness of a technology.


Panel--Visual Metaphor and the Problem of Complexity in the Design of Web Sites: Techniques for Generating, Recognizing and Visualizing Structure

Michael Joyce, Vassar College
Robert Kolker, University of Maryland (Co-chair)
Stuart Moulthrop, University of Baltimore
Ben Shneiderman, University of Maryland (Co-chair)
John Merritt Unsworth, University of Virginia

Abstract: The notion of cyberspace having no "there" has outlived its usefulness for mystification and titillation. In fact, the Internet, and the World Wide Web in particular, are quite "there," and in very concrete ways. Ignoring this concreteness may be a way of evading responsibility for conceptualizing how the Web can be used for serious and complex purposes. Our panel will consider alternatives to conventional ideas and structures and submit that the design of Web sites does not have to be limited to simple advertising vehicles or to equally simple institutional show and tell screens. We want to suggest that complexity and imagination ought not be limited by the constraints of HTML, bandwidth, or conventional wisdom, but freed by larger, more thoughtful notions of the possibilities of user interaction and hypertextuality. Proposed for discussion will be theories of metaphor through which design becomes a way of thinking about various structures and the connections between them.


Panel--The Process of Discovery: Hypertext and Scholarship

Elli Mylonas, Brown University (Moderator)
George Landow, Brown University
John B. Smith, University of North Carolina, Chapel Hill
Mark Bernstein, Eastgate Systems, Inc.
Nancy Kaplan, University of Baltimore

Abstract: We have all seen hypertext applied to teaching and publication, and certainly as an object of research in itself. Far rarer are examples of hypertext systems and documents integrated into the research process in other fields. Where are the scholars who are taking notes and organizing their thoughts and data using a hypertext system? Why do so many hypertext researchers still work with conventional word processors? Is this lack due to intrinsic problems with the systems? Or is it a problem of the scholars and researchers? Will this change in a generation? The participants will discuss these questions based on their own experience, both positive and negative, with a special focus on the use (or non-use) of hypertext as a laboratory or "sandbox" for scholarship and scientific work.

Mylonas, Introduction: Where are the scholarly hypertexts? And where is the scholarly hypertext process?

Landow: The scholarly effects and uses of hypertext begin with the fact that linking information together changes the nature of scholarly resources and the way we use them. The examples of the OED online and the similar WWW version of the Britannica demonstrate that transferring such reference works into HTML greatly increases the convenience, speed, and frequency of their use, and these examples also show that such resources somewhat blur distinctions between educational and scholarly sources. The most interesting effect of hypertext upon scholarship, however, comes with the use of hypertext to reconceive the scholarly edition, as editorial theorists have increasingly come to agree that only electronic presentations can provide the multiple authoritative texts that print cannot. The final relation of hypertext to scholarship similarly involves using this new infotech to create new forms of scholarly writing and publication, ones that will not appear simply as crude translations of print into HTML.

Smith: Smith discussed the use of the WWW as a collaborative, scholarly hypertext. First he listed some kinds of publishing that are well suited to the online medium. An example is the "guru report", which is worth reading because the author is knowledgeable and may have reliable links to other relevant sites. At the same time, the notion of the "final product" becomes less clear: it is possible to keep updating and adding to a file, so there is no particular published version.

The web is not fundamentally different from the online and hypertext systems that we had before, but its speed and ease of use make it feel qualitatively different. Even some of the problems that we have with it are old problems translated to a new medium. One such is the issue of quality: how to tell whether a web page is reliable and a good source of information. This will force people to become more sophisticated in their use of information.

Bernstein: The formidable challenges of scholarship - and the undeniable, urgent importance of discovering answers to the questions and problems that plague us and our society - can lead us to imagine that the scholar's tools must be ponderous, elaborate, and complex. This is wrong; what we need are not weightier systems with more polish and more features, but rather tools that are smaller and livelier, hypertextual systems that encourage spontaneity and discovery.

The docuverse has arrived more swiftly than we expected; the world wide web, for all its shortcomings, provides an incredibly rich, flexible, and accessible hypertext medium. To ask, "What does the web need?" is now impertinent. The web is like the weather; it is ubiquitous and largely independent of our whims. Enjoy it, talk about it, and if it is sometimes unsatisfactory, let us build umbrellas. We have witnessed the Halasz transition, through which hypertextuality becomes so widespread that it grows increasingly invisible.

What we, as scholars, need next are facilities and environments that support smaller, livelier, and more personal tools - personal information gardens that help map, reinterpret, and (if necessary) subvert the myriad elements of the docuverse. Smaller tools are easier to build; if we want better tools, we will be able to build them ourselves. Livelier tools can have opinions and attitudes, where massive systems must appear bland and neutral. Information farming promotes discovery by encouraging us to view information from fresh, unanticipated, and even playful perspectives.

Finally, progress in hypertextual tools to promote discovery depends on our finding the scholarly will to face facts. We need to read actual hypertexts with care and thought; we can no longer accept a criticism based on expectations of what hypertext literature might be. We need to learn once more to evaluate accomplishment by understanding the scholarship, not by counting citations or measuring media coverage or taking popularity polls. And the recent passage of the U.S. Communications Decency Act threatens to enshrine a rhetorical gambit of a Republican presidential candidate - a gambit more concerned with exploiting racial antagonism than with computers or decency - as a permanent obstacle to the free interchange of ideas and to the docuverse we all have struggled to create.


Panel--Things Change: Deal with it! Versioning, Cooperative Editing and Hypertext

Wojciech Cellary, The Franco-Polish School of New Information and Communication Technologies
David Durand, Boston University (Chair)
Anja Haake, GMD-IPSI
David Hicks, GMD-IPSI
Fabio Vitali, University of Bologna
James Whitehead, University of California, Irvine

Abstract: A document that is in active use is generally one that is changing. Version control provides one way to control the disruptive effects of change without the worse solution of preventing or obstructing it. This panel will examine the relevance and problems of version control, with an emphasis on the topic of collaboration support. Despite its long history in the hypertext community (usually as something to be added in the future), the topics of shared editing and revision control remain complex, controversial and frequently misunderstood. Now that a really large public hypertext has come into existence, the issues of long-term maintenance and referential integrity are coming to the fore. The panel will give an overview of the fundamental issues, as well as a selection of arguments for and against different approaches to the issues. It builds on the perspective the presenters have gained from their own research, as well as their workshops on Hypertext and version control at ECHT '94 and ECSCW '95.

Summary by the panelists: The organizers were particularly pleased that the panel was scheduled to follow Anja Haake and David Hicks's newest paper on versioning, VerSE: Towards Hypertext Versioning Styles. This gave versioning a whole session of the conference, and allowed people to get a good view of one approach to the problems discussed generally in the panel.

The panel itself was organized around a series of contentious issues, with individuals chosen to represent differing approaches to addressing those issues. Participants presented their positions in the strongest possible terms, to highlight the points of conflict. Only briefly addressed, in the introduction to the presentations, were the substantial points of agreement among the panelists that versioning has a place in hypertext systems development for several reasons:

The five questions that were chosen for discussion by the panelists were:

Discussion: Audience participation was quite good, though as the 6:00 dinner hour approached, discussion did taper off. The audience, as indicated by their comments, tended to divide into two groups -- practitioners managing large web sites, and researchers interested in collaboration via shared hypertext. This confirmed the panelists' a priori suspicions about differing missions for the integration of versioning in hypertext systems.

Web site managers want the best practical, near-term solution to what is essentially a collaborative publication problem. These participants were quite vocal and probably made up a majority of the audience. They were very sympathetic to the proposal that current document and software management tools should be integrated into the WWW quickly to help them manage a complex and multiply constrained publication process.

There was also a more academic or experimental group in the audience that is interested in versioning as research. This group was less vocal, and seemed more interested in seeing what approaches are being tried by different research groups. Questions that we have attributed to this group tended to be about the details of a model presented by one of the panelists.

The most encouraging thing about the versioning panel from a researcher's perspective (and all the panelists shared that perspective to some degree) was that there is real interest from real users in seeing this functionality. The most disappointing thing was that this interest seemed to stop at bringing document management for the Web up to the standard used for print production; there seemed relatively little interest in more speculative research on the role of versioning in supporting collaboration and annotation.

The slides from those panelists who presented their talks from HTML documents are available on the web.


Panel--Future (Hyper)Spaces

Kathryn Cramer, Sunburst Communications
Andreas Dieberger, Georgia Institute of Technology
Cathy Marshall, Texas A&M
Tom Meyer, First Virtual (Chair)
Athomas Goldberg, New York University

Abstract: As the Internet has emerged into common consciousness, the notion of hypertext, especially as illustrated by the World Wide Web, has prospered. However, with the creation of other Internet-based media, such as MUDs and VRML, we are encountering new types of textual/narrative/hyper paradigms. These are close enough to hypertext that they can be discussed in similar terms, but they nevertheless represent something new, and are perhaps as far removed from traditional hypertext as hypertext is from flat text. The key aspects of these new forms that we will discuss include reactivity, a feeling of presence, shared spaces, and a wide range of interaction.

Introduction: All the speakers on this panel explored areas beyond familiar hypertext applications and the WWW. They showed many examples of new types of navigation and of what might be prosaically called information management. This panel may be thought of as providing directions that might lead to an answer to the questions and critiques raised in the panel on Visual Metaphors.

Cramer: (from notes) Storyspace and other node-link hypertexts are unsatisfying. What is needed is something more intricate. For example, virtual reality systems might be used not simply to create 3D spaces, but rather to allow movement around an object, so it may be seen from different sides and different viewpoints. An object might remain stable, and everything else could move and change around it. Clutter is important in a hypertext. Cramer described writing with heavily ornamented nodes, which might be a series of things inscribed in one another. Her demo showed color images inscribed in black and white, where changing things are contained within constant ones.

Dieberger: Social Navigation. You may have noticed that a lot of navigation on the Web nowadays has a distinctly social character - people send URLs of interesting pages by email, people maintain hotlists of material they are interested in, and so forth. When looking for pages on a certain topic I often don't use a search engine but first check the pointer page of a friend who happens to know much about that topic. It seems that after the Web reached a critical mass, social processes kicked in that led to navigational behaviour we never observed in any earlier hypertext system. We could call this 'social navigation'.

I started to think about such issues quite some time ago, especially after Thomas Erickson (at Apple Computer) pointed out that some types of Web navigation are almost like 'using other people as intelligent agents for searching and finding' (see also his column in Communications of the ACM, January 1996).

Social navigation is indeed happening on the Web, and it is happening even though the tools we have do not really support it. It is incredibly bothersome to cut and paste URLs into and out of email. But I don't want to see URLs at all - I just want to point out information in a natural way, as natural as handing a piece of paper to a friend.

To design future hyperspaces we have to be aware of the trend towards social navigation, and we have to build systems that help us share information. Future hyperspaces will not be empty data warehouses where you never meet other people. Instead they should be populated by people actively sharing and exchanging information in a natural way. Cyberspace will then turn into a space with meaning, a 'place'.

Goldberg: (from notes) Drawing on his work with video avatars, which can react to external stimuli based on their personalities, Athomas Goldberg discussed future worlds which could respond to the person interacting with them. His examples showed the beginnings of the next generation of interactive video: shared spaces where people can interact with each other and with the avatars. The avatars are rule-based systems. The most spectacular part of this presentation, and the most persuasive, was the videos.

Marshall: (from notes) Cathy Marshall's presentation was itself a kind of spatial hypertext. As in Norbert Streitz's opening keynote, the slides she showed provided a commentary on, and even a dialog with, her talk as she spoke it. She started by showing slides of real spaces and using them to illuminate virtual spaces. She discussed document-based hypertext, which has a node and link structure; browser-based hypertext, which is defined by views and maps; and spatial hypertext, which has no links, but whose relationships are defined by proximity. In the last case, links and nodes become one. She then pointed out the prevalence of architectural metaphors, and the "boxiness" of the views we have of hypertext. This exceptionally Euclidean and architectural language is not appropriate to how hypertext and virtual space actually start out--featureless. The space then has to be farmed by squirrels to start to take shape [NB a metaphor that had come up several times during the conference].
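A minimal sketch of the spatial idea, assuming relatedness can be read off proximity alone: objects placed near one another on a canvas are treated as implicitly related, with no explicit link objects anywhere. The coordinates and distance threshold below are invented for illustration; real spatial hypertext systems use much richer visual cues.

    # Spatial hypertext sketch: no link objects at all; relationships
    # are inferred from how close items sit on a 2D canvas.
    from math import dist  # Python 3.8+

    nodes = {"note-a": (10, 12), "note-b": (14, 15), "note-c": (80, 90)}

    def implicit_groups(nodes, threshold=10.0):
        # Pairs of items close enough to be read as related.
        names = sorted(nodes)
        return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
                if dist(nodes[a], nodes[b]) <= threshold]

    print(implicit_groups(nodes))  # [('note-a', 'note-b')]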

Marshall concluded with a series of questions that meshed very closely with Andreas Dieberger's presentation. How do we populate our spaces with other people? What happens when they scale? What do we do about time? Virtual spaces are timeless. Space outside the computer isn't seamless with space inside it. How do virtual spaces interact with real spaces?

Discussion: Although there was little time for discussion, there were a lot of people who were eager to ask questions. This panel was one of the two concluding sessions, and it brought together many of the threads, doubts, and questions that were raised in several other sessions during the conference. A great deal of it focussed on ideological issues, such as the ideology built into Goldberg's rule-based systems and the unresolved tension between naturalistic spaces and Cramer's surrealistic, disruptive ones. Is it possible for us to live in a virtual world of happy television, which was what Michael Joyce referred to in his presentation? Finally Mark Bernstein asked the defining question: "It takes a heap of living to make a house a home. How much time is needed to make a space a place?"


Perspectives with Commentary: Evaluation

Gary Marchionini, University of Maryland (Chair and Commentator)

Abstract: Evaluation is one of the most important aspects of application system design. This is especially so for hypertext systems and documents, since they are user centered at a fundamental level. This is apparent in the basic hypertext model of user-controlled navigation. These perspectives will focus on different aspects of evaluating hypertexts, with a focus on the integration of multimedia components into a hypertext system.

Marchionini: Evaluation in this session is not considered to be product testing but rather an integral part of the design process and part of a larger process of research. A classic example of evaluation research is the SuperBook studies (Egan et al.), which not only provided formative evaluation that improved the system but also informed our understanding of information seeking in electronic environments and of how people use electronic texts. Another example is the work over the past seven years in evaluating the Perseus Project (e.g., Marchionini & Crane). The primary aims from the start were to understand how this hypermedia corpus affected teaching and learning processes. As a result we now have a systematic, longitudinal case study of systemic change.

Egan, D. E., Remde, J. R., Gomez, L. M., Landauer, T. K., Eberhardt, J., & Lochbaum, C. C. (1989). Formative design evaluation of SuperBook. ACM Transactions on Office Information Systems, 7(1), 30-42.

Marchionini, G., & Crane, G. (1994). Evaluating hypermedia and learning: Methods and results from the Perseus Project. ACM Transactions on Information Systems, 12(1), 5-34.

Blair Nonnecke and Jenny Preece, South Bank University
"Video-Based Hypermedia: Guiding Design with Users' Questions"


Mini Perspectives on the Web

Norbert Streitz, GMD-IPSI (Co-Chair and Commentator)
Steven J. DeRose, Electronic Book Technologies (Co-Chair and Commentator)

Abstract: Despite its limitations, the WWW is the largest global hypertext laboratory that has ever existed. Hypertext researchers were previously limited to creating their own hypertext docu-islands. Links to other hypertexts were not easy to make, nor was it easy to disseminate individual hypertexts. Unlike the earlier generation of research systems, the WWW is a real world publishing medium on a large scale, and this is mostly due to its simple model. The presenters of this set of perspectives will discuss experiences using the WWW for hypertext research and publication. They also propose extensions to the WWW, based on their experiences creating WWW information and in the context of previous hypertext research.

Michael Bieber, New Jersey Institute of Technology
"Crafting the Electronic Edition of the _Communications of the ACM_ August 1995 Special Issue on Hypermedia Design"

This perspective discusses the process, issues and lessons learnt in designing and implementing the World-Wide Web version of the August 1995 special issue of the Communications of the ACM on hypermedia design. The printed journal version contains an introduction, two opening statements (about two pages each), ten sidebars (up to a page in length), and six 5000-word papers, two of which have inserts of up to 1000 words.

We were inspired to present hypertext *through* hypertext by the electronic versions of the July 1988 special issue of the Communications of the ACM and by Nielsen's Hypertext'87 Trip Report. While each is highly enjoyable and a great contribution in its own right, students using them in our hypermedia course have identified shortcomings concerning disorientation and link specificity, which we tried to overcome in our design. We employ semantic link labels as a weapon against both, allowing readers to see a link's purpose and destination before choosing it. This provides a level of *departure rhetoric* for readers.

Thus we had several goals for producing an electronic version of the August 1995 special issue on Hypermedia Design. First, we believe that experiencing papers about hypertext design within a hypertext environment helps readers understand the concepts better. Second, we believe that the paper links such as "(see XYZ's paper in this issue for more details)" typically used by authors -- especially within a tight corpus such as a special issue -- are often vague and rarely point to exact positions within the destination paper. On-line cross-reference links would better reinforce the special issue's cohesion. Third, we believe that many authors are not using the hypertext features of the WWW to their full advantage. To demonstrate some of the ideas and functionality the *concept* of hypertext could provide the WWW community, we wanted an environment for experimenting with semantic link labeling on the World-Wide Web.
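A hedged sketch of what semantic link labelling might look like when generating HTML pages: each link carries a type and a short destination summary, rendered so that readers can judge the link before following it. The link types and the rendering below are illustrative assumptions, not the markup the special issue actually used.

    # Render a link whose purpose and destination are visible up front.
    # Link types ("elaboration", "counterargument", ...) are illustrative.
    from html import escape

    def labelled_link(href, text, link_type, dest_summary):
        title = f"{link_type}: {dest_summary}"
        return (f'<a href="{escape(href, quote=True)}" '
                f'title="{escape(title, quote=True)}">'
                f'{escape(text)} [{escape(link_type)}]</a>')

    print(labelled_link("design.html#sec3", "design reuse",
                        "elaboration", "detailed discussion, section 3"))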

The perspective described the process undertaken to develop the issue's semantic links, presented some of the technical details of generating the WWW pages incorporating them, and covered additional issues that others undertaking similar projects may wish to consider.

Howard Besser, University of Michigan
"Hypermedia in Support of Distant-Independant Education"

Besser described the outcomes of, and the problems encountered in, a distance learning project. The goal of the project was to create a single virtual classroom with students at several universities, functioning as if it were one class. The classrooms were connected by ISDN, so that there were real-time interactions, both on video and on the computer screen. Students could also communicate using telnet and the WWW.

The initial problems encountered in this project were technological, related to WWW creation and management. The main solution is to come up with guidelines in time for them to influence the development process. Some further problems, primarily those of managing collaborative work easily, would be solved by better authoring and system software. Another set of problems was less tractable: issues of privacy and copyright. Much student work was immediately made public so it could be shared by the class. It is unclear whether this is an invasion of privacy, or how it affects students to have to critique and be critiqued in public. The course may experiment with aliases for certain types of public postings.

In conclusion, the worst problems were those that had to do with the accessibility and stability of the web server. The server was central to the course, since students could find assignments and other necessary material there. It could not be counted on either to provide a fast connection at any time of day or to be up at all. This raised the students' frustration level and made it difficult for them to work together.

Paul De Bra and Frank Dignum, Eindhoven University of Technology
"Collaborative Hypertext Authoring in the Web"

Gary Hill, Les Carr, Dave De Roure and Wendy Hall, University of Southampton
"The Distributed Link Service: Multiple Views on the WWW"

Open hypertext systems rely on the existence of a link service, so that links may be kept external to documents. This makes them easier to maintain and update, and it empowers readers by making it easier to select among links, and even to add their own, without affecting any documents. Finally, by decoupling the links from the data, it is possible to provide alternate views and support alternate uses. The WWW is currently the largest shared hypertext system in use; however, its links are embedded in its data. Hill and his colleagues have shown how a link server may be used to add the advantages of an open hypertext system to the WWW.

Hill then described the implementation of the link service, which runs as CGI scripts on a WWW server. The CGI scripts learn what was selected in which document, and then query a link database in order to display a list of all possible links, or of all links that meet certain criteria. He also described the client interface, which requires a utility that runs alongside Netscape to transmit the information that the link service needs.
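In outline, such an endpoint could look like the Python sketch below: it reads the source document and the selection from the query string, looks them up in a link database, and returns the matching links as an HTML list. The parameter names and the in-memory 'database' are assumptions for illustration, not the actual Distributed Link Service interface.

    #!/usr/bin/env python3
    # Toy link-service CGI endpoint: resolve (document, selection) to a
    # list of link targets held outside the documents themselves.
    import os
    from urllib.parse import parse_qs

    # A real service would use a persistent database and also support
    # "generic" links that match the selection in any document.
    LINK_DB = {
        ("intro.html", "open hypertext"): ["http://example.org/ohs-survey.html"],
    }

    def main():
        query = parse_qs(os.environ.get("QUERY_STRING", ""))
        doc = query.get("doc", [""])[0]
        sel = query.get("sel", [""])[0]
        print("Content-Type: text/html\r\n")
        print("<ul>")
        for url in LINK_DB.get((doc, sel), []):
            print(f'<li><a href="{url}">{url}</a></li>')
        print("</ul>")

    if __name__ == "__main__":
        main()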

The Distributed Link Service has other advantages over standard web browsers. It can point to and from non-WWW documents, it can be customized to provide advanced hypertext functionality, and can also be used as an authoring tool, from whose private databases a public one can be compiled.

A complete version of the paper on which this perspective presentation was based can be found at http://wwwcosm.ecs.soton.ac.uk/~gjh/perspective.html.

Discussion: Discussion centered largely on the technical issues of linking, whether using the link service or trying, as Bieber did, to type the links. Bieber explained that his group had not experimented with visibly differentiated links because it was too difficult to build different browsers; he felt that a good reader would get semantic context from the surrounding text. The question of scaling also came up: both Bieber and Besser felt that these systems will not scale well. Gary Hill was also asked questions about the details of his system.