Convergence to the Information Highway

Brian R. Gaines

Knowledge Science Institute
University of Calgary
Alberta, Canada T2N 1N4
gaines@cpsc.ucalgary.ca
http://ksi.cpsc.ucalgary.ca/KSI/

Abstract: The convergence of telecommunications and computing technologies and services into a new medium offering integrated services through digital networks was predicted in the 1970s and is beginning to have major social and commercial impacts in the 1990s. This article analyzes the technological infrastructure of convergence to an information highway, tracing the origins of the concept, the false starts, the origins and growth of the Internet and World Wide Web, convergence as a substitution process, and the learning curves of the technologies involved.

1 The Path to the Information Highway

The motivation for an "information highway" was expressed in 1937, just prior to the advent of computer technology, when Wells was promoting the concept of a "world brain" based on a "permanent world encyclopaedia", a social good that would give universal access to all of human knowledge. He remarked:

"our contemporary encyclopaedias are still in the coach-and-horses phase of development, rather than in the phase of the automobile and the aeroplane. Encyclopaedic enterprise has not kept pace with material progress. These observers realize that the modern facilities of transport, radio, photographic reproduction and so forth are rendering practicable a much more fully succinct and accessible assembly of facts and ideas than was ever possible before." (Wells, 1938)

Bush, a technical advisor to Roosevelt, published in 1945 an article in Atlantic Monthly which highlighted problems in the growth of knowledge, and proposed a technological solution based on his concept of memex, a multimedia personal computer:

"Professionally, our methods of transmitting and reviewing the results of research are generations old and by now are totally inadequate for their purpose...The difficulty seems to be not so much that we publish unduly in view of the extent and variety of present-day interests, but rather that publication has been extended far beyond our present ability to make real use of the record." (Bush, 1945)

The world brain has continued for over fifty years to provide an active objective for the information systems community (Goodman, 1987), and memex is often quoted as having been realized fifty years later through the World Wide Web (Berners-Lee, Cailliau, Luotonen, Nielsen and Secret, 1994).

The advent of time-shared conversational computing (Gruenberger, 1967; Orr, 1968) in the early 1960s made it possible to begin addressing these early visions by providing a national information system (Rubinoff, 1965) or a computer utility (Parkhill, 1966). Martin's 1978 model of a "wired society" comes closest to forecasting many aspects and impacts of the information highway as it is envisioned today:

"In the past, communications networks have been built for telephony, telegraphy, and broadcasting. Now the technology of communications is changing in ways which will have impact on the entire fabric of society in both developed and developing nations. In the USA the technology revolution coincides with a change in the political and legal structure of the telecommunications industry; the combination is explosive. Some countries will take advantage of the new technology; some will not. Some businessmen will make fortunes. Some companies will be bankrupted." (Martin, 1978)

However, attempts to realize the wired society at the time of Martin's seminal work were presented with greatly exaggerated expectations. For example, in 1979 Fedida and Malik presented Viewdata as having the potential for major social and economic impacts:

"We believe that Viewdata is a major new medium according to the McLuhan definition; one comparable with print, radio, and television, and which could have as significant effects on society and our lives as those did and still do. Like them, it may well lead to major changes in social habits and styles of life, and have long-lasting as well as complex economic effects." (Fedida and Malik, 1979)

Other books of the same period describe the commercial, social and educational potential of Viewdata and interactive Videotex in similarly glowing terms (Sigel, 1980; Woolfe, 1980; Chorafas, 1981; Winsbury, 1981), but the potential never materialized, although systems such as Minitel in France may be seen as primitive ancestors of the information highway.

It was not until the 1990s and the advent of the World Wide Web that a system with many of the attributes of Wells' world brain and Bush's memex came into being. The web makes available linked and indexed interactive multimedia documents, so that it emulates the printed publication medium but also goes beyond it in offering sound, video and interactivity. Its founders describe it in terms reminiscent of Wells' vision:

"The World Wide Web (W3) was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project" (Berners-Lee et al., 1994)

However, even an active and interactive encyclopaedia is an inadequate model of existing Internet activities because it neglects the integration of human-to-human discourse. Newsgroups and list servers support mutual-interest communities where questions are answered not by consulting an encyclopaedia but by consulting other people. This corresponds to another prophecy from the early days of time-shared computing:

"No company offering time-shared computer services has yet taken advantage of the communion possible between all users of the machine...If fifty percent of the world's population are connected through terminals, then questions from one location may be answered not by access to an internal data-base but by routing them to users elsewhere--who better to answer a question on abstruse Chinese history than an abstruse Chinese historian." (Gaines, 1971)

Wells and Bush described the implementation of their visions in terms of the media technologies of their time and did not foresee the advent of television and its impact as a source of knowledge (Bianculli, 1992). They also neglected human discourse as another significant source of knowledge, human society as a living encyclopaedia. The envisioned information highway may be seen as an extended "world brain" accessed through the personal computer as a "memex" that integrates all available media and means of discourse to give active presentations of, and interactive access to, all of human knowledge. The current facilities of the Internet and World Wide Web provide a primitive implementation of the highway.

2 The Growth of the Internet and World-Wide Web

The Internet and the World Wide Web both typify technologies that come into being through serendipity rather than design, in that the intentions and aspirations of their originators had little relation to what they have become. As the development of the electronic digital computer can be attributed to needs and funding in the 1940s arising out of the Second World War, so can that of the Internet be attributed to needs and funding in the 1960s arising out of the Cold War. The Eisenhower administration reacted to the USSR launch of Sputnik in 1957 with the formation of the Advanced Research Projects Agency to regain a lead in advanced technology. In 1969 ARPANET (Salus, 1995) was commissioned for research into networking. By 1971 it had 15 nodes connecting 23 computers, and by 1973 it had international connections to the UK and Norway. In 1984 the National Science Foundation funded the creation of a national academic infrastructure connecting university computers in a network, NSFNET. In 1987 the net had grown to such an extent that NSF subcontracted its operation to Merit, and in 1993/1994 the network was privatized.

The number of computers connected through the Internet has grown from some 28 thousand at the beginning of 1988, to over 9 million at the beginning of 1996. Figure 1 shows data plotted from the Internet Domain Surveys undertaken by Network Wizards (NW, 1996).

Figure 1 Growth in number of hosts on the Internet
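
The growth rate implied by these endpoint figures can be checked with a little arithmetic. The following is a minimal illustrative sketch in Python using the approximate host counts quoted above (the variable names are ours):

    import math

    # Approximate Internet host counts quoted above (Network Wizards surveys).
    hosts_1988 = 28_000        # beginning of 1988
    hosts_1996 = 9_000_000     # beginning of 1996
    years = 8

    # Compound annual growth factor and implied doubling time over the period.
    growth_factor = (hosts_1996 / hosts_1988) ** (1 / years)
    doubling_time = math.log(2) / math.log(growth_factor)

    print(f"annual growth factor: {growth_factor:.2f}")   # about 2.06, i.e. roughly 100% a year
    print(f"doubling time: {doubling_time:.2f} years")    # a little under one year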

The World Wide Web was conceived by Berners-Lee in March 1989 (CERN, 1994) as a "hypertext project" to organize documents at CERN in an information retrieval system (Berners-Lee and Cailliau, 1990). The design involved: a simple hypertext markup language that authors could enter through a word processor; distributed servers running on machines anywhere on the network; and access through any terminal, even line mode browsers. The web today still conforms to this basic model. A poster and demonstration at the ACM Hypertext conference in December 1991 announced the web to the computing community. However, major usage only began to grow with the February 1993 release of Andreessen's Mosaic for X-Windows. Whereas the original web proposal specifically states it will not aim to "do research into fancy multimedia facilities such as sound and video" (Berners-Lee and Cailliau, 1990), the HTTP protocol for document transmission was designed to be content neutral and as well-suited to multimedia material as to text.

In March 1993 the web was still being presented (Berners-Lee, 1993) as primarily a hypermedia retrieval system, but in November that year a development took place that so changed the nature of the web as to constitute a major new invention in its own right. Andreessen issued NCSA Mosaic 2 using tags to encode definitions of Motif widgets embedded within a hypermedia document, and allowed the state of those widgets within the client to be transmitted to the server. Suddenly the web protocols transcended their original conception to become the basis of general interactive, distributed, client-server information systems. This change was again serendipitous since the original objective of the design had been to enable the user to specify retrieval information in a dialog box that was embedded in a document rather than in a separate window. The capability of the user to use a web document to communicate with the server is the basis of commercial transaction processing on the web.
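
The essence of this mechanism is that the state of widgets filled in by the user within a document is encoded by the client and transmitted back to the server, which replies with a further document. A minimal sketch of the idea in Python, using the standard urllib module and a purely hypothetical server address and form fields:

    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Hypothetical form state as a user might enter it into widgets in a web document.
    form_state = {"query": "world brain", "maxhits": "10"}

    # The client encodes the widget state and sends it to the server as part of a
    # request; here a GET with a query string (a POST body works in the same way).
    url = "http://example.org/search?" + urlencode(form_state)

    try:
        with urlopen(url, timeout=10) as response:
            print(response.read()[:200])    # the server replies with a new document
    except OSError as error:
        print("request failed:", error)     # the placeholder server has no /search service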

The growth rate of overall Internet traffic is some 100% a year. However, web traffic was growing at some 1,000% a year when last accurately measurable in 1994. The growth of the web relative to all the other services is apparent if one plots the proportion of the data accounted for by each service. Figure 2 shows the proportion of FTP, web, Gopher, News, Mail, Telnet, IRC and DNS traffic on the NSFNET backbone from December 1992 through April 1995. It can be seen that the proportions of all services except FTP and HTTP remain relatively constant throughout the period, declining slightly towards the end. However, the proportion attributable to FTP decreases while that due to the web HTTP protocol increases, becoming greater than that of IRC in October 1993, Gopher in March 1994, mail in July 1994, news in November 1994, and FTP in March 1995. This corresponds to the basic web protocol becoming the primary carrier of net data traffic, with a 25% and growing share when last measurable.

Figure 2 Proportion of FTP, web (HTTP), Gopher, News (NNTP), Mail (SMTP), Telnet, IRC and DNS traffic on the NSFNET backbone 1992-1995

Experience to date makes it problematic to characterize the commercial potential of the information highway, its actual social impact, and whether the protocols, technologies, carriers and equipment on which the present implementation is based are an adequate basis for future development. The information systems industry is well-known for over-ambitious expectations of technologies and their impact--a decade ago `expert systems' were going to revolutionize industry and create a new five billion dollar industry--they did neither. The `video phone' and `voice typewriter' have been on the horizon for over thirty years but have found no market and no adequate technology, respectively. It is eminently reasonable to be suspicious of claims that the information highway will be the driver of the next economy, and that the technology to implement it is practically in place.

The remaining sections present the substitution and learning processes of the information technologies underlying the information highway in order to provide a basis for forecasting the time scales of convergence and its socio-economic impact.

3 Convergence as a Technology Substitution Process

Telecommunications and computing technologies have common roots in electronics device technology, exploiting it to provide systems and services in similar ways. The early histories of the typewriter, the phonograph, the telephone, the movie, the computer and radio and television broadcasting share much the same timelines from the late nineteenth through the twentieth century. This common background is based on a technological progression from mechanical devices through vacuum tubes to the transistor and the integrated circuit containing a number of transistors (Braun and Macdonald, 1978). The transition from mechanical to electronic operation leads to increased reliability and greatly increased speed of operation. However, all of the applications above have also been involved in three other transitions of equal significance. The first is the transition from analog electronic signals directly representing the continuous variables involved to digital electronic signals encoding those variables. The second is the transition from special-purpose computing architectures, where the circuits are designed for a particular task, to general-purpose computing architectures, which can be programmed for a particular task. The third is the transition from programs as fixed circuits to programs represented as variable digital data, allowing general-purpose machines to be simply reprogrammed for different tasks.

Whereas the transition from mechanical to electronic devices may be seen as a simple substitution of a faster, more reliable technology, and that from analog to digital may be seen as a simple substitution of a more precise and reliable technology, the profound impact of the transition to programming was serendipitous, because the basic reason for it was to improve reliability by using fewer electronic components. In the early 1940s the concept emerged that the use of unreliable electronic components could be minimized by using a sequence of instructions to carry out a complex operation (Mauchly, 1942).

Programming gives computers the fundamental property of being, in some sense, universal machines, since they can be programmed to emulate the calculation performed by any other machine. This means that, in principle, any special-purpose electronic circuit can be replaced by a general-purpose computer together with a program emulating the operation of the special-purpose circuit. For the electronics industry this makes possible economies of scale through very high-volume mass production of computer chips that can be functionally specialized to a wide range of applications as required. For customers it makes it possible to purchase one general-purpose system that can be used for a variety of functions, some of which may be unknown at the time of purchase.

The economic logic for substituting special-purpose systems with general-purpose programmed systems is straightforward. At a given state of the art in circuit technology the counter-balancing considerations are:-

  1. Negatively, the programmed system is generally slower (by a factor of between 10 and 100) than the special-purpose system-- a potential decrease in performance.
  2. Negatively, the programmed system generally uses more storage than the special-purpose system-- a potential increase in cost.
  3. Positively, the programmed system is more widely applicable and subject to mass production-- a potential decrease in cost.
  4. Positively, the programmed system is multi-functional-- a potential decrease in cost and improvement in capabilities.

Consideration 1 is a fundamental limitation since, if the application requires processing speeds not feasible with existing computer technology, then substitution cannot occur. This is why digital television is a late arrival: the data rates required for television signal processing were too high for mass-market computer products until the mid-1990s. Consideration 2 is also a limitation in television applications such as digital editors, where the cost of random-access digital storage is still high compared with analog tape. However, as technology improves and these limitations cease to apply, the logic of consideration 3, mass production, and consideration 4, versatility, leads to inexorable substitution.
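
A toy calculation, with invented figures purely for illustration, shows how this logic plays out: once the programmed system is fast enough for the task, its lower mass-produced unit cost spread over several functions outweighs the speed and storage penalties.

    # Invented, purely illustrative figures for the four considerations above.
    REQUIRED_SPEED = 3.0    # processing rate the application actually needs

    special_purpose = {"unit_cost": 40.0, "functions": 1, "relative_speed": 100.0}
    general_purpose = {"unit_cost": 15.0, "functions": 4, "relative_speed": 5.0}

    def cost_per_function(system):
        """Effective cost per function, provided the system meets the speed requirement."""
        if system["relative_speed"] < REQUIRED_SPEED:
            return float("inf")     # consideration 1: substitution cannot occur
        return system["unit_cost"] / system["functions"]   # considerations 3 and 4

    print("special-purpose:", cost_per_function(special_purpose))   # 40.0
    print("general-purpose:", cost_per_function(general_purpose))   # 3.75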

The diffusion of computer applications may be seen as a process of substitution of electronics for mechanics, followed by a process of substitution of general-purpose programmed electronics for special-purpose electronics. Special-purpose circuits remain essential for applications where the rate of information processing exceeds that possible in a low-cost general-purpose computer, or where the cost of transforming the signals involved to and from digital form exceeds the cost savings of using a general-purpose computer. Convergence may be seen as such a substitution taking place in the consumer telecommunication markets, with telephones, radio, television, VCRs, cameras, and so on becoming integral functions of the multimedia personal computer.

This analysis may suggest that convergence is a simple process of technological substitution. However, analysis of convergence as substitution must also take into account the "emergent phenomena" that arise as the underlying technologies interact and support one another.

The next section presents a model of how such new phenomena arise through the learning infrastructure of information technology.

4 The Learning Infrastructure of Information Technology

The number of transistors on a chip has seen a 1,000,000,000-fold increase in less than 40 years, whereas other high-technology industries have typically seen less than a 100-fold performance increase in 100 years (Gaines, 1991). This improvement depends on the capacity of silicon to support minute semiconductor logic circuits, but this capacity could not have been fully exploited over 9 orders of magnitude of performance improvement without the use of the computer itself to support the design and fabrication of such circuits. This is one example of a positive feedback loop within the evolution of computers: the computer industry has achieved a learning curve that is unique in its sustained exponential growth because each advance in computer technology has been able to support further advances in computer technology. Such positive feedback is known to give rise to emergent phenomena in biology (Ulanowicz, 1991), whereby systems exhibit major new phenomena in their behavior. The history of computing shows the emergence of major new industries concerned with activities that depend upon, and support, the basic circuit development but which are qualitatively different from that development in their conceptual frameworks and application impacts; for example, programming has led to a software industry, human-computer interaction to an interactive applications industry, document representation to a desktop publishing industry, and so on.
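
Expressed as doubling times, the contrast is striking; a short sketch using the round figures quoted above:

    import math

    def doubling_time(total_growth_factor, years):
        """Years per doubling implied by a total growth factor achieved over a period."""
        return years / math.log2(total_growth_factor)

    # Transistors per chip: roughly a billion-fold increase in under 40 years.
    print(f"chips: one doubling every {doubling_time(1e9, 40):.1f} years")    # about 1.3

    # A typical high-technology industry: less than 100-fold in 100 years.
    print(f"others: one doubling every {doubling_time(100, 100):.1f} years")  # about 15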

Each of these emergent areas of computing has had its own learning curve (Linstone and Sahal, 1976), and the growth of information systems technology overall may be seen as the cumulative impact of a tiered succession of learning curves, each triggered by advances at lower levels and each supporting further advances at lower levels and the eventual triggering of new advances at higher levels (Gaines, 1991). It has also been noted in many disciplines that the qualitative phenomena during the growth of the learning curve vary from stage to stage (Crane, 1972; De Mey, 1982; Gaines and Shaw, 1986). The era before the learning curve takes off, when too little is known for planned progress, is that of the inventor, who has very little chance of success but continues the search based on intuition and faith. Sooner or later some inventor makes a breakthrough and very rapidly his or her work is replicated at research institutions world-wide. The experience gained in this way leads to empirical design rules with very little foundation except previous successes and failures. However, as enough empirical experience is gained it becomes possible to model inductively the basis of success and failure and to develop theories. This transition from empiricism to theory corresponds to the maximum slope of the logistic learning curve. The theoretical models make it possible to automate the scientific data gathering and analysis and the associated manufacturing processes. Once automation has been put in place, effort can focus on cost reduction and quality improvements in what has become a mature technology.
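
The stages described above correspond to positions along a logistic (S-shaped) learning curve, whose slope is greatest at its midpoint, the empiricism-to-theory transition. A minimal sketch with arbitrary illustrative parameters:

    import math

    def logistic(t, ceiling=1.0, rate=1.0, midpoint=0.0):
        """Logistic learning curve: slow start, rapid middle, saturation at maturity."""
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    # Sample the curve; the step from one point to the next (the slope) is largest
    # around the midpoint t = 0 and falls away toward breakthrough and maturity.
    previous = logistic(-7)
    for t in range(-6, 7):
        value = logistic(t)
        print(f"t={t:+d}  level={value:.3f}  increment={value - previous:.3f}")
        previous = value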

Figure 3 shows a tiered succession of learning curves for information technologies in which a breakthrough in one technology is triggered by a supporting technology as it moves from its research to its empirical stage. Also shown are trajectories marking the eras of invention, research, product innovation, long-life product lines, low-cost products, and throw-away products. One phenomenon not shown on this diagram is that the new industries can sometimes themselves be supportive of further development in the industries on which they depend. Thus, in the later stages of the development of an industrial sector there will be a tiered structure of interdependent industries at different stages along their learning curves.

Figure 3 The infrastructure of information technology

The BRETAM (Breakthrough, Replication, Empiricism, Theory, Automation, Maturity) tiered learning-curve infrastructure of Figure 3 brings together the various phenomena of convergence in an integrated model which has the potential both to explain the past and to forecast the future. Wells' vision of a world brain immediately predates the initial breakthrough that triggered the learning curves of information technology. The invention of the digital computer was triggered by the type of problem in the processes of civilization that he hoped to prevent. He foresaw a technological solution but not the technologies that actually provided it. Bush, as inventor of the differential analyzer and with his wartime knowledge of computing as Roosevelt's advisor, described memex in the context of the relevant technology. However, it was not until the 1960s that the mean time between failures of computers became long enough to make interactive use routinely possible, and it was not until the 1970s that costs became low enough for "personal computers" to be developed; these did not come into widespread use until the 1980s.

The relevant learning curves in Figure 3 are the lower four: digital electronics; computer architecture; software; and interaction. The product innovation trajectory passes through the last of these in the fourth generation, 1972-1980, leading to the premature development of Viewdata and Videotex products and to Martin's detailed forecasts of the potential for a wired society. However, the mass-market potential for wired society technology, at costs comparable to other mass media such as the telephone and television, is dependent on the cost reductions possible in the post-maturity phase of the learning curves leading to throw-away products. This trajectory passes through the interaction learning curve in the current seventh-generation era, 1996-2004, and it is this that has made the information highway economically feasible.

In projecting the BRETAM model into the future, one critical question is whether the learning curves in the lower-level technologies can be sustained. The learning curve for the number of devices on a chip has been maintained as continuing exponential growth, even though our knowledge of the underlying silicon device technology is mature, because it has been possible to continue miniaturizing the individual transistors using increasingly refined and automated production processes. Such miniaturization is now coming up against fundamental physical limits, and against the commercial consideration that there are declining economies of scale in fabricating devices with greater numbers of transistors (Ross, 1995). A Forbes survey of eleven leading industry figures in 1996 gave predictions that Moore's law would fail by the year 2005 (Forbes, 1996). However, current digital circuit technology seems sufficient to support the information highway through one or two generations, particularly given the simplicity of the computer architectures currently in use.

The analysis of product opportunities arising from the existence of the information highway involves the upper learning curves of the BRETAM model--knowledge representation and acquisition, autonomy and sociality. Knowledge representation and processing encompasses all the media that can be passed across the web, not just the symbolic logic considered in artificial intelligence studies but also typographic text, pictures, sounds, movies, and the massive diversity of representations of specific material to be communicated. The significance of discourse in the human communities collaborating through the Internet has been underestimated in the stress on `artificial' intelligence in computer research. Knowledge need not be machine-interpretable to be useful, and it can often be machine-processed, indexed and enhanced without the depth of interpretation one might associate with artificial intelligence. The World Wide Web is already a "pool of human knowledge" (Berners-Lee et al., 1994), and the extension of that pool to encompass more and more knowledge is the most significant way of adding value to the web. The problems here are socio-economic, in that much represented knowledge is owned by copyright holders who seek some financial reward before they will offer it to others. Technologically, it is important to develop ways of charging for access to knowledge at a rate low enough to encourage widespread use and at a volume high enough to compensate the knowledge provider. The knowledge-level problem for the information highway is not so much representation and processing but rather effective trading.

The growth of available material on the web is already causing problems of information overload. The 25 to 50 million documents currently available cannot readily be searched by an individual, and the web was not designed for central indexing. The acquisition learning curve is at a level where it has been possible to solve the problem using software programs, `spiders', that crawl the web gathering information and indexing it by content so that documents can be retrieved through a keyword search. In 1995 indexing spiders solved the problem of acquiring a dynamic model of the rapidly expanding web. However, the simple keyword searches offered for retrieval have already become inadequate in that they usually return a large corpus of documents, most of which are irrelevant to the searcher. Information retrieval techniques will have to be improved, probably to allow queries to be expressed in natural language and refined through a natural language dialog. Ultimately a user should not be able to determine whether a query is being answered by another person or by a computer program.
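
A minimal sketch of the idea behind such an indexing spider, in Python with a hypothetical seed address: fetch a page, extract its words and links, and build an inverted index mapping keywords to the pages that contain them. Real spiders add politeness rules, scale and ranking, but the core loop is no more than this.

    import re
    from urllib.parse import urljoin
    from urllib.request import urlopen

    def crawl(seed_urls, max_pages=10):
        """Toy spider: fetch pages, follow links, return a keyword -> pages index."""
        index, queue, seen = {}, list(seed_urls), set()
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue                                   # unreachable page, skip it
            text = re.sub(r"<[^>]+>", " ", html)           # crude removal of HTML tags
            for word in set(re.findall(r"[a-z]{3,}", text.lower())):
                index.setdefault(word, set()).add(url)     # inverted index entry
            for link in re.findall(r'href="([^"#]+)"', html):
                queue.append(urljoin(url, link))           # queue further pages to visit
        return index

    # Hypothetical usage: crawl from a seed page, then answer a simple keyword query.
    index = crawl(["http://example.org/"])
    print(sorted(index.get("information", set())))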

5 Conclusions

This article has analyzed the technological infrastructure of convergence to an information highway, tracing the origins of the concept, the false starts, the origins and growth of the Internet and World Wide Web, convergence as a substitution process, and the learning curves of the technologies involved. A number of substitution processes underlying convergence have been identified: electronic for mechanical devices; digital for analog devices; and general-purpose programmable devices for special-purpose devices. A model of convergence in terms of a tiered infrastructure of learning curves in information technology has been proposed and used to explain the past and forecast the future.

Information technology is characterized by high rates of growth in performance parameters sustained over long periods. The number of devices on a chip has grown by 9 orders of magnitude in 37 years. Clock speeds of computers have grown by some 6 orders of magnitude over the same period. The number of computers connected through the Internet is growing at 100% a year and has grown by 7 orders of magnitude in 27 years. The volume of traffic on the Internet is growing at over 100% a year, and that component attributable to the World Wide Web was growing at 1,000% a year when last accurately measurable in 1994.

It is suggested that these high sustained growth rates have been possible because computer technologies are mutually supportive, providing positive feedback such that advances in existing technologies trigger breakthroughs in new technologies which themselves help to sustain the advance of the existing technologies. The main problem in forecasting the future of convergence, and of information technology in general, is that the learning curves of most of the major performance parameters still appear to be in their initial exponential growth phase. This makes it impossible to predict the later parts of the curves from past data. For some parameters there are basic physical limitations to existing technologies which indicate that current growth rates cannot be sustained beyond some ten years. However, there are possibilities for new materials and new architectures that could maintain effective growth rates for the foreseeable future.

Tracking the individual learning curves of the major technologies that comprise the infrastructure of information technology provides a more detailed account of the present and future state of the art of the technologies underlying convergence. The base technologies of digital electronics, general-purpose computer architectures, software and interaction are mature and provide solid foundations for computer science. The upper technologies of knowledge representation and acquisition, autonomy and sociality support product innovation and provide the beginnings of foundations for knowledge science. Wells' dream of a world brain making available all of human knowledge is well on its way to realization, and it is in the representation, acquisition, access and effective application of that knowledge that the commercial potential and socio-economic impact of convergence lie.

Acknowledgments

Financial assistance for this work has been made available by the Natural Sciences and Engineering Research Council of Canada.

References

Berners-Lee, T. (1993). World-Wide Web Talk at Online Publishing 1993. CERN, Geneva. http://info.cern.ch/hypertext/WWW/Talks/OnlinePublishing93/Overview.html.

Berners-Lee, T. and Cailliau, R. (1990). WorldWideWeb: Proposal for a Hypertext Project. CERN, Geneva. http://info.cern.ch/hypertext/WWW/Proposal.html.

Berners-Lee, T., Cailliau, R., Luotonen, A., Nielsen, H.F. and Secret, A. (1994). The World-Wide Web. Communications ACM 37(8) 76-82.

Bianculli, D. (1992). Teleliteracy: Taking Television Seriously. New York, Simon & Schuster.

Braun, E. and Macdonald, S. (1978). Revolution in Miniature. Cambridge, UK, Cambridge University Press.

Bush, V. (1945). As we may think. Atlantic Monthly 176 101-108.

CERN (1994). History to date. CERN, Geneva. http://info.cern.ch/hypertext/WWW/History.html.

Chorafas, D.N. (1981). Interactive Videotex: the Domesticated Computer. New York, Petrocelli.

Crane, D. (1972). Invisible Colleges: Diffusion of Knowledge in Scientific Communities. Chicago, University of Chicago Press.

De Mey, M. (1982). The Cognitive Paradigm. Dordrecht, Holland, Reidel.

Fedida, S. and Malik, R. (1979). The Viewdata Revolution. London, UK, Associated Business Press.

Forbes (1996). Chips triumphant. Forbes ASAP February 26 53-82.

Gaines, B.R. (1971). Through a teleprinter darkly. Behavioural Technology 1(2) 15-16.

Gaines, B.R. (1991). Modeling and forecasting the information sciences. Information Sciences 57-58 3-22.

Gaines, B.R. and Shaw, M.L.G. (1986). A learning model for forecasting the future of information technology. Future Computing Systems 1(1) 31-69.

Goodman, H.J.A. (1987). The "world brain/world encyclopaedia" concept: its historical roots and the contributions of H.J.A. Goodman to the ongoing evolution and implementation of the concept. ASIS'87: Proceedings 50th Annual Meeting American Society Information Science. pp.91-98. Medford, New Jersey, Learned Information.

Gruenberger, F., Ed. (1967). The Transition to On-Line Computing. Washington, DC, Thompson.

Linstone, H.A. and Sahal, D., Ed. (1976). Technological Substitution: Forecasting Techniques and Applications. New York, Elsevier.

Martin, J. (1978). The Wired Society: A Challenge for Tomorrow. Englewood Cliffs, New Jersey, Prentice-Hall.

Mauchly, J.W. (1942). The use of high speed vacuum tube devices for calculating. Randell, B., Ed. The Origins of Digital Computers: Selected Papers. pp.329-332. Springer, Berlin.

NW (1996). Internet Domain Survey. Network Wizards. http://www.nw.com.

Orr, W.D., Ed. (1968). Conversational Computers. New York, Wiley.

Parkhill, D.F. (1966). The Challenge of the Computer Utility. Reading, MA, Addison-Wesley.

Ross, P.E. (1995). Moore's second law. Forbes March 25 116-117.

Rubinoff, M., Ed. (1965). Toward a National Information System. Washington D.C., Spartan Books.

Salus, P. (1995). Casting the Net: From ARPANET to INTERNET and Beyond. Reading, MA, Addison-Wesley.

Sigel, E. (1980). Videotext: The Coming Revolution in Home/Office Information Retrieval. White Plains, NY, Knowledge Industry Publications.

Ulanowicz, R.E. (1991). Formal agency in ecosystem development. Higashi, M. and Burns, T.P., Ed. Theoretical Studies of Ecosystems: The Network Perspective. pp.58-70. Cambridge, Cambridge University Press.

Wells, H.G. (1938). World Brain. New York, Doubleday.

Winsbury, R. (1981). Viewdata in Action: a Comparative Study of Prestel. New York, McGraw-Hill.

Woolfe, R. (1980). Videotex: the New Television/Telephone Information Services. London, Heyden.

