LOC Workshop on Etexts, Library of Congress
- Author: Library of Congress
In addition to ignorance of new technical advances, I have found that too many editors—and historians and literary scholars—are resistant and even hostile to suggestions that electronic technology may enhance their work. I intend to discuss some of the arguments traditionalists are advancing to resist technology, ranging from distrust of the speed with which it changes (we are already wondering what is out there that is better than CD-ROM) to suspicion of the technical language used to describe electronic developments.
Maria LEBRON
The Online Journal of Current Clinical Trials, a joint venture of the American Association for the Advancement of Science (AAAS) and the Online Computer Library Center, Inc. (OCLC), is the first peer-reviewed journal to provide full text, tabular material, and line illustrations on line. This presentation will discuss the genesis and startup period of the journal. Topics of discussion will include historical overview, day-to-day management of the editorial peer review, and manuscript tagging and publication. A demonstration of the journal and its features will accompany the presentation.
Lynne PERSONIUS
Cornell University Library, Cornell Information Technologies, and Xerox Corporation, with the support of the Commission on Preservation and Access, and Sun Microsystems, Inc., have been collaborating in a project to test a prototype system for recording brittle books as digital images and producing, on demand, high-quality archival paper replacements. The project goes beyond that, however, to investigate some of the issues surrounding scanning, storing, retrieving, and providing access to digital images in a network environment.
The Joint Study in Digital Preservation began in January 1990. Xerox provided the College Library Access and Storage System (CLASS) software, a prototype 600-dots-per-inch (dpi) scanner, and the hardware necessary to support network printing on the DocuTech printer housed in Cornell’s Computing and Communications Center (CCC).
The Cornell staff using the hardware and software became an integral part of the development and testing process for enhancements to the CLASS software system. The collaborative nature of this relationship is resulting in a system that is specifically tailored to the preservation application.
A digital library of 1,000 volumes (or approximately 300,000 images) has been created and is stored on an optical jukebox that resides in CCC. The library includes a collection of select mathematics monographs that provides mathematics faculty with an opportunity to use the electronic library. The remaining volumes were chosen for the library to test the various capabilities of the scanning system.
One project objective is to provide users of the Cornell library and the library staff with the ability to request facsimiles of digitized images or to retrieve the actual electronic image for browsing. A prototype viewing workstation has been created by Xerox, with input into the design by a committee of Cornell librarians and computer professionals. This will allow us to experiment with patron access to the images that make up the digital library. The viewing station provides search, retrieval, and (ultimately) printing functions with enhancements to facilitate navigation through multiple documents.
Cornell currently is working to extend access to the digital library to readers using workstations from their offices. This year is devoted to the development of a network-resident image conversion and delivery server, and client software that will support readers who use Apple Macintosh computers, IBM PCs running Microsoft Windows, and Sun workstations. Equipment for this development was provided by Sun Microsystems with support from the Commission on Preservation and Access.
During the show-and-tell session of the Workshop on Electronic Texts, a prototype view station will be demonstrated. In addition, a display of original library books that have been digitized will be available for review with associated printed copies for comparison. The fifteen-minute overview of the project will include a slide presentation that constitutes a “tour” of the preservation digitizing process.
The final network-connected version of the viewing station will provide library users with another mechanism for accessing the digital library, and will also provide the capability of viewing images directly. This will not require special software, although a powerful computer with good graphics will be needed.
The Joint Study in Digital Preservation has generated a great deal of interest in the library community. Unfortunately, or perhaps fortunately, this project serves to raise a vast number of other issues surrounding the use of digital technology for the preservation and use of deteriorating library materials, which subsequent projects will need to examine. Much work remains.
SESSION III
Howard BESSER Networking Multimedia Databases
What do we have to consider in building and distributing databases of visual materials in a multi-user environment? This presentation examines a variety of concerns that need to be addressed before a multimedia database can be set up in a networked environment.
In the past it has not been feasible to implement databases of visual materials in shared-user environments because of technological barriers. Each of the two basic models for multi-user multimedia databases has posed its own problem. The analog multimedia storage model (represented by Project Athena’s parallel analog and digital networks) has required an incredibly complex (and expensive) infrastructure. The economies of scale that make multi-user setups cheaper per user served do not operate in an environment that requires a computer workstation, videodisc player, and two display devices for each user.
The digital multimedia storage model has required vast amounts of storage space (as much as one gigabyte per thirty still images). In the past the cost of such a large amount of storage space made this model a prohibitive choice as well. But plunging storage costs are finally making this second alternative viable.
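The storage figure quoted above can be checked with simple arithmetic. The sketch below (illustrative Python; the function name is mine, not part of any system described here) works out the per-image cost implied by "one gigabyte per thirty still images":

```python
# Back-of-envelope check of the storage figure quoted above:
# "as much as one gigabyte per thirty still images".

GIGABYTE = 10**9  # bytes

def bytes_per_image(total_bytes: int, image_count: int) -> float:
    """Average storage consumed by a single stored image."""
    return total_bytes / image_count

per_image = bytes_per_image(GIGABYTE, 30)
print(f"{per_image / 10**6:.1f} MB per uncompressed image")  # roughly 33 MB each
```

At roughly 33 megabytes per uncompressed image, it is easy to see why falling storage prices were the precondition for the all-digital model.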
If storage no longer poses such an impediment, what do we need to consider in building digitally stored multi-user databases of visual materials? This presentation will examine the networking and telecommunication constraints that must be overcome before such databases can become commonplace and useful to a large number of people.
The key problem is the vast size of multimedia documents, and how this affects not only storage but telecommunications transmission time. Anything slower than T-1 speed is impractical for files of 1 megabyte or larger (which is likely to be small for a multimedia document). For instance, even on a 56 kbps line it would take over two minutes to transfer a 1-megabyte file. And these figures assume ideal circumstances, and do not take into consideration other users contending for network bandwidth, disk access time, or the time needed for remote display. Current common telephone transmission rates would be completely impractical; few users would be willing to wait the hour necessary to transmit a single image at 2400 baud.
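The transfer times above follow directly from file size and line speed. The following sketch (illustrative Python, not part of any system described in these abstracts) reproduces the figures under the same idealized assumptions the text states:

```python
def transfer_seconds(file_bytes: int, bits_per_second: int) -> float:
    """Idealized transfer time: ignores protocol overhead, network
    contention, disk access time, and remote display time."""
    return file_bytes * 8 / bits_per_second

ONE_MB = 10**6  # one megabyte, as a round figure

print(f"56 kbps:  {transfer_seconds(ONE_MB, 56_000) / 60:.1f} minutes")  # ~2.4
print(f"2400 bps: {transfer_seconds(ONE_MB, 2_400) / 3600:.2f} hours")   # ~0.9
print(f"T-1:      {transfer_seconds(ONE_MB, 1_544_000):.1f} seconds")    # ~5.2
```

Even the idealized T-1 figure of about five seconds per megabyte shows why compression (discussed next) is unavoidable for multimedia documents.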
This necessitates compression, which itself raises a number of other issues. In order to decrease file sizes significantly, we must employ lossy compression algorithms. But how much quality can we afford to lose? To date there has been only one significant study done of image-quality needs for a particular user group, and this study did not look at loss resulting from compression. Only after identifying image-quality needs can we begin to address storage and network bandwidth needs.
Experience with X-Windows-based applications (such as Imagequery, the University of California at Berkeley image database) demonstrates the utility of a client-server topology, but also points to the limitations of current software for a distributed environment. For example, applications like Imagequery can incorporate compression, but current X implementations do not permit decompression at the end user's workstation. Decompression must therefore occur at the host computer, which alleviates storage-capacity problems while doing nothing to address problems of telecommunications bandwidth.
We need to examine the effects on network throughput of moving multimedia documents around on a network. We need to examine various topologies that will help us avoid bottlenecks around servers and gateways. Experience with applications such as these raises still broader questions. How closely is the multimedia document tied to the software for viewing it? Can it be accessed and viewed from other applications? Experience with the MARC format (and more recently with the Z39.50 protocols) shows how useful it can be to store documents in a form in which they can be accessed by a variety of application software.
Finally, from an intellectual-access standpoint, we need to address the issue of providing access to these multimedia documents in interdisciplinary environments. We need to examine terminology and indexing strategies that will allow us to provide access to this material in a cross-disciplinary way.
Ronald LARSEN Directions in High-Performance Networking for Libraries
The pace at which computing technology has advanced over the past forty years shows no sign of abating. Roughly speaking, each five-year period has yielded an order-of-magnitude improvement in price and performance of computing equipment. No fundamental hurdles are likely to prevent this pace from continuing for at least the next decade. It is only in the past five years, though, that computing has become ubiquitous in libraries, affecting all staff and patrons, directly or indirectly.
During these same five years, communications rates on the Internet, the principal academic computing network, have grown from 56 kbps to 1.5 Mbps, and the NSFNet backbone is now running 45 Mbps. Over the next five years, communication rates on the backbone are expected to exceed 1 Gbps. Both the population of network users and the volume of network traffic have continued to grow geometrically, at rates approaching 15 percent per month. This flood of capacity and use, likened by some to "drinking from a firehose," creates immense opportunities and challenges for libraries. Libraries must anticipate the future implications of this technology, participate in its development, and deploy it to ensure access to the world's information resources.
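The growth rates cited here and in the preceding paragraph compound dramatically. The sketch below (illustrative Python, my own arithmetic rather than anything from the presentation) shows what the quoted figures imply:

```python
# Illustrative compounding arithmetic for the growth figures quoted above.

def compound_growth(periodic_rate: float, periods: int) -> float:
    """Total growth factor after compounding a periodic rate."""
    return (1 + periodic_rate) ** periods

# 15 percent per month compounds to better than a fivefold increase per year.
annual_factor = compound_growth(0.15, 12)
print(f"15%/month for a year: {annual_factor:.2f}x")  # about 5.35x

# An order of magnitude every five years is roughly 58 percent per year.
yearly_factor = 10 ** (1 / 5)
print(f"10x per five years: {yearly_factor:.2f}x per year")  # about 1.58x
```

Traffic multiplying fivefold annually, against equipment improving by about 58 percent annually, is the arithmetic behind the "drinking from a firehose" image.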
The infrastructure for the information age is being put in place. Libraries face strategic decisions about their role in the development, deployment, and use of this infrastructure. The emerging infrastructure is much more than computers and communication lines. It is more than the ability to compute at a remote site, send electronic mail to a peer across the country, or move a file from one library to another. The next five years will witness substantial development of the information infrastructure of the network.
In order to provide appropriate leadership, library professionals must have a fundamental understanding of and appreciation for computer networking, from local area networks to the National Research and Education Network (NREN). This presentation addresses these fundamentals, and how they relate to libraries today and in the near future.
Edwin BROWNRIGG Electronic Library Visions and Realities
The electronic library has been a vision desired by many—and rejected by some—since Vannevar Bush coined the term memex to describe an automated, intelligent, personal information system. Variations on this vision have included Ted Nelson's Xanadu, Alan Kay's Dynabook, and Lancaster's "paperless library," with the most recent incarnation being the "Knowledge Navigator" described by John Sculley of Apple. But the reality of library service has been less visionary, and the leap to the electronic library has eluded universities, publishers, and information technology firms.
The Memex Research Institute (MemRI), an independent, nonprofit research and development organization, has created an Electronic Library Program of shared research and development in order to make the collective vision more concrete. The program is working toward the creation of large, indexed publicly available electronic image collections of published documents in academic, special, and public libraries. This strategic plan is the result of the first stage of the program, which has been an investigation of the information technologies available to support such an effort, the economic parameters of electronic service compared to traditional library operations, and the business and political factors affecting the shift from print distribution to electronic networked access.
The strategic plan envisions a combination of publicly searchable access databases, image (and text) document collections stored on network “file servers,” local and remote network access, and an intellectual property management-control system. This combination of technology and information content is defined in this plan as an E-library or E-library collection. Some participating sponsors are already developing projects based on MemRI’s recommended directions.
The E-library strategy projected in this plan is a visionary one.