In This Issue
This month, we are featuring a double issue. Half is given over to the testbeds that support much of the research conducted by the six projects sponsored by the NSF/DARPA/NASA Digital Library Initiative (DLI), and half is devoted to recent work in metadata. The testbeds cover a range of heterogeneous data that functioning digital libraries are likely to house and thus offer opportunities for an equally broad range of research questions. But their existence also points to an underlying and sometimes unrecognized issue in digital libraries research: how do you transfer or diffuse advanced technologies and tools to the people who will use them, and who will be responsible for maintaining these resources? One obvious place to start looking for answers is to consider how these testbeds will be managed after the research projects move on.
At a recent meeting of the six DLI-sponsored projects, D-Lib posed four questions about the future management of the testbeds. The projects' answers share several themes.
All six of the projects have established, or make provision for, some mechanism for sharing technologies and/or access to their testbeds. All six anticipate continued research beyond the current four-year grant. Finally, all six imply that existing libraries will end up with at least some role in the future management of the testbeds, although this role will clearly vary from project to project. Both the Alexandria Digital Library Project at the University of California, Santa Barbara (UCSB), and the DLI project at the University of Illinois at Urbana-Champaign (UIUC) are situated in existing libraries, and the DLI project at the University of California, Berkeley, is conducted jointly with members of the School of Information Management and Systems and the California State Library. The DLI project at the University of Michigan (UMDL) is linked to the university's new School of Information and the university library, and it is one of a stream of digital library projects on the Michigan campus covering a broad range of near- and long-term issues.
Less is said about how the existing libraries will fold management of the testbeds into their future operations: whether they plan to maintain the testbeds and continue acquiring materials within their parameters, or keep them off to one side as a resource to support continued research. It may simply be early in the planning. As Daniel Atkins of UMDL comments, digital library technologies are still rather like "horseless carriages". UMDL expects to maintain its testbed through a production service supported by the library and other campus agencies. At UMDL as elsewhere, licensing agreements that limit access to content remain an impediment to wide-scale deployment. Nevertheless, existing libraries seem to be inheriting not only the digital collections but also the burden of maintaining them and perhaps of educating users to work with these materials. This is not an inappropriate role for libraries, which have historically mediated between users and information. Still, as Andreas Paepcke of Stanford University observes, libraries are "notoriously stressed financially" and many "are struggling to maintain even their traditional functions." With its focus on third-party services and its track record with start-ups, Stanford proposes continued technology transfer through existing industrial partners and future start-ups as well as through the library. UIUC and Carnegie Mellon University also emphasize future research and development within the university and by third parties. Finally, Berkeley proposes collaboration with other public agencies as well as the university library, in addition to continuing its research program.
Our choice of metadata for the second half of this double issue was partly a result of the calendar -- several important workshops and meetings on metadata have taken place this spring -- partly a recognition of the importance of the issue, and partly nostalgia. A year ago, Stuart Weibel wrote a summary discussion of the Dublin Core; in this issue, he and Lorcan Dempsey describe the results of the Warwick Metadata Workshop held in April 1996. More compelling than the nostalgia, however, is the recognition that metadata will be fundamental to developing interoperable systems, which Hector Garcia-Molina and Clifford Lynch identified as one of two long-term objectives of digital library research in Interoperability, Scaling, and the Digital Libraries Research Agenda, their report on the April 1995 workshop of the U.S. Government's Information Infrastructure Technology and Applications (IITA) Working Group.
In the year since the Dublin Core of thirteen data elements was proposed, many have subscribed to the basic concept of a simple resource description record. The path to consistent deployment is still unclear, though the Warwick reports in this issue suggest ways the resource description architecture might evolve. Indeed, in two stories -- one co-authored with Weibel, the other written alone -- Dempsey describes numerous projects in the US, UK, Europe, and Australia that are experimenting with metadata models. One outcome of the April 1996 workshop is the Warwick Framework, discussed by Carl Lagoze, who describes a container architecture that aggregates packages of metadata. This architecture, it is argued, allows for multiple yet interoperable metadata sets that can also accommodate administrative and access requirements (e.g., terms and conditions) and can migrate to new environments; a schematic sketch of the idea follows below. Terry Smith, by contrast, lays out a meta-information system that embeds the notion of metadata in a broader structure for digital libraries, grounded in the perspective of users and services.
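For readers who want a concrete picture of the container/package idea, here is a minimal sketch in Python. It is illustrative only: the class names, fields, and package labels are our own assumptions, not the Framework's actual interface or serialization, which Lagoze's article describes.

    # A minimal, hypothetical sketch of a Warwick-style container:
    # a container aggregates independently typed packages of metadata,
    # so a Dublin Core record and a terms-and-conditions statement can
    # travel together without being merged into a single schema.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class Package:
        kind: str                   # e.g., "dublin-core", "terms-and-conditions"
        elements: Dict[str, str]    # the package's own element set

    @dataclass
    class Container:
        packages: List[Package] = field(default_factory=list)

        def add(self, package: Package) -> None:
            self.packages.append(package)

        def of_kind(self, kind: str) -> List[Package]:
            # A client that understands only one metadata set can select
            # it and ignore the rest, which is what makes multiple
            # metadata sets interoperable within one container.
            return [p for p in self.packages if p.kind == kind]

    # Usage: one container carrying two packages of metadata.
    record = Container()
    record.add(Package("dublin-core",
                       {"Title": "The Warwick Framework", "Author": "Carl Lagoze"}))
    record.add(Package("terms-and-conditions",
                       {"access": "site-licensed users only"}))

Note that the terms and conditions sit in their own package, mirroring the Framework's claim that administrative metadata can be carried alongside descriptive metadata rather than folded into it.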
Metadata and other issues promise us interesting times in the fall. See you in September.
Amy Friedlander
Editor
hdl://cnri.dlib/july96-friedlander