D-Lib Magazine
The Magazine of Digital Library Research

November/December 2015
Volume 21, Number 11/12

 

Developing Best Practices in Digital Library Assessment: Year One Update

Joyce Chapman
Duke University Libraries
joyce.chapman@duke.edu

Jody DeRidder
University of Alabama Libraries
jody@jodyderidder.com

Santi Thompson
University of Houston Libraries
sathompson3@uh.edu

DOI: 10.1045/november2015-chapman

 


 

Abstract

In the face of limited resources and increasing demand for online access to digital library content, we need to strategically focus our efforts and better understand users, impact, and associated costs. However, methods for assessment of digital libraries are not standardized. In an effort to address this crucial gap, the Digital Library Federation Assessment Interest Group has engaged the community over the past year in the development of best practices and guidelines. With this article, the authors provide an update on progress to date and solicit participation in an evolving effort to develop viable solutions.

 

1 Introduction

While research and cultural heritage institutions have increasingly focused on providing online access to special collections over the past decade, methods for assessing digital libraries have yet to be standardized. At the same time, assessment has become increasingly important because of limited resources and growing patron demand for online access to materials. As discussed in our May 2015 D-Lib In Brief publication1, the majority of existing research findings in the field cannot be effectively generalized from one software system and institution to another; as a result, many digital library staff are at a loss as to how to begin assessing areas such as costs, impact, use, and usability. To address this crucial gap and to focus efforts strategically, the Digital Library Federation Assessment Interest Group (DLF AIG) has spent the past year engaging the community in the development of best practices and guidelines for digital library assessment. This article provides both background information and an update on progress made to date.

 

2 Assessment Needs and Goals

The DLF AIG aims to actively develop documentation, tools, and suggested best practices around various areas of digital library assessment. The goals of this endeavor are both to assist those digital libraries that are unsure of how to assess their assets, and to provide a baseline across institutions to aid in the collection of interoperable metrics for comparative purposes. The areas that have been chosen as foci this year are based on membership interest, and do not cover all areas of assessment. Currently, working groups have formed around the key areas of analytics, cost, user studies, and citations. These groups are working on efforts as diverse as developing white papers that discuss gaps in assessment research, creating tools that calculate costs for digitization workflows, and outlining best practices for collecting Google Analytics data.

The DLF AIG is currently using Matusiak's definition of a digital library as "the collections of digitized or digitally born items that are stored, managed, serviced, and preserved by libraries or cultural heritage institutions, excluding the digital content purchased from publishers."2 The AIG began its work by considering two basic questions:

  1. What strategic information do we need to collect in order to make intelligent decisions?
  2. How can we best collect, analyze, and share that information effectively?

The first question is more complex than it initially appears. In 2000, Saracevic3 famously divided the context for evaluation into two camps: user-centered context and system-centered context. In his vision, the user-centered levels of criteria were the needs of the community ("social" level), the needs of the organization ("institutional"), the needs of the individual users or groups ("individual"), and the "interface." The system-centered levels were the "content," the software ("processing"), and the hardware, networks and underlying support ("engineering").

To date, however, most digital library evaluations have focused largely on the interface, the software, and, to some extent, the needs of users. Research from the DLF AIG's User and Usability working group has shown that when information professionals publish on the needs of users, they largely address user behavior (29% of articles reviewed), user perceptions (31%), and the usability of digital library interfaces (32%).4 While our field's dependence on networking and underlying support has become so commonplace that it is understandably overlooked, critical aspects such as content, organizational needs, and the needs of the community have received little attention. Moreover, most of the criteria used in evaluation are simply borrowed from traditional library and information retrieval systems, and may not be effective or appropriate for digital libraries built around largely unpublished materials.

In 2010, Ying Zhang5 analyzed Saracevic's levels, breaking each into multiple aspects, and reviewed the literature to determine which of those aspects had not yet been incorporated into published digital library evaluations. The aspects not yet covered included ease of use, reliability, integrity, usefulness, collaboration, managerial support, network effect, productivity, interoperability, security, and comprehensiveness. Of these unexamined aspects, the first six were rated as top criteria by the groups of developers, administrators, librarians, users, and researchers Zhang interviewed. By following evaluation methods designed for traditional library and information retrieval systems, the field overlooks aspects critical to evolving digital libraries and to the evolution of user needs.

Yet digital libraries are no longer in their infancy, and as funding models have increasingly moved from one-time sources for unique projects to continuous funding for sustainable programs, effective assessment is critical to making informed choices with limited resources. In 2004, Saracevic stated that:

"...there are no more or less standardized criteria for digital library evaluation. Several efforts that are devoted to developing digital library metrics have not produced, as yet, generalizable and accepted metrics, some of which may be used for evaluation. Thus, evaluators have chosen their own evaluation criteria as they went along. As a result, criteria for digital library evaluation fluctuate widely from effort to effort."6

Unfortunately, not much has changed in the past decade, particularly with regard to digitized primary source materials and institutional repositories. Developing best practices and guidelines requires concerted engagement of the community to whom the outcome matters most: those who develop and support digital libraries. With this article, the authors hope to share the progress made to date, to increase awareness of this issue, and to solicit participation in an evolving effort to develop viable solutions.

 

3 Assessment Interest Group

The DLF AIG's effort began at the DLF Forum in fall of 2013.7 A working session at the Forum entitled "Hunting for Best Practices in Library Assessment"8 was so successful that over 50 participants volunteered to continue the discussion after the conference. The collaborative Google document9 created for taking notes during this session was 16 pages long, filled with ideas for how to move forward in three topical areas: demonstrating impact, meeting user needs, and assessing costs and benefits. A second working session the following day on altmetrics10 also drew a crowd.

The following spring, DLF hosted a conference call with the presenters of both sessions, and together they established the new DLF AIG.11 To facilitate asynchronous discussion, a Digital Library Assessment Google Group12, which at the time of this writing has over 150 members, was established to provide a space for colleagues to discuss and organize ongoing assessment efforts. Over the next few months, the authors developed a Digital Library Assessment Framework13, which groups efforts into three major categories: meeting user needs, assessing benefits, and assessing costs. This framework has guided discussions and working groups in the ensuing months.

In the fall of 2014, the co-presenters of the 2013 altmetrics presentation shared a white paper14 on their work and asked for comments and feedback. A few days later at the 2014 DLF Forum, a panel of presentations15 by representatives of NISO, Duke University, University of California at San Diego and the University of Alabama highlighted the new NISO initiative to develop standards for altmetrics16, a new web-based cost estimation tool for digitization17, and both qualitative18 and quantitative19 results from digital library user studies. These presentations were followed by community engagement to further the development of best practices and guidelines for assessment.

Participants self-selected for small group discussions on one of three topics: altmetrics, cost assessment, and user studies. Two questions were posed to each small group:

  1. What are the critical aspects that we need to address?
  2. What are the next steps we can take?

These discussions were collaboratively documented online20, and in the wake of this continued interest, four working groups were formed within the DLF AIG in November 2014 to work on the development of best practices. Working groups are currently centered on analytics, cost assessment, user studies, and citations. These topics were chosen purely on the basis of community interest, not because we believe they encompass all aspects of assessment; the DLF AIG welcomes the formation of additional working groups on other topics of interest. DLF has established a wiki site21 that the AIG working groups are using to document resources, best practices, and guidelines as they develop.

 

4 Working Groups

The primary purpose of the working groups is to develop best practices and guidelines that can be used by the larger community to assess digital libraries in each area; the initial goal for each group was to have viable progress to report at the DLF Forum22 in October 2015. Below we report on the progress made by the four working groups.

 

4.1 Analytics

The Analytics working group23 is coordinated by Molly Bragg and Joyce Chapman. The group's goal is to develop best practice guidelines for using analytics (primarily Google Analytics) in digital library assessment; it limited the scope in year one to Google Analytics because many libraries use the tool, and because the task needed a manageable scope to be attainable.24 After distributing two drafts of a white paper to the larger DLF AIG for feedback and comments in July and August 2015, the working group released "Best Practices for Google Analytics in Digital Libraries"25 in September 2015, which recommends 15 core metrics for baseline collection in a digital library program. The white paper also includes a literature review, theoretical and structural approaches to gathering analytics data, examples of platform-specific implementation considerations, Google Analytics set-up tips and terminology, and recommended resources for learning more about web analytics. For each metric, the paper provides a definition, an explanation of its importance, and library-centric examples of how to work with the metric in Google Analytics.
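The white paper itself is the authoritative guide to the recommended metrics. Purely as an illustrative sketch, the snippet below shows one way a library might pull a few such baseline figures (sessions, users, and pageviews) programmatically from the Google Analytics Reporting API; the view ID, credential file, and date range are hypothetical placeholders, and this API workflow is our own assumption rather than a recommendation from the white paper.

    # Minimal sketch: fetch sessions, users, and pageviews from the
    # Google Analytics Reporting API v4 using a service account.
    # KEY_FILE and VIEW_ID are hypothetical placeholders.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    SCOPES = ["https://www.googleapis.com/auth/analytics.readonly"]
    KEY_FILE = "service-account.json"   # hypothetical credentials file
    VIEW_ID = "123456789"               # hypothetical GA view for the digital library

    def fetch_baseline_metrics(start_date="30daysAgo", end_date="today"):
        credentials = service_account.Credentials.from_service_account_file(
            KEY_FILE, scopes=SCOPES)
        analytics = build("analyticsreporting", "v4", credentials=credentials)
        response = analytics.reports().batchGet(body={
            "reportRequests": [{
                "viewId": VIEW_ID,
                "dateRanges": [{"startDate": start_date, "endDate": end_date}],
                "metrics": [
                    {"expression": "ga:sessions"},
                    {"expression": "ga:users"},
                    {"expression": "ga:pageviews"},
                ],
            }]
        }).execute()
        # The report totals hold one value per requested metric, in order.
        values = response["reports"][0]["data"]["totals"][0]["values"]
        return dict(zip(["sessions", "users", "pageviews"], values))

    if __name__ == "__main__":
        print(fetch_baseline_metrics())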

Future directions for the Analytics working group could include standardizing methods for sharing metrics across institutions, clear decision-making around allowing or disallowing web crawler traffic from access counts, reaching further consensus on definitions of access and use, and widening the scope beyond Google Analytics to include other recommended tools and methods.
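On the web crawler question, one possible approach (shown below as a simplified sketch, not an AIG recommendation) is to exclude requests whose user-agent matches known bot patterns before counting accesses; the log format and bot list here are assumptions made for illustration only.

    # Minimal sketch: count "human" accesses by skipping log lines whose
    # user-agent matches common crawler patterns. Assumes Apache/nginx
    # Combined Log Format, where the user-agent is the last quoted field.
    import re

    BOT_PATTERNS = re.compile(
        r"googlebot|bingbot|slurp|baiduspider|crawler|spider|bot/",
        re.IGNORECASE)

    def count_human_accesses(log_lines):
        """Count requests whose user-agent does not match a known bot pattern."""
        human = 0
        for line in log_lines:
            fields = line.rsplit('"', 2)
            user_agent = fields[1] if len(fields) == 3 else ""
            if not BOT_PATTERNS.search(user_agent):
                human += 1
        return human

    # Example usage:
    # with open("access.log") as f:
    #     print(count_human_accesses(f))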

 

4.2 User Studies

To date, the User Studies working group26, coordinated by Santi Thompson, has compiled resources and drafted literature to assist those who are interested in evaluating users of digital repositories and their needs. The group's work began during the assessment breakout session at the 2014 DLF Forum. Feedback from the session identified three core areas for the group to focus on over the course of 2015: making usability studies more accessible to librarians; tracking the return on investment for digital libraries; and understanding the reuse of digital library materials.

The group's first goal was to produce a white paper on the current state of research regarding who is using digital library content and why they seek these materials, organized into the three areas cited above. They began by compiling a bibliography of sources27 that highlight research around usability, return on investment, and reuse. Next, they analyzed and synthesized these works to address gaps and assess future needs, developed research terms and "tagged" each article in the bibliography with one or more of these terms, and wrote brief summaries for each article to catch important areas not covered by the tagging process. Once the tagging and summarizing concluded, the group analyzed the results to identify strengths and gaps in the current literature in each of the three defined areas, and made recommendations for next steps toward the development of best practices. The first draft of the white paper28 was released for comments in October 2015, and a final version will be released in December 2015.
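As a small illustration of how a tagged bibliography can be summarized to surface strengths and gaps (this is not the working group's actual data or tooling), the sketch below counts how many articles carry each tag; the tag names and entries are hypothetical.

    # Minimal sketch: tally topical tags across a bibliography to see which
    # areas are well covered and which are thin. All data is hypothetical.
    from collections import Counter

    bibliography = [
        {"title": "Article A", "tags": ["usability", "user behavior"]},
        {"title": "Article B", "tags": ["return on investment"]},
        {"title": "Article C", "tags": ["reuse", "usability"]},
    ]

    tag_counts = Counter(tag for entry in bibliography for tag in entry["tags"])
    total = len(bibliography)
    for tag, count in tag_counts.most_common():
        print(f"{tag}: {count} of {total} articles ({count / total:.0%})")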

 

4.3 Citations

The Citations working group consists of a single member, Elizabeth Joan Kelly (Loyola University New Orleans, Monroe Library). Kelly focused on producing a white paper entitled "Guidelines for citing library-hosted, unique digital assets."29 Kelly consulted the style manuals for three major citation styles (APA, Chicago, and MLA) to assess whether they provide direction for citing digitized special collections and institutional repository items. Existing citation formats, such as those for archival materials and for digitized web files, were analyzed along with recent developments in data citation standards in order to create recommended citation styles for digital library objects. Kelly proposes that the use of uniform citation formats for unique digital assets will "lead to better tracking of use of these assets by hosting libraries" and goes on to state that "in recommending these formats ... it is intended that both traditional citation metrics and altmetrics will better track the use of these digitized special collection and cultural heritage materials and institutional repository content."30 Kelly gathered feedback on drafts of the citation guidelines in March and April 2015; a final draft was completed in May and circulated in June 2015. Final edits were completed on the document in October 2015 in preparation for the DLF Forum.
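Kelly's paper defines the recommended formats themselves. Purely to illustrate the general idea of assembling a uniform citation from an item's descriptive metadata, the sketch below builds a citation string from a handful of fields; the field names and output pattern are hypothetical and do not reproduce the guidelines' recommended styles.

    # Minimal sketch: assemble a citation string for a library-hosted digital
    # asset from descriptive metadata. Pattern and fields are hypothetical.
    def cite_digital_object(creator, title, date, collection, institution, url, accessed):
        """Return a simple citation string for a digitized item."""
        parts = [
            f"{creator}.",
            f'"{title}," {date}.',
            f"{collection}, {institution}.",
            f"{url} (accessed {accessed}).",
        ]
        return " ".join(parts)

    # Example usage with made-up metadata:
    print(cite_digital_object(
        creator="Doe, Jane",
        title="Letter to the Editor",
        date="1923",
        collection="Example Manuscript Collection",
        institution="Example University Libraries",
        url="https://digital.example.edu/items/123",
        accessed="1 November 2015",
    ))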

 

4.4 Cost Assessment

Coordinated by Joyce Chapman, the Cost Assessment working group's31 tasks differ slightly from those of the other working groups. The group seeks to aggregate and make freely available a large set of data on the time it takes to perform the various tasks involved in digitization, in order to assist organizations with digitization project planning and benchmarking. They are also building a Digitization Cost Calculator32 that draws on the contributed dataset to provide digitization cost estimates based on input parameters. The group began by determining the scope of processes for which time data would be defined and reported via the Calculator. They then reviewed the existing literature in relevant areas33, including the collection of time and cost data for digitization and existing best practices in quality control and metadata creation.
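As note 32 explains, the Calculator averages the per-item times contributed by participating institutions for each data field and applies the result to the user's inputs. The sketch below illustrates that averaging approach with hypothetical field names and data; the real Calculator and its contributed dataset are maintained by the working group.

    # Minimal sketch of the averaging approach described in note 32.
    # All field names and figures below are hypothetical.
    from statistics import mean

    # Contributed data: seconds per item, keyed by process,
    # one value per contributing institution.
    CONTRIBUTED_SECONDS_PER_ITEM = {
        "image_capture_overhead_scanner": [45.0, 60.0, 52.5],
        "quality_control_level_1": [20.0, 30.0],
        "descriptive_metadata_level_1": [120.0],
    }

    def estimate_hours(process, item_count):
        """Estimate staff hours by averaging contributed per-item times."""
        times = CONTRIBUTED_SECONDS_PER_ITEM.get(process)
        if not times:
            raise ValueError(f"No contributed data yet for {process!r}")
        return mean(times) * item_count / 3600.0

    # Example: 1,000 scans on an overhead scanner plus level-1 quality control.
    total = (estimate_hours("image_capture_overhead_scanner", 1000)
             + estimate_hours("quality_control_level_1", 1000))
    print(f"Estimated staff time: {total:.1f} hours")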

They then authored a set of guidelines34 for collecting time data for 20 digitization processes: eight processes in the original materials preparation phase (e.g., fastener removal, rights review), six processes in the post-processing phase (e.g., cropping images, color correction and tonal adjustment), three processes in the post-preparation phase (e.g., re-binding), and the additional three processes of image capture, descriptive metadata creation, and quality control. Three levels were defined for both metadata creation and quality control, based on a review of the literature and existing resources.35 The guidelines were released to the community for comments and finalized in July 2015.

While the original Digitization Cost Calculator was built by Chapman as a proof of concept and presented at the 2014 DLF Forum, the data definitions authored by the Cost Assessment working group will inform a modified structure for the Calculator with expanded capabilities.36 In August 2015 the group put out a call for data submissions37, along with a new data submission form38 and wireframes39 for the envisioned redesign of the Calculator. Before the new Calculator can be built, however, at least one set of data must be submitted for each of the 20 data fields, as well as for each type of image capture device and each level of quality control and metadata creation. The working group encourages readers to get involved and submit data from their institutions!

 

5 Going Forward

Further updates on the DLF AIG's progress will be presented at the DLF Forum in Vancouver, Canada, in October 2015 and at the Southeastern Library Assessment Conference in Atlanta, Georgia, in November 2015. Colleagues who are interested in digital library assessment or would like to participate in the continuing best practice development work of the DLF AIG are urged to contact the authors or join the Digital Library Assessment Google group40 and express their interests. The DLF AIG hopes that the work it is undertaking will help the community establish best practices for digital library assessment, which in turn will lead to increased sustainability and effectiveness of digital libraries in the future.

 

Notes

1 "A Community Effort to Develop Best Practices in Digital Library Assessment".

2 Matusiak, K. (2012). Perceptions of usability and usefulness of digital libraries. International Journal of Humanities and Arts Computing, 6(1-2), 133-147. http://dx.doi.org/10.3366/ijhac.2012.0044

3 Tefko Saracevic, "Digital Library Evaluation: Toward an Evolution of Concepts," Library Trends 49, no. 2 (2000): 350-369.

4 See the Use and Usability working group's topical tagging of 147 articles on digital library assessment published in the past five years (94 articles after irrelevant items were removed) here.

5 Ying Zhang, "Developing a Holistic Model for Digital Library Evaluation," Journal of the American Society for Information Science and Technology 61, no. 1 (2010): 88-110.

6 Tefko Saracevic. "How were digital libraries evaluated?" Presentation at the DELOS WP7 Workshop on the Evaluation of Digital Libraries in Padova, Italy, October 2004: 6.

7 Digital Library Federation. "2013 DLF Forum: Austin, Texas."

8 Jody DeRidder, Sherri Berger, Joyce Chapman, Cristela Garcia-Spitz, and Lauren Menges. "Hunting for Best Practices in Library Assessment." Presentation at the Digital Library Federation Forum in Austin, TX, 4 November 2013.

9 "Hunting for Best Practices in Library Assessment," a collaborative Google document generated during the presentation by the same name, at the Digital Library Federation Forum in Austin, TX, 4 November 2013.

10 David Scherer, Stacy Konkiel, and Michelle Dalmau. "Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics." Presentation at the Digital Library Federation Forum in Austin, TX, 5 November 2013.

11 Joyce Chapman, "Introducing the New DLF Assessment Interest Group." Blog Post on the Digital Library Federation blog, 12 May 2014.

12 "Digital Library Assessment," Google Group.

13 "Digital Library Assessment Framework," 2014.

14 Stacy Konkiel, Michelle Dalmau, and David Scherer. "Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics" (white paper). October 2014. http://dx.doi.org/10.6084/m9.figshare.1392140

15 Jody DeRidder, Joyce Chapman, Nettie Lagace, and Ho Jung Yoo. "Moving Forward with Digital Library Assessment." Presentation at the Digital Library Federation Forum in Atlanta, GA, 29 October 2014.

16 National Information Standards Organization. "NISO Alternative Metrics (Altmetrics) Initiative."

17 Joyce Chapman. "Library Digitization Cost Calculator." 2014.

18 Jody DeRidder, "Did We Get the Cart Before the Horse? (Faculty Researcher Feedback)." Presentation at the Digital Library Federation Forum in Atlanta, GA, 29 October 2014.

19 Ho Jung Yoo and SuHui Ho, "Do-It-Yourself Usability Testing: a Case Study from the UC San Diego Digital Collections." Presentation at the Digital Library Federation Forum in Atlanta, GA, 29 October 2014.

20 "Moving Forward with Digital Library Assessment," a collaborative Google Document generated during the session of the same name at the Digital Library Federation Forum, 29 October 2014.

21 DLF, Assessment.

22 Digital Library Federation. "2015 DLF Forum: Vancouver."

23 Members include Molly Bragg (co-coordinator of working group, Duke University), Joyce Chapman (co-coordinator of working group, Duke University), Jody DeRidder (University of Alabama), Martha Kyrillidou (Association of Research Libraries), Rita Johnston (University of North Carolina at Charlotte), Ranti Junus (Michigan State University), Eric Stedfeld (New York University).

24 Over 60% of all websites use Google Analytics: see "Piwik, Privacy."

25 The white paper can be viewed or downloaded from the DLF AIG Analytics working group's wiki page here.

26 Members include: Santi Thompson (coordinator of working group, University of Houston), Joyce Chapman (Duke University), Jody DeRidder (University of Alabama), Elizabeth Joan Kelly (Loyola University New Orleans), Martha Kyrillidou (Association of Research Libraries), Caroline Muglia (University of Southern California), Genya O'Gara (The Virtual Library of Virginia), Ayla Stein (University of Illinois at Urbana-Champaign), Rachel Trent (State Library of North Carolina), Sarah Witte (Columbia University), Liz Woolcott (Utah State University), Tao Zhang (Purdue University).

27 DLF User Studies in Digital Libraries Bibliography.

28 The white paper can be viewed or downloaded from the DLF AIG User Studies working group's wiki page here.

29 The white paper can be viewed or downloaded from the DLF AIG Citations working group's wiki page here.

30 Elizabeth Joan Kelly. "Guidelines for citing library-hosted, unique digital assets," (2015): 9.

31 Members of the working group include Joyce Chapman (coordinator of the working group, Duke University Libraries), Kinza Masood (University of Utah), Chrissy Reissmeyer (University of California at Santa Barbara), Dan Zellner (Northwestern University).

32 See the beta Calculator here. The calculator works by combining and averaging the available data from each contributing institution for a given data field. For example, if three institutions have contributed time data for image capture using an overhead scanner, the calculator will average the three numbers and use the result in the estimates provided to the user. If only one institution has provided data, that institution's figures will be used in the estimates. Each contributing institution's data is made available in tabular format on the "Notes on Data" tab to support transparency and ease of use.

33 See the bibliography produced by the working group here.

34 The guidelines and definitions can be viewed or downloaded from the DLF AIG Cost Assessment working group's wiki page here.

35 These levels and definitions can be found in the larger guidelines and definitions document linked from the DLF AIG Cost Assessment working group's wiki page here.

36 See wireframes for the new input and output of the Calculator here.

37 See Call for data submissions: digitization cost calculator.

38 The form for data submission can be found here.

39 The wireframes can be viewed and downloaded from the DLF AIG Cost Assessment working group's wiki page here, or via a Google Drive folder.

40 "Digital Library Assessment," Google Group.

 

Works Cited

[1] Sherri Berger, Joyce Chapman, Jody DeRidder, Cristela Garcia-Spitz, and Lauren Menges. "Hunting for Best Practices in Library Assessment." Presentation at the Digital Library Federation Forum in Austin, TX, 4 November 2013.

[2] Joyce Chapman, Jody DeRidder, Nettie Lagace, and Ho Jung Yoo. "Moving Forward with Digital Library Assessment." Presentation at the Digital Library Federation Forum in Atlanta, GA, 29 October 2014.

[3] Joyce Chapman, "Introducing the New DLF Assessment Interest Group." Blog Post on the Digital Library Federation blog, 12 May 2014.

[4] Michelle Dalmau, Stacy Konkiel, and David Scherer. "Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics" (white paper). October 2014.

[5] Michelle Dalmau, Stacy Konkiel, and David Scherer. "Determining Assessment Strategies for Digital Libraries and Institutional Repositories Using Usage Statistics and Altmetrics." Presentation at the Digital Library Federation Forum in Austin, TX, 5 November 2013.

[6] Jody DeRidder, "Did We Get the Cart Before the Horse? (Faculty Researcher Feedback)." Presentation at the Digital Library Federation Forum in Atlanta, GA, 29 October 2014.

[7] Tefko Saracevic, "Digital Library Evaluation: Toward an Evolution of Concepts," Library Trends 49, no. 2 (2000): 350-369.

[8] Tefko Saracevic. "How were digital libraries evaluated?" Presentation at the DELOS WP7 Workshop on the Evaluation of Digital Libraries in Padova, Italy, October 2004: 6.

[9] Ying Zhang, "Developing a Holistic Model for Digital Library Evaluation." Journal of the American Society for Information Science and Technology 61, no. 1 (2010): 88-110.

[10] Ho Jung Yoo and SuHui Ho, "Do-It-Yourself Usability Testing: a Case Study from the UC San Diego Digital Collections." Presentation at the Digital Library Federation Forum in Atlanta, GA, 29 October 2014.

 

About the Authors

Joyce Chapman is the Assessment Coordinator at Duke University Libraries and co-founder and co-leader of the DLF Assessment Interest Group. She holds an MSIS from the University of North Carolina at Chapel Hill.

 

Jody DeRidder is the Head of Metadata & Digital Services at the University of Alabama Libraries and a co-founder of the DLF Assessment Interest Group. She holds an MSIS and an MS in Computer Science from the University of Tennessee.

 

Santi Thompson is Head of Digital Repository Services at the University of Houston Libraries and a co-leader of the DLF Assessment Interest Group. He holds an MLIS and an MA in Public History from the University of South Carolina.

 