Research Methods in Information Science/The collection assessment

From Wikibooks, open books for an open world

Collection assessments typically serve one of two purposes:

  • to inform librarians about current holdings so that they can make better decisions regarding purchases, subscriptions, resource sharing, or collection policies.[1]
  • as part of a larger assessment of a library, such as for a grant or accreditation effort.

Researchers typically undertake collection assessments to inform major decisions. To support librarians and administrators in making these decisions, collection assessments typically employ a number of different methods. Triangulation is very important in these projects: each of the measures listed below can suggest facts about the collection, but none provides much certainty on its own.

Usage-based methods

Metrics

Circulation

For many years, the most popular measure of library service has been the number of items circulated, so a few general observations on circulation figures are in order. First, circulation and use are not synonymous terms: the fact that a book is taken out of the library does not necessarily mean it has been read. Second, what counts as a circulation must be defined if figures are to be comparable; reporting variants include the length of the loan period, the types of material included, whether renewals are counted, etc. A careful definition of circulation, together with a thorough catalog of the items included, removes many of the dangers of misinterpreting circulation statistics.

A simple count of books loaned is one of the most widely used measures of library service. The measure is easy to secure (nearly all libraries keep such figures) and intrinsically simple, and therefore easily understood. Barring the limitations cited above, most people know what is meant by 250 circulations in the children's picture book section. But gross circulation figures are in most cases so simple that they obscure important information: they provide no evidence as to the type of reader, and thus indirectly imply equal use of various types of material by all reader groups, a situation which is rarely, if ever, true.

Of more interest is the percentage of all items in a section that have actually circulated, or the number of unique borrowers in a specific section.
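
Both of these section-level measures reduce to simple set arithmetic over raw circulation records. A minimal sketch, assuming each loan is exported from the ILS as a (barcode, borrower) pair; the field layout and data here are hypothetical:

```python
# Hypothetical loan records exported from an ILS:
# each tuple is (item_barcode, borrower_id) for one loan in the period.
loans = [
    ("B001", "u1"), ("B001", "u2"), ("B002", "u1"),
    ("B003", "u3"), ("B003", "u3"),
]

# All item barcodes held in the section, circulated or not.
section_items = {"B001", "B002", "B003", "B004", "B005"}

circulated_items = {barcode for barcode, _ in loans}
pct_circulated = 100 * len(circulated_items & section_items) / len(section_items)
unique_borrowers = len({borrower for _, borrower in loans})

print(f"{pct_circulated:.0f}% of items circulated")  # 60% of items circulated
print(f"{unique_borrowers} unique borrowers")        # 3 unique borrowers
```

Note that repeat loans of the same item (B003 above) inflate a gross circulation count but not these two measures, which is precisely why they are more informative.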

In-house usage

Library rules and regulations often place many obstacles in the way of measuring the use of periodicals and reference materials. Since these items often do not circulate outside the library, it is difficult to produce even gross circulation figures for them.

Most ILSs include a method for recording "in-house use". While this can help to estimate collection use, it too is somewhat problematic. One cannot tell whether a volume was actually used or merely picked up and set aside as inadequate, and the method depends to a certain extent on users' disposition to leave volumes unshelved rather than return them to their regular places. Finally, certain ILSs store in-house usage in a database table separate from circulations proper, which makes these data harder to access and synthesize.
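
Even when in-house counts live in a separate table, they can be merged with circulation data at query time. A sketch using SQLite with hypothetical, simplified table and column names (real ILS schemas differ considerably):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Hypothetical, simplified ILS tables.
    CREATE TABLE circulation (barcode TEXT, loan_date TEXT);
    CREATE TABLE in_house_use (barcode TEXT, scan_date TEXT);
    INSERT INTO circulation VALUES ('B001', '2023-01-05'), ('B002', '2023-01-09');
    INSERT INTO in_house_use VALUES ('B002', '2023-01-10'), ('B003', '2023-01-11');
""")

# Stack both kinds of use into one stream, then count uses per item.
rows = conn.execute("""
    SELECT barcode, COUNT(*) AS uses FROM (
        SELECT barcode FROM circulation
        UNION ALL
        SELECT barcode FROM in_house_use
    ) AS all_use
    GROUP BY barcode ORDER BY barcode
""").fetchall()
print(rows)  # [('B001', 1), ('B002', 2), ('B003', 1)]
```

`UNION ALL` (rather than `UNION`) is deliberate: each scan or loan should count as a separate use rather than being deduplicated.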

Interlibrary loan requests

Studies applying interlibrary loan data to collection assessment include:

  • Aguilar, W. (1986). The application of relative use and interlibrary demand in collection development. Collection Management, 8(1), 15-24. (The only application of this method appears to be Ochola, J. N. (2003). Use of circulation statistics and interlibrary loan data in collection management. Collection Management, 27(1), 1-13.)
  • Byrd, G. D., Thomas, D. A., & Hughes, K. E. (1982). Collection development using interlibrary loan borrowing and acquisitions statistics. Bulletin of the Medical Library Association, 70(1), 1.

Log data

Interpreting usage data

Borin and Yi divide usage indicators into three levels:

  • Access-level indicators show how a user tries to access a resource. This is the most superficial level, as it does not demonstrate whether the patron used the resource in any meaningful way. These indicators include database logins, article hits, link resolver logins, gate counts, and search logs from catalogs and discovery layers.
  • Use-level indicators are at an intermediate level, showing that a patron actually made some attempt to use a resource. These indicators include ILL requests, article downloads and views, and circulation statistics.
  • Impact-level indicators show evidence that patrons actually found a resource valuable for their learning or research. The primary indicator at this level is citation analysis of student or faculty writing.[2]

Collection depth and breadth-based methods

Note that collection assessments that rely on collection data are only as good as that data: a complete, up-to-date inventory of the collection (whether physical or electronic holdings) is crucial to ensure that the assessment rests on an accurate foundation.[3]

These metrics are heuristics for the imprecise art of collection development[4].

Metrics

Collection size

Collection size is simply a count of how many items are held in each section.

The acquisition rate is the number of items added to a particular section in a given period of time.

Expenditures can be broken down by section for a simple metric.

These metrics are easy to understand and collect, but reveal little about how a library's collections meet patron needs. They can be somewhat illuminating when compared with a mapping of high-enrollment or flagship programs, such as the University of Michigan's mapping[5].
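
All three metrics can be derived from a single item-level export. A sketch over hypothetical records with fields (section, year_added, cost); real exports will have more fields and messier values:

```python
from collections import Counter

# Hypothetical item records: (section, year_added, cost).
items = [
    ("History", 2022, 30.0), ("History", 2023, 45.0),
    ("Science", 2023, 80.0), ("Science", 2023, 60.0),
    ("Science", 2021, 55.0),
]

size = Counter(section for section, _, _ in items)          # collection size
added_2023 = Counter(s for s, year, _ in items if year == 2023)  # acquisition rate
spend = Counter()                                           # expenditures
for section, _, cost in items:
    spend[section] += cost

print(dict(size))        # {'History': 2, 'Science': 3}
print(dict(added_2023))  # {'History': 1, 'Science': 2}
print(dict(spend))       # {'History': 75.0, 'Science': 195.0}
```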

Representation of standard lists

This method is often referred to as the checklist method: a librarian selects a standard list of titles, then checks the library's catalog to see how many of the titles are either owned by or accessible via the library. The result is reported as a total number or a percentage of titles found.
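
Mechanically, the checklist method is a set comparison between the standard list and the catalog. A sketch with hypothetical ISBN-like identifiers:

```python
# Hypothetical standard list and catalog holdings, keyed by identifier.
standard_list = {"978-0-01", "978-0-02", "978-0-03", "978-0-04"}
catalog = {"978-0-02", "978-0-04", "978-0-99"}

held = standard_list & catalog
pct = 100 * len(held) / len(standard_list)
print(f"{len(held)} of {len(standard_list)} titles held ({pct:.0f}%)")
# 2 of 4 titles held (50%)
```

In practice, matching is messier than this: different editions carry different ISBNs, and e-access through aggregators may not appear in the catalog at all, so a naive identifier match tends to undercount.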

For topic-specific assessments, standards compiled by professional and scholarly associations can be used. Another suggested technique for developing a checklist in a special field is to examine one or several important textbooks, noting each reference to a book. These references are then tabulated and summarized, and the books are listed in order of frequency of mention. By this means a reputable list of books in a given field is compiled, though the time required to prepare such a list casts some doubt on its economy. This procedure was used by librarians as early as 1937[6].
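
The tabulation step is a frequency count over cited works. A sketch, with hypothetical citation data standing in for the references extracted from each textbook:

```python
from collections import Counter

# Hypothetical lists of books cited in two important textbooks in a field.
textbook_refs = [
    ["Smith 1990", "Jones 1985", "Smith 1990", "Lee 2001"],
    ["Jones 1985", "Smith 1990", "Park 1999"],
]

# Tabulate every reference across all textbooks, then rank by frequency.
freq = Counter(ref for refs in textbook_refs for ref in refs)
for title, count in freq.most_common():
    print(title, count)
```

The ranked output becomes the checklist; a cutoff (e.g. "mentioned at least twice") keeps the list to works with some consensus behind them.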

Librarians can also consult sources such as Choice reviews, Publishers Weekly, and citation reports.

Lists can be subjective and arbitrary. Using several bibliographies in the same field guards against the bias of any single list, but can add greatly to the effort such a project requires. Lists can also become outdated quickly or otherwise be unsuitable for a library's patrons.

Some authors have argued that this method is more appropriate to smaller or specialized libraries[4].

Goldhor's inductive method

Overlap analysis

Wilson, F. L., & McGrath, W. E. (1990). Cluster analysis of title overlap in twenty-one library collections in western New York.

Percentage of available titles that a library has purchased

Choose a representative press (e.g. Columbia University Press) and see how many of its publications the library has purchased.[8]

Interpreting collection depth data

Contextualizing

There are several environmental factors to consider:

  • Nature and goals of the institution
  • Budget
  • Peer institutions
  • Consortial memberships
  • Patron demographics

For public libraries:

  • Population in service area: public libraries are often asked to report the number of items per capita. For example, the ALA used to issue a standard of how many books per capita a public library should own.[7]
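
The per-capita figure is a simple ratio, and comparing it against a target makes it actionable. A sketch with hypothetical numbers; the two-items-per-capita target below is purely illustrative, not an actual ALA standard:

```python
holdings = 48_000            # hypothetical total items owned
service_population = 20_000  # hypothetical population of the service area
target_per_capita = 2.0      # illustrative target only, not an ALA figure

per_capita = holdings / service_population
# How many items short of the target the collection is (zero if it meets it).
shortfall = max(0, target_per_capita * service_population - holdings)
print(f"{per_capita:.1f} items per capita")  # 2.4 items per capita
print(f"shortfall: {shortfall:.0f} items")   # shortfall: 0 items
```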

For academic institutions:

  • Credentials offered
  • Enrollment in various areas

Incorporating faculty expertise

Whaley Jr, J. H. (1981). An Approach to Collection Analysis. Library Resources and Technical Services, 25(3), 330-38.

Conspectus models

RLG collection levels

Physical assessment methods

Format of a collection assessment report

Intner (2003) recommends that a collection assessment report include the following sections at minimum:

  1. Executive summary
  2. Introduction including background and collecting goals
  3. Profile of the collection
  4. Comparisons of selected features of the profile with the same features at peer institutions
  5. Quality measures such as request fill rates, waiting time, use, etc., arranged by subject or department
  6. Comparisons of selected quality measures with those of peer libraries
  7. Conclusions about collection performance
  8. Recommendations
  9. Appendices containing bibliography of sources, notes on methodology, raw data, and other supporting documents [3].

References

  1. Agee, Jim (September 2005). "Collection evaluation: a foundation for collection development". Collection Building 24 (3): 92–95. doi:10.1108/01604950510608267. 
  2. Borin, Jacqueline; Yi, Hua (3 October 2008). "Indicators for collection evaluation: a new dimensional framework". Collection Building 27 (4): 136–143. doi:10.1108/01604950810913698. 
  3. Intner, Sheila S. (September 2003). "Making your collections work for you: collection evaluation myths & realities". Library Collections, Acquisitions, and Technical Services 27 (3): 339–350. doi:10.1016/S1464-9055(03)00067-8. 
  4. Lundin, Anne (1989). "List-checking in collection development: An imprecise art". Collection Management 11 (3–4): 103–112. 
  5. "Categories". Retrieved 30 November 2015. 
  6. Dalziel, Charles (1937). "Evaluation of periodicals for electrical engineers". The Library Quarterly: 354–372. 
  7. McDiarmid, Errett (1940). The library survey : problems and methods. Chicago: American Library Association. p. 102. Retrieved 10 November 2015. 
  8. McAbee, Sonja L.; Hubbard, William J. (8 June 2003). "The Current Reality of National Book Publishing Output and Its Effect on Collection Assessment". Collection Management 28 (4): 67–78. doi:10.1300/J105v28n04_05.