Research Methods in Information Science/Technical services and cataloging studies
Scope of this chapter
In Mugridge's 2014 study, the most common goal of technical services assessment was to streamline processes, and nearly all of the decisions informed by these analyses involved reallocating staff. While the authors of this book recognize the importance of such analyses, this chapter does not cover those methods; they appear instead in the chapter on Management and operational research. Likewise, numerous papers calculate the costs of cataloging work or perform cost-benefit analyses. Studies of that type are also excluded so that we may concentrate on methods that illuminate the quality of technical services work, particularly catalog records.
One issue with current cataloging research is that there is little consensus on values and metrics. Catalogers can generally identify a good record when they see one, but it's otherwise difficult to define exactly what a "good" record or practice is in this field. The following publications may be useful in identifying the values you would like to study:
- Ranganathan, S. R. (1955). Heading and canons: Comparative study of five catalogue codes. Madras: S. Viswanathan.
- Hider, P., & Tan, K. C. (2008). Constructing record quality measures based on catalog use. Cataloging & Classification Quarterly, 46(4), 338-361.
Additionally, Van Wyk (1997) offers four "performance indicators" for measuring cataloguing quality.
It seems that cataloging values may rely on extrinsic features as well. Gorman has suggested that the value of a catalog record is related to the value of the resource cataloged (cited in Stalberg, E., & Cronin, C. (2011). Assessing the cost and value of bibliographic control. Library Resources & Technical Services, 55(3), 124).
There are several values that can be quantitatively measured, but which contribute most to discovery?
- Level of authority control
- Level of typographical errors
- Process vs. big-picture focus
- ROI vs. service vs. innovation focus
- Facilitation of the FRBR user tasks
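As an illustration, the first two values above can be scored programmatically. The sketch below uses a toy record modeled as a Python dict, with a hypothetical authority list and wordlist standing in for a real authority file and dictionary; all names and data are invented for the example.

```python
# Toy sketch of scoring two quantitatively measurable record-quality values.
# AUTHORITY_FILE and DICTIONARY are hypothetical stand-ins for a real
# authority file and spelling dictionary.

AUTHORITY_FILE = {"Austen, Jane, 1775-1817", "England--Fiction"}
DICTIONARY = {"pride", "and", "prejudice", "a", "novel"}

def authority_control_level(record):
    """Share of access points that match a controlled heading."""
    points = record.get("access_points", [])
    if not points:
        return 0.0
    controlled = sum(1 for p in points if p in AUTHORITY_FILE)
    return controlled / len(points)

def typo_count(record):
    """Count title words absent from the wordlist (a crude typo proxy)."""
    words = record.get("title", "").lower().split()
    return sum(1 for w in words if w not in DICTIONARY)

record = {
    "title": "Pride and Prejudicee a novel",  # contains one misspelling
    "access_points": ["Austen, Jane, 1775-1817", "Austen, J."],
}
print(authority_control_level(record))  # 0.5: one of two headings controlled
print(typo_count(record))               # 1
```

In practice these checks would run against MARC fields and a full authority file rather than a dict and a five-word list, but the structure of the measurement is the same: each value becomes a ratio or count that can be compared across records or over time.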
Another source worth consulting: Conway, M. (2010). Research in Cataloging and Classification. Cataloging & Classification Quarterly, 19(1).
The following slide deck also offers assessment ideas: http://downloads.alcts.ala.org/ce/11202013AssessmentStrategiesCatalogingSlides.pdf
Identifying high-impact fields: Preston, C. (2008). High speed cataloging without sacrificing subject access or authority control: A case study. In K. R. Roberto (Ed.), Radical cataloging: Essays at the front. Jefferson, NC: McFarland & Co.
Radio, E. (2016). Semiotic Principles for Metadata Auditing and Evaluation. Cataloging & Classification Quarterly, 1-19.
Display understanding (see the Cost/Value Task Force report: http://connect.ala.org/files/7981/costvaluetaskforcereport2010_06_18_pdf_77542.pdf)
Note that this method can also be used to assess actual content, as in http://journal.code4lib.org/articles/7738
Measure user discovery before and after an enhancement project.
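One way to sketch such a before/after comparison, assuming discovery success is logged as a binary outcome per search session, is a two-proportion z-test. All counts below are hypothetical.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for the difference between two proportions
    (pooled standard error)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical counts: sessions that located the target record
# before and after a record-enhancement project.
z = two_proportion_z(success_a=120, n_a=400, success_b=165, n_b=400)
print(round(z, 2))  # ≈ 3.32, above the 1.96 threshold for p < .05
```

A z value beyond ±1.96 suggests the change in discovery success is unlikely to be chance alone, though a real study would also need to control for other changes to the catalog interface during the same period.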
Surveys can be administered when users request interlibrary loans or materials from storage, to identify which pieces of record data they needed and whether those data proved useful.
Balanced scorecard method
Example: Kim, D. S. (2010). Using the balanced scorecard for strategic operation of the cataloging department. Cataloging & Classification Quarterly, 48(6-7), 572-584.
Total quality management method
Example: Khurshid, Z. (1997). The application of TQM in cataloguing. Library Management, 18(6), 274-279.
References
- Mugridge, R. L. (2014). Technical services assessment. Library Resources & Technical Services, 58(2), 100-110. doi:10.5860/lrts.58n2.100. https://journals.ala.org/lrts/article/view/5337/6519
- Van Wyk, A. C. (1997). The development of performance indicators to measure cataloguing quality in the Technical Services Division of the Unisa Library with special reference to item throughput time. Mousaion, 15, 53-67.
- Schomberg, J. (2015). Examination of cataloging assessment values using the Q sort method. Cataloging & Classification Quarterly, 54(1), 1-22. doi:10.1080/01639374.2015.1072864