Report on Benchmarking Process
Benchmarking is an ongoing, systematic process for measuring and comparing the work processes of one organization to those of others that exhibit functional "best practices." The goal is to provide an external standard for measuring the quality and cost of internal processes, and to help identify where there may be opportunities for improvement. To be effective, benchmarking should be integrated into operations throughout the organization and should be an ongoing process that analyzes data collected over time. It is a learning process that helps institutions discover how they can best improve the services, direct or indirect, that they offer to their customers.
For the 1998/2000 biennium, the University of Virginia Library chose as one of its goals (6f) to institute benchmarking as a tool for the analysis of internal processes and to establish benchmarks against which the Library can measure those processes. The Library's first Benchmarking Team was created in January 1999. The Team was given a twofold charge: to create a benchmarking process for the Library, and to carry out a short-term benchmarking project as a pilot (for which there is a separate Shelving Report). The benchmarking portion of the charge included:
". . . The Team is charged with learning the benchmarking process and applying it to a specific project. The intent is that the members of this Team become the Library's core staff with knowledge of benchmarking. After learning the process, the Team members should be able to:
- assist other groups with their benchmarking projects
- assist in developing benchmarking expertise among other staff members, for example, by participating in a training program
Each May the membership of the Team will be reviewed. Those who want to remain on the Team will be joined by new members so that the Benchmarking Team can be a constantly renewed central group of experts in the process. New projects will be determined at the same time that membership is reviewed." (see Appendix 1 for full charge.)
The benchmarking pilot project was chosen by the User Services and Central Services Councils after review and discussion of the results of several user satisfaction surveys conducted in the spring of 1998. It was decided that our reshelving process was limited enough in scope for the Team to use as a pilot while learning the process.
Team members were chosen by Management Information Services staff and Kendon Stubbs, selected in part for their experience on other process improvement teams. It was also important to have representation from several departments and service units affected by the project. Two Team members were from Management Information Services to provide statistical skills and continuity for the benchmarking process. The Team consisted of David Griles from Management Information Services, Doug Moseley from Cataloging, Heather Packard from Science/Engineering, Gary Treadway from Social Sciences Services, and Lynda White from Fine Arts/Management Information Services. Two Team members from stacks supervisory staff were added within a few weeks: Don McCracken, Stacks Supervisor in Alderman, and Pam Howie, Public Services Library Assistant in Music.
The Learning Curve
The Team began its task by identifying and reading books and articles on benchmarking in industry and the military. Some literature addresses benchmarking specifically in libraries, but details on how to carry out the process in libraries are generally lacking. In addition, no training appeared to be available locally through the University: no courses are taught through Organizational Development and Training, the Commerce School, the Education School, or the Darden Business School. Inquiries to the Association of Research Libraries went unanswered. The Training Coordinator for the University Library bravely stepped in and began educating herself on the process. She was, of course, on the same learning curve as the Benchmarking Team, making it difficult to develop a timely class for the Team. A query to the LARGE_PSD listserv, asking to hear from institutions that had done a benchmarking project, brought a response from Pennsylvania State University's Sally Kalin. She graciously consented to spend some time on the phone explaining the process and to send a packet of information on the benchmarking projects in which she had participated. Fortunately, after a short time the literature became repetitive, and the Team decided it had learned enough to embark on its pilot project.
The basic benchmarking process is straightforward (see Appendix 2 for greater detail):
- Determine what to benchmark
- Form a benchmarking team
- Identify benchmark partners
- Collect and analyze benchmarking information
- Take action
The Team undertook several parts of the process simultaneously. Since minimal statistics or other data were available on our shelving process, we began to flowchart the process in all 11 libraries and to develop a survey instrument to gather data about the process as practiced at the University of Virginia Library. The questionnaire was tested by interviewing stacks supervisors in units where returned items were not all shelved by the end of each day. The initial results were inconsistent at best, and the questionnaire had to be revised several times to achieve more consistent answers.
While the Team was brainstorming questions for our internal survey, we also began to explore how to identify best shelving practices at other institutions. The literature on the shelving process is as sparse as the literature on benchmarking in libraries. Instead of relying on the literature, two electronic listservs (LARGE_PSD and CollDev) were queried with the assistance of Diane Walker and Gary Treadway. Those responding to the listserv query were first asked whether they would be willing to participate in a brief survey. The 19 institutions that responded were then sent a short survey (Appendix 3) devised to ferret out best practices at institutions similar to the University of Virginia Library. Thirteen institutions responded over the next two months, revealing much interesting data about shelving standards and staff sizes. From these responses the Team was able to identify several institutions having what appeared to be "best practices." Contacts with officers of the American Library Association's Library Administration and Management Association revealed that no LAMA committee members were aware of institutions doing either benchmarking or shelving studies.
The conversation with Sally Kalin of Pennsylvania State University about benchmarking led us to invite Gloriana St. Clair of Carnegie Mellon University in Pittsburgh to the University of Virginia. Ms. St. Clair presented basic benchmarking information to the entire Library staff, and she assisted the Team in revising the local practices questionnaire and in deciding which institutions exhibited "best practices." She also suggested that the Team was moving toward its objective at a good pace despite its reservations about the lack of training in the benchmarking process, and confirmed that the Team should stop reading and "just get on with it."
After Ms. St. Clair's visit, the Team made rapid progress revising the local questionnaire (see Appendix 4). Answers garnered in the initial staff interviews were re-entered in the revised document and were much clarified in that process.
The Team concurrently began to devise a plan to measure several things for which there was no data: how fast books are shelved (books per hour), what the turnaround time is (from return desk to shelf), how accurately books are shelved, and what the turnaround time is for pick-ups. David Griles developed the protocol and ran the Sirsi reports with which the studies were done. With the exception of Science/Engineering, Team members carried out the measurements in libraries other than their home libraries.
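The speed, turnaround, and accuracy measurements described above reduce to simple arithmetic on timestamped samples. A minimal sketch in Python, using hypothetical sample data (the layout of the actual Sirsi reports is not specified in this report):

```python
from datetime import datetime

# Hypothetical samples of (discharged at return desk, found on shelf,
# shelved in correct position?) -- illustrative values only.
samples = [
    ("1999-04-12 09:15", "1999-04-12 13:05", True),
    ("1999-04-12 10:40", "1999-04-13 08:30", True),
    ("1999-04-12 11:02", "1999-04-12 16:45", False),
]

FMT = "%Y-%m-%d %H:%M"

# Turnaround time in hours for each sampled item (return desk to shelf).
hours = [
    (datetime.strptime(shelved, FMT) - datetime.strptime(returned, FMT)).total_seconds() / 3600
    for returned, shelved, _ in samples
]
avg_turnaround = sum(hours) / len(hours)

# Shelving accuracy: fraction of sampled items shelved in the correct position.
accuracy = sum(1 for *_, ok in samples if ok) / len(samples)

print(f"Average turnaround: {avg_turnaround:.1f} hours")  # 10.5 hours for these samples
print(f"Accuracy: {accuracy:.0%}")                        # 67% for these samples
```

Shelving speed (books per hour) follows the same pattern: divide the count of items shelved during a sampling period by the shelver-hours logged in that period.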
Simultaneously, Team members began planning for site visits to the University of Arizona and Virginia Tech. These two institutions were chosen because of their reports of 4-hour turnaround time, 94%+ accuracy rates, and previously completed shelving studies. The site visits were planned for mid-April at the same time most of the Team was measuring speed, turnaround time, accuracy, and pick-ups. The site visits were essential for understanding how the best practices really worked. There is no substitute for walking through a process and having an opportunity to ask questions along the way. In addition, the host libraries were asked to fill out the same survey that had been completed by our own stacks staff. This allowed us to identify procedures that were similar and different, thus pointing to how our process could be improved.
At various points in the project, the Team apprised staff and stakeholders of progress by:
- having stakeholders on the team
- making direct contact with other stacks supervisors
- inviting Ms. St. Clair to present information on benchmarking to the entire staff
- sending an e-mail interim report mid-way through the project to Library@Virginia.edu. (See Appendix 5.)
Using and comparing data from the questionnaire, the best practices e-mail survey, the site visit reports, and our own local measurements, the Team was able to develop recommendations for changes in the shelving process at the University of Virginia. A report on the project, with these recommendations for action, was submitted to the Library's Administrative Council.
The Benchmarking Team recommends several changes in procedure for future benchmarking projects:
- Try to garner grass-roots input in choosing a project, perhaps by issuing a call for proposals to all staff. This would help ensure interest among team members and cooperation in implementing changes at the level at which the changes would be made.
- Choose team members very carefully to assure that all are interested in the project and its outcomes, and all have time to contribute to it.
- Provide a basic how-to training class that can be used on demand for new groups embarking on a benchmarking project.
- Enlist original Benchmarking Team members to provide assistance as consultants/coaches for other groups wanting to benchmark a process or product.
- Provide a support person to assist with compiling data, making travel arrangements, etc.
- Allot more time for a project, particularly if the project is larger in scope.
- Allow team members time away from their regular jobs, with someone assigned to help do the tasks they normally do.
The Team also recommends continuing and enhancing several aspects of the pilot project:
- Continue to educate the entire library staff about benchmarking
- Ensure that stakeholders are aware of project progress and can contribute to its outcome.
- Continue to provide sufficient financial support for site visits, measurements, etc.
- Ensure that the final step (take action) is not omitted.
Benchmarking can be a powerful tool for assessment and change of processes, if it becomes a working tool for all library staff.
The Team would like to thank Sally Kalin of Pennsylvania State University for elucidating the benchmarking process in libraries; Diane Walker and Gary Treadway for assisting with finding institutions with best practices; Gloriana St. Clair of Carnegie Mellon University for consenting, on very short notice, to consult with the Team and to present benchmarking to the entire staff; Kendon Stubbs for financial and moral support; Gail Oltmanns for sponsoring Ms. St. Clair's visit; and Suzanne Bombard for being so willing to embark on a training program for us.
Lynda S. White, Convener
June 4, 1999
Appendix 1: Charge to the Benchmarking Team
January 22, 1999
To: David Griles, Douglas Mosely, Heather Packard, Gary Treadway, Lynda White (Convener)
From: Kendon Stubbs
Re: Benchmarking Team
Thank you all for agreeing to serve on the University Library's first Benchmarking Team. The purpose of this Team is to fulfill goal 6f of the Library's priorities for 1998-2000: Develop and implement performance standards and benchmarks for selected library services.
Benchmarking is an ongoing, systematic process for measuring and comparing the work processes of one organization to those of others that exhibit functional "best practices." The goal is to provide an external standard for measuring the quality and cost of internal processes, and to help identify where there may be opportunities for improvement. Benchmarking should be integrated into operations throughout the organization and should be an ongoing process that analyzes data collected over time. The Library's priorities indicate that it is time to embark on this learning process in order to discover how we can best improve the services, direct or indirect, that we offer to our patrons.
To accomplish this the Team is charged with learning the benchmarking process and applying it to a specific project. The intent is that the members of this Team become the Library's core staff with knowledge of benchmarking. After learning the process, the Team members should be able to:
- assist other groups with their benchmarking projects
- assist in developing benchmarking expertise among other staff members, for example, by participating in a training program.
Each May the membership of the Team will be reviewed. Those who want to remain on the Team will be joined by new members so that the Benchmarking Team can be a constantly renewed central group of experts in the process. New projects will be determined at the same time that membership is reviewed.
The project this Team is charged to undertake is benchmarking the shelving/reshelving process in all University Library service units. The project should include these processes:
- map and measure the current process in each library
- determine benchmarking partners with best practices for the shelving process
- communicate with those partners about their process
- compare their practices with the University Library's process
- recommend improvements in current practices based on the best practices of our benchmarking partners.
Additional staff from stacks operations should be invited to join the Team during this initial benchmarking process. Recommendations should be ready by May 15, 1999.
Appendix 2: Process outline
- Determine what to benchmark
- Identify customers
- Grassroots request
- Administration request
- Produce summary of requirements and review with customer
- Customer agrees to provide support and resources
- Identify critical success factors
- Convert them into measures where possible
- Review them with customer
- Form a benchmarking team (4-6 people)
- Identify type of benchmarking team to organize (intact work team, task force, ad hoc team)
- Select team members based on ability and motivation
- Allocate sufficient resources (time, funding, process support)
- Identify, train, orient internal benchmarking specialists
- Brief senior management regarding its role in supporting the team
- Brief support staff on their roles in the process
- Supply project planning tools; train team members to use effective project management techniques
- Identify benchmark partners
- Allocate sufficient time to investigate best-practices partners
- Identify specific databases and other resources related to the benchmarking investigation
- Attempt to investigate nontraditional sources of best-practices partners (for us perhaps Blockbuster Video or Barnes & Noble)
- Do not limit potential partners to those that are familiar, geographically convenient, or friendly to your organization
- Collect and analyze benchmarking information
- Identify information collection methodologies
- Develop information-gathering protocol and review with team members
- Prepare structured interview outlines
- Inform research/library staff of benchmarking information needs
- Use multiple methods of collecting information (phone calls, site visits, surveys)
- Secure sufficient resources to conduct investigation
- Collect internal benchmarking information
- Prepare briefing package to use with benchmarking partners
- Train team in use of information matrices
- Check benchmarking information for patterns, misinformation, omissions, etc.
- Take action
- Produce summary report of benchmarking investigation
- Send copy of report to benchmarking customers and partners
- Report should recommend: specific product/process improvements; learning opportunities; formation of functional networks
- Continue to improve the benchmarking process along with improvements in product/process
Source: Spendolini, Michael J., The Benchmarking Book. NY: Amacom, 1992.
Appendix 3: Questionnaire for Respondents to Listserv Query
- Do you have standards for reshelving already in place? In particular, standards for books shelved per hour, percent shelved without error, turnaround time from check-in to shelf.
- Have you done a shelving study of any kind? If so, what was the focus?
- How many branch libraries are on your campus?
- How many of these branches fall into the 75,000-300,000 volume range?
- How many volumes are in your main library?
- How many items were returned from circulation to your main library last year (1997/98)?
- How many new books were added to your main library collection last year?
- How many items were picked up around the library (from the floor, photocopiers, tables, etc.) and reshelved last year?
- What is the frequency of these pick-ups (daily, each shift, etc.)?
- How many searches for missing materials were requested by patrons at your main library?
- How many FTE stacks employees work in your main library?
- How many FTE student shelvers work in your main library?
- How do you manage massive returns at the end of a term or academic year?
- Would you be willing to host a site visit by a small team from the University of Virginia? (I can provide some details of what we have in mind if you would like, but this would basically be an opportunity to share information face-to-face.)
Appendix 4: Shelving Questionnaire
- If you have a flowchart of your shelving process, please attach a copy.
- How many times is a book handled from the time it is returned to your library until it is shelved? How many additional times is it handled if it is returned to a different library?
- How much time does it take for an item to get from the return desk in your library to the shelf?
- Do you have standards for shelving quantity and quality? If so, what are they? How were they measured? For example: number of books shelved per hour; percentage of errors/accuracy rate
- How often are book return drops cleared?
- How often are tables, photocopiers, carrels, shelves, etc. cleared of books left by patrons? What is the process for reshelving the items collected?
- What additional steps, if any, are involved in processing new books for shelving?
- Describe each step of your book sorting process, from check-in to final sorting onto trucks before shelving, and who performs it.
- If you have set due dates, how do you handle the massive returns at those dates?
- Are there areas in which it is difficult to shelve? How do you manage shelving in those areas?
- Are the stacks shelf read?
- Who normally does the shelving? For each group, please give the pay rate, the total number of people, and the average number of hours per week per person:

  Students:  pay rate $____   total # of people ____   avg. # of hours per week per person ____
  Staff:     pay rate $____   total # of people ____   avg. # of hours per week per person ____
  Temps:     pay rate $____   total # of people ____   avg. # of hours per week per person ____
- Do shelvers have a fixed work schedule?
- How is shelving assigned and supervised?
- What other tasks do shelvers perform in the stacks besides shelving?
- Do shelvers have work assignments outside of the stacks?
- Describe the process of training shelvers and who does the training.
Appendix 5: Benchmarking Team Interim Report
March 12, 1999
The University Library's Benchmarking Team undertook, in January, a project to find improvements for shelving practices in our libraries. At this point in the project we thought an update for staff might be useful.
The shelving project was chosen collaboratively by Central and User Services Councils, in part in response to surveys done last spring which indicated that patrons sometimes have difficulty finding books in the stacks. We are looking at how long it takes for a book to reach its shelf after it has been discharged at any library and at how accurately it is shelved.
The Benchmarking Team began by learning the process of benchmarking. This process improvement tool has been in use in the business community for over a decade, but it has only recently migrated to the academic community. As a result, it has been difficult to find information both about the process as it applies to libraries and about benchmarking projects focused specifically on shelving. The Team invited Gloriana St. Clair, Director of Libraries at Carnegie Mellon University, to the University to help us with our benchmarking process and to speak to the library staff about benchmarking.
The Team has concurrently begun learning more about our own shelving processes in each library in the Alderman system. We have created flowcharts for each library and for Government Documents. We have also done a brief survey on how each location shelves its books and journals and on some of the factors that contribute to the shelving process, such as training, number and level of employees, pay rates, LEO delivery, new book routines, pick-up routines, and sorting areas. We discovered that many of our locations already shelve excellently, completing shelving by the end of each day. We are about to begin measuring the shelving process in the larger libraries to see how long shelving takes. By the time we finish these measurements we will have enough information about our own process to compare ourselves to institutions that shelve more quickly and accurately than we do.
Another component of the project has been to identify those institutions. Since there are only a handful of shelving studies in the literature, we had to find another way to establish which institutions have "best practices" for shelving. With assistance from Diane Walker and Gary Treadway, we queried several listservs asking for participants for a short survey. From the responses we received, the Team was able to establish that the University of Arizona has the best turnaround time for shelving (4 hours!). Members of the Team will travel to Tucson in April to find out how they do this.
We encourage other staff to think about their own work processes or bottlenecks and to consider how the benchmarking process can help simplify them and/or find solutions.
Bibliography

Allan, Ferne C., "Benchmarking: practical aspects for information professionals," Special Libraries, v.84, no.3 (summer 1993), p.123-129.
Alstete, Jeffrey W., Benchmarking in higher education; adapting best practices to improve quality, Washington: George Washington University Graduate School of Education and Human Development, 1996.
Buchanan, Holly S. and Joanne Marshall, "Benchmarking reference services: step-by-step," Medical Reference Services Quarterly, v.15, no.1 (spring 1996), p.1-13.
Camp, Robert C., Benchmarking; the search for industry best practices that lead to superior performance, Milwaukee: ASQC Press, 1989.
Coult, Graham, "Measuring up to the competition," The Library Association Record, v.98, no.9 (September 1996), p.471.
Davis, Robert I., and Roxy A. Davis, How to Prepare for And Conduct a Benchmark Project. Department of Defense, 7/15/94, http://www.dtic.mil.c3i/bprcd/0135.htm
Finnigan, Jerome P., The manager's guide to benchmarking, San Francisco: Jossey-Bass Publishers, 1996.
Garrod, Penny. "Benchmarking development needs in the LIS sector," Journal of Information Science, v.23., no.2 (1997), p.111-18.
Gohlke, Annette, "Benchmarking Basics for Librarians," Military Librarians Workshop, Dayton, Ohio, Nov., 1997, http://www.sla.org/division/dmil/mlw97/gohlke/index.htm
Gohlke, Annette, "Benchmarking for strategic performance improvement," Information Outlook (August 1997), p.22-24.
Hanson, Emil O., "In search of the benchmark institution," College & University, v.70, no.3 (spring 1995), p.14-19.
Jurow, Susan, et al., Benchmarking Interlibrary Loan: a pilot project. OMS Occasional Paper #18. Washington: Association of Research Libraries, 1995.
_____, et al., "Tools for measuring and improving performance," Journal of Library Administration, v.18, nos.1-2, p.113-126.
Kaufman, Roger and William Stuart, "Beyond conventional benchmarking: integrating ideal visions, strategic planning, reengineering, and quality management," Educational Technology, v.35, no.3 (May-June 1995), p. 11-14.
Lawes, Ann, "The benefits of quality management to the library and information services profession," Special Libraries, v.84, no.3 (summer 1993), p.142-146.
Library Benchmarking International, http://ns1.world-net.net/users/lbi/lb_ex.html
Library Materials Availability Group, LIMA Findings and Recommendations, April 10, 1998. University of Virginia Library
NACUBO Benchmarking Project, http://www.nd.edu/%7Espbusop/benchmark/nacubo1.htm
Northern Territory University Library, Benchmarking Project, http://www.ntu.edu.au/library/bench2.html
O'Neil, Rosanna M., Total quality management in libraries: a sourcebook, Englewood, CO: Libraries Unlimited, 1994.
Pritchard, Sarah M., "Library benchmarking: old wine in new bottles?" The Journal of Academic Librarianship, v.21, no.6 (November 1995), p.491-495.
Robertson, Margaret, and Isabella Trahn, "Benchmarking academic libraries: an Australian case study," Australian Academic & Research Libraries, v.28, no.2.
Shaughnessy, Thomas W., "Benchmarking, total quality management, and libraries," Library Management & Administration, v.7, no.1 (winter, 1993), p.7-12.
Spendolini, Michael J., The Benchmarking Book. NY: Amacom, 1992.
St. Clair, Gloriana, "Benchmarking and restructuring at Penn State Libraries," in Restructuring Academic Libraries, edited by Charles A. Schwartz, Chicago: Association of College and Research Libraries, 1997.
St. Clair, Guy, "Benchmarking, total quality management, and the learning organization: new management paradigms for the information environment, introduction," Special Libraries, v.84, no.3 (summer 1993), p.120-122.
_____, "Benchmarking, total quality management, and the learning organization: new management paradigms for the information environment, a selected bibliography," Special Libraries, v.84, no.3 (summer 1993), p.155-157.
_____, "The future challenge: management and measurement," Special Libraries, v.84, no.3 (summer 1993), p.151-154.
Stuart, Crit and Miriam Drake, "TQM in research libraries," Special Libraries, v.84, no.3 (summer 1993), p.131-136.
Tucker, Sue, Benchmarking, a guide for educators. Thousand Oaks, CA: Corwin Press, 1996.