Establishing the Bar by Which We Measure Performance
Letter From the Executive Director, October 2022
With this week’s announcement of the first Nobel Prizes of 2022, it is worth considering how we measure performance. It is often difficult to ascertain whether something has been a success, particularly in the short term. Organizations often put forward performance metrics and base their assessments on those criteria. Whether those criteria are actually tied to success and impact can be difficult to discern, outside of strictly quantitative measures such as sales targets or budget expectations. Even over longer terms, it can be a challenge to determine whether one’s work has made the difference one had hoped, or whether the mission an organization has set for itself has been, or is being, achieved.
Assessment of scholarship is a similarly complex question, with many nuances that make gauging outcomes and their impacts fraught. Traditional bibliometrics have sought to capture performance information based on citations. While potentially helpful, citations provide only a single, incomplete window onto a single output of a research endeavor. NISO’s early work on altmetrics, and subsequent work to define metrics for research data, are attempts to create a fuller picture of how we measure scholarly success.
For example, a project can extend well beyond a single paper or grant, or might spread across multiple organizations. Ideally, we would like to combine these various elements to develop a comprehensive picture of what has happened across a research activity. A soon-to-be-published ISO standard, the Research Activity Identifier (RAiD), aims to capture and aggregate the elements of a research project into a cohesive virtual record, linking its outputs, contributors, institutions, and grants through a global identification structure. If the project itself could be identified, rather than only its component outputs, we would be better positioned to describe, discover, and assess the performance of the larger effort.
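To make the idea concrete, here is a minimal sketch, in Python, of what such an aggregated project record might look like. The field names and structure are illustrative only and are not drawn from the actual RAiD schema; the point is simply that one persistent project-level identifier ties together the persistent identifiers of outputs (DOIs), people (ORCID iDs), organizations (ROR IDs), and grants.

```python
# Illustrative sketch only -- not the actual RAiD schema.
# One persistent project-level identifier aggregates the PIDs
# of everything a research activity touches.

project_record = {
    "project_id": "https://raid.example.org/10.1234/abcd",  # hypothetical project PID
    "title": "Example multi-institution research activity",
    "outputs": [
        {"type": "article", "doi": "10.1000/example.article"},
        {"type": "dataset", "doi": "10.1000/example.dataset"},
    ],
    "contributors": [
        {"name": "A. Researcher", "orcid": "https://orcid.org/0000-0000-0000-0000"},
    ],
    "organizations": [
        {"name": "Example University", "ror": "https://ror.org/00example0"},
    ],
    "grants": [
        {"funder": "Example Foundation", "award_id": "EF-2022-001"},
    ],
}

# With a project-level identifier, assessment tooling can walk the
# whole graph of a research activity rather than one paper at a time.
for output in project_record["outputs"]:
    print(output["type"], "->", output["doi"])
```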
This is particularly the case when assessing the impact of grants made by philanthropic or government funding bodies. Assessment of a grant’s impact can lag months or years behind the funded period, since the resulting outputs might only be published after the grant has ended; frequently, this information does not become available until well after the relationship between funder and researcher has concluded. Even gathering these data can be a challenge for funders in the absence of structured grant acknowledgment information.
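As a small illustration of what structured grant acknowledgment information can look like, the sketch below attaches funding metadata to a publication record, loosely modeled on the funder fields that registries such as Crossref collect (funder name, funder identifier, award number). The exact shape shown here is hypothetical.

```python
# Hypothetical funding-acknowledgment metadata for one publication,
# loosely modeled on the funder fields collected by registries such
# as Crossref (funder name, funder identifier, award number).

publication = {
    "doi": "10.1000/example.article",
    "funding": [
        {
            "funder_name": "Example Foundation",
            "funder_id": "https://doi.org/10.13039/000000000000",  # Funder Registry-style DOI
            "award": "EF-2022-001",
        }
    ],
}

# A funder can then locate its outputs by matching structured award
# data instead of parsing free-text acknowledgment sections.
def outputs_for_award(publications, award_id):
    return [
        p["doi"]
        for p in publications
        if any(f["award"] == award_id for f in p.get("funding", []))
    ]

print(outputs_for_award([publication], "EF-2022-001"))
```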
Persistent identifiers and metadata are essential elements of connecting and assessing the impact of science. That, in part, was the rationale for including them in the White House OSTP “Nelson” memo on ensuring free, immediate, and equitable access to federally funded research, released in August. Other funders are focused on access and assessment as well; these topics were central to the conversations at a September meeting I participated in, hosted by Altum, on managing the grant-making process. It is worth taking a look at the visual rendering of the meeting to explore some of the ideas discussed there.
To support greater understanding of the assessment process and how libraries play a role in it, NISO will again focus on this topic as part of our fall educational program. Starting this month, we will once more offer our incredibly popular assessment training program, led by Martha Kyrillidou, a consultant focusing on management, evaluation, assessment, and R&D activities. Beginning on October 11, this six-part series will cover topics including qualitative and quantitative methods, organizational goals, research methods, and tools. There is still time to register for the series.
Defining the processes by which metrics are derived, along with the identification and metadata structures that make these assessments possible, has been a core element of NISO’s work since the 1960s, when Information Services and Use: Metrics & Statistics for Libraries and Information Providers (Z39.7) was first published. We continue to support a variety of work that facilitates understanding the impact of research, including laying the groundwork for future assessment through projects such as RAiD. As with all performance measurement, it may be years before we know whether our efforts have been a success.