
Meaningful Metrics (First of Two)

Webinar

Scope

It’s time to revisit metrics. How can they be made more meaningful and illuminating? Publishers, librarians, and their vendors use similar vocabulary (such as usage), but what they mean by their terminology (downloads, referrals, etc.) and how they interpret it may differ. What data needs to be collected? How long is it retained? What are appropriate data-sharing practices? How should providers measure use of open educational resources? Or use of open access monographs? Can we come to agreement on the meaning of the behavioral data that may be automatically gathered? In short, how can we make metrics more meaningful?

Participants in this two-part webinar will examine and discuss these issues and more from a variety of perspectives. Confirmed participants in this initial segment include Rebecca Kennison, Principal, K|N Consultants, representing HuMetricsHSS; Brian Cody, Founder, Scholastica; Marie McVeigh, Head of Editorial Integrity, Web of Science, Clarivate; Rachel Borchardt, Science Librarian, American University; and Stacy Konkiel, Senior Data Analyst, Altmetric.


Event Sessions

Roundtable Discussion

Speakers

Marie McVeigh

Lead, Peer Review Operations and Publication Integrity
Mary Ann Liebert

We expect that the conversation will touch on the following:

What are some of the challenges in assessing a particular form of output or activity? Are there outputs (possibly unique to a particular discipline or community) that are problematic for purposes of gauging value or contribution?

What data might be useful to collect in order to gauge the value or contribution of a particular output or activity? (Views, downloads, citations, tweets, etc. have all been used as metrics, some more meaningful than others. What data might one collect to evaluate teaching or committee work?)

How complete is the resulting data set? What are the gaps? What gets missed or can’t be counted? Are there other mechanisms available or other considerations that are under-utilized in evaluating or assessing scholarly engagement? 

What are the complexities in developing appropriate metrics? What nuances need to be factored in? 

In evaluating and comparing contributions, we have a tendency to emphasize that which is easily quantified. For example, we count citations, and citation metrics have fallen prey to abuse as a result. What types of practices or mechanisms might be put in place to counter such tendencies?

Given the reliance on metrics in high-stakes decision-making, what steps might be taken to prevent stakeholders from misusing metrics, or to protect those being evaluated from such abuse? (This is an opportunity to bring in discussion of DEI.)

Historically, the reward system in academia has included metrics tied to publication (acquisition of a manuscript by a university press and the book’s subsequent sales) and to use (citation activity, library circulation, etc.). What additional (new) metrics does this panel think might be useful as more meaningful indicators of contribution?

What kinds of shared initiatives might make sense in the current environment? What types of partnerships would benefit the community as a whole?

Resources shared by our panel:

The transformative power of values-enacted scholarship – Article by Nicky Agate, Rebecca Kennison, Stacy Konkiel, Christopher P. Long, Jason Rhody, Simone Sacchi, and Penelope Weber

Metaphor and Metrics – Article by Marie McVeigh

Shared by Stacy Konkiel:

Metrics Toolkit

OurResearch Website (Impactstory Profiles Project)

Just Ideas? The Status and Future of Publication Ethics in Philosophy – A White Paper by Yannik Thiem, Kris F. Sealey, Amy E. Ferrer, Adriel M. Trott, and Rebecca Kennison

Additional Information

NISO assumes that organizations will register as a group. The model assumes that an unlimited number of staff will watch the live broadcast in a single location, but registration also includes access to an archived recording of the event for those who may have timing conflicts.

NISO understands that, during the current pandemic, staff at a number of organizations may be practicing safe social distancing or working remotely. To accommodate those workers, we are allowing registrants to share the sign-on instructions with all colleagues so that they may join the broadcast directly, irrespective of their geographical location. 

Registrants receive sign-on instructions via email on the Friday prior to the virtual event. If you have not received your instructions by the day before an event, please contact NISO headquarters for assistance via email (nisohq@niso.org). 

Registrants for an event may cancel participation and receive a refund (less $35.00) if notice of cancellation is received at NISO HQ (nisohq@niso.org) at least one full week prior to the event date. If notice is received less than seven days before the event, no refund will be provided.

Links to the archived recording of the broadcast are distributed to registrants 24–48 hours following the close of the live event. Access to that recording is intended for the internal use of staff at the registrant’s organization or institution. Speaker presentations are posted to the NISO event page.

Broadcast Platform

NISO uses the Zoom platform to broadcast our live events. Zoom provides apps for a variety of computing devices (tablets, laptops, etc.). To view the broadcast, you will need a device that supports the Zoom app. Attendees may also choose to listen to the audio only on their phones; sign-on credentials include the necessary dial-in numbers, if that is your preference. Once notified of their availability, recordings may be downloaded from the Zoom platform to your machine for local viewing.