NISO Plus Forum on AI Yields Ideas for Future Standards Work

What emerged from attendee engagement in DC?

The second NISO Plus Forum was held last month in Washington, DC, on the topic of artificial intelligence (AI). For months, people in our community have been focusing a great deal of attention on AI's uses, applications, and implications. In the classroom, in industry, and even in the White House, the topic of AI has been on everyone's mind, with questions and concerns ranging from "will people use AI ethically?" to whether large language models are illegally consuming copyrighted works, to whether the models are biased and untrustworthy. The questions are as numerous as the reasons for concern. For our community, it makes sense to consider how we can meaningfully contribute to the ongoing discussions. While most of those in the NISO community are not developing or tweaking natural language models, there are aspects of these systems where information professionals can influence this growing trend.

We leaned heavily into the in-person nature of the NISO Plus Forum, using a “World Café” format in which participants moved from table to table for 15–20 minute conversations, then moved again, and again, reshuffling the makeup of each table repeatedly throughout the day. While AI was the general theme, we moved through three specific areas of discussion over the course of the day: services, data, and ethics. Although the style may seem chaotic and unlikely to yield outcomes, it was remarkable to see the groups coalesce around a set of ideas and areas of common concern.

At the end of the day, each table was asked to summarize its ideas and perspectives. In reporting back to the group, each table offered a potential project idea as something concrete that NISO and its membership could undertake to improve our collective understanding of AI systems, their training, or their application in our community. Many of the ideas centered on questions of trust and ensuring systems’ validity. Among the ideas proposed were:

  • Guidance on disclosure of the use of AI in content development
  • Development of a Trusted Corpus standard to assess the quality of the material used to train a language model
  • Development of a training series that teaches people how to use AI tools
  • Launching an AI Summit to bring together the community to discuss how responsibility is distributed among publishers, libraries, funders, and institutions for AI deployment in scholarly communications
  • Creating a Responsible AI Framework for scholarly communications and creating consequences for inappropriate use of AI tools
  • Establishing a standard for disclosure of AI involvement in content creation that describes how AI was used, including the methods, training, and human involvement
  • Creating an AI Literacy Toolkit that can be used to train our community on appropriate uses, legal questions, and relevant standards
  • Building a terminology standard for how data has been used and how AI tools have been applied in a situation
  • Defining a standard for testing models and certifying their training data, including provenance information on what was used to train the system 
  • Establishing an attribution system that recognizes the way data was incorporated into a language model

The purpose of the Forum was to generate ideas that can be used to foster further discussion at the upcoming NISO Plus Conference that will be held in Baltimore in February. While some of these ideas are formative and exploratory, there are several that could hold potential for future NISO work. People in the room were asked whether they would consider engaging with those project ideas, as a means of gauging their popularity. Those responses will inform the conference programming committee’s selection of ideas that will be included as conference sessions. Ideally, as more conversations take place about these ideas, and as they are more widely shared, they could lead to projects that NISO initiates as recommended practices or standards for the community to adopt.