AI in the Working Environment
NOTE: How are organizations handling the introduction of artificial intelligence internally? This page gathers research, documentation, and policies developed in response. Shown here are relatively recent materials that came to the attention of NISO I/O. Moving forward, this resource page will be periodically updated; contact Jill O'Neill, Editor-in-Chief, NISO Information Organized (I/O), at joneill@niso.org with your recommendations.
Research and Documentation
July 2024
Elsevier Insights 2024: Attitudes Towards AI (News Announcement, Elsevier, July 9, 2024)
Artificial intelligence (AI) is expected to transform research and healthcare, yet adoption of AI for work use remains low as does use of even the most popular AI platforms like Bard and ChatGPT, according to a new study by Elsevier, a global leader in scientific information and data analytics. The Insights 2024: Attitudes toward AI report, based on a survey of 3,000 researchers and clinicians across 123 countries, reveals that both groups see AI having the greatest potential in accelerating knowledge discovery, increasing work quality and saving costs.
June 2024
Will US Federal Agencies Appoint Chief AI Officers? (Tweet, Taylor & Francis, June 20, 2024)
New bill could see every U.S. federal agency get its own Chief Artificial Intelligence Officer to oversee responsible development and deployment of AI.
The Importance of Data Labeling in AI (Blog Post, Access Innovations, June 18, 2024)
Data labeling, also referred to as data annotation, involves tagging or annotating data with labels that provide meaningful context. Labeled data serves as the foundation for supervised learning, where models learn to map inputs to outputs based on the examples provided. The quality and quantity of labeled data directly impact the accuracy and performance of the resulting AI models. High-quality labeled data enables models to generalize well to new, unseen data.
Making Data AI-Ready (Blog Post, Dryad, June 11, 2024)
AI-ready data refers to data that is organized, evaluated by Dryad data curators, and prepared in a way that makes it easy for researchers to utilize for AI modeling. Dryad provides a large corpus of this kind of well-structured, well-documented data. This data can be combined with datasets from specialist repositories and a researcher’s own data to create comprehensive datasets that fuel AI-driven research. Accessing the wide range of datasets from a “generalist” platform like Dryad, and potentially combining them with data sourced elsewhere, facilitates the integration of knowledge “from various fields and knowledge systems” that can lead to “more accurate models and foster curiosity-driven research.” Dryad is also an invaluable resource for researchers who lack access to expensive equipment or to distant or off-limits field sites, or who face other barriers to collecting the data they need themselves.
ARL/CNI 2035 Scenarios: AI-Influenced Futures (The Association of Research Libraries (ARL) and the Coalition for Networked Information (CNI), June 2024)
The Association of Research Libraries (ARL) and the Coalition for Networked Information (CNI) have chosen to apply scenario planning to imagine a future influenced by artificial intelligence (AI) and to explore the range of uncertainty associated with AI in the research and knowledge ecosystem. The scenarios were developed through a highly consultative process leveraging the expertise of the ARL/CNI Joint Task Force on Scenario Planning for AI/ML Futures and the ARL and CNI communities and facilitated by Stratus, LLC. The strategic focus and critical uncertainties highlighted in the scenarios were identified through extensive stakeholder engagement with the ARL and CNI membership during the winter of 2023 and spring of 2024 and involved over 300 people. Input was provided through focus groups, workshops, and one-on-one interviews.
April 2024
ARL Releases Guiding Principles for Artificial Intelligence (Tweet, Association of Research Libraries (ARL), April 29, 2024)
The Association of Research Libraries (ARL) has issued a set of “Research Libraries Guiding Principles for Artificial Intelligence.” AI technologies, and in particular, generative AI, have significant potential to improve access to information and advance openness in research outputs. AI also has the potential to disrupt information landscapes and the communities that research libraries support and serve. The increasing availability of AI models sparks many possibilities and raises several ethical, professional, and legal considerations.
Developing Community Consensus in Using AI For Writing (Featured Article, American Association for the Advancement of Science (AAAS), April 19, 2024)
When and how should text-generating artificial intelligence (AI) programs such as ChatGPT help write research papers? In the coming months, 4,000 researchers from a variety of disciplines and countries will weigh in on guidelines that could be adopted widely across academic publishing, which has been grappling with chatbots and other AI issues for the past year and a half. The group behind the effort wants to replace the piecemeal landscape of current guidelines with a single set of standards that represents a consensus of the research community.
AI Index Report 2024 Reveals Accelerating Activity (Press Release, Stanford Institute for Human-Centered Artificial Intelligence (HAI), April 15, 2024)
The full report has nine chapters addressing the impact of artificial intelligence in the context of research and development, technical performance, responsible AI, the economy, science and medicine, education, policy and governance, diversity, and finally, public opinion. Each chapter opens with a list of highlights for that section.
Artificial Intelligence and Critical Infrastructure (RAND Analysis) (Report, RAND Corporation, April 2, 2024)
This report from the RAND Corporation is one in a series of analyses on the effects of emerging technologies on U.S. Department of Homeland Security (DHS) missions and capabilities... Authors were charged with developing a technology and risk assessment methodology for evaluating emerging technologies and understanding their implications within a homeland security context.
March 2024
Evolving AI Strategies in Libraries (ARL Findings) (News Announcement, Association of Research Libraries (ARL), March 29, 2024)
The onset of new, more accessible, artificial intelligence (AI) technologies marks a significant turning point for libraries, ushering in a period rich with both unparalleled opportunities and complex challenges. In this era of swift technological transformation, libraries stand at a critical intersection. To effectively chart this transition, two quick polls were conducted among members of the Association of Research Libraries (ARL).
A View on AI and Privacy Concerns (Tweet, Axios, March 14, 2024)
Axios looks at issues of privacy as impacted by the current excitement surrounding generative AI.
Privacy is the next battleground for the AI debate, even as conflicts over copyright, accuracy and bias continue.
Why it matters: Critics say large language models are collecting and often disclosing personal information gathered from around the web, often without the permission of those involved.
February 2024
The Impact of Artificial Intelligence by 2040 (Tweet, Elon University, February 29, 2024)
Elon University’s Imagining the Digital Future Center conducted a two-pronged study in late 2023 to develop an outlook for the impact of artificial intelligence on individuals and societal systems by 2040. Research findings were gathered using two methodologies: a national public opinion survey and a canvassing of hundreds of global technology experts.
Proposal for New Software Framework for LLMs (Tweet, Social Science Research Network (SSRN), February 28, 2024)
LLMs have the potential to transform experimental economic research. This study proposes a new software framework to design experiments between LLMs & integrate with human subjects.
Newly Released White Paper from Responsible AI UK (White Paper, Responsible AI UK (RAI UK), February 26, 2024)
The AI Fringe was a series of events hosted in London and across the UK in October and November 2023 to complement the UK Government-hosted AI Safety Summit. It brought a broad and diverse range of voices into the conversation and expanded the discussion around safe and responsible AI beyond the AI Safety Summit’s focus on Frontier AI safety.
White House Seeks Input on Closed vs. Open AI Systems (Featured Article, Associated Press, February 21, 2024)
The Biden administration is wading into a contentious debate about whether the most powerful artificial intelligence systems should be “open-source” or closed.
The White House said Wednesday it is seeking public comment on the risks and benefits of having an AI system’s key components publicly available for anyone to use and modify. The inquiry is one piece of the broader executive order that President Joe Biden signed in October to manage the fast-evolving technology.
Working with Artificial Intelligence Inside the Organization (Tweet, Vischer, February 21, 2024)
An ongoing blog series on the safe, sensible, and lawful use of artificial intelligence in the enterprise.
January 2024
AAP Announces Support for AI-Certification Nonprofit (Tweet, Association of American Publishers, January 22, 2024)
AAP’s Maria Pallante is advising Fairly Trained, which certifies generative AI for ‘training’ without copyright infringement.
Policy, Safeguards, and Oversight
Copyright Office Releases Initial Segment of AI Report (U.S. Copyright Office, Library of Congress, July 31, 2024)
Today, the U.S. Copyright Office is releasing Part 1 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI), addressing the topic of digital replicas. This Part of the Report responds to the proliferation of videos, images, or audio recordings that have been digitally created or manipulated to realistically but falsely depict an individual. Given the gaps in existing legal protections, the Office recommends that Congress enact a new federal law that protects all individuals from the knowing distribution of unauthorized digital replicas. The Office also offers recommendations on the elements to be included in crafting such a law.
What Concerns Underlie the Colorado AI Act? (News Announcement, The Future of Privacy Forum, May 9, 2024)
The Colorado AI Act (CAIA) is the first comprehensive and risk-based approach to artificial intelligence (AI) regulation in the United States. This overview highlights the law’s requirements governing the private sector's use of AI, including developer and deployer obligations, consumer rights for transparency and the ability to appeal, and enforcement. CAIA will become effective February 1, 2026.
White House Seeks Input on Open vs Closed AI Systems (Tweet, Associated Press, March 2024)
Tech companies are divided on how open they make their AI models, with some emphasizing the dangers of widely accessible AI model components and others stressing that open science is important for researchers and startups. Among the most vocal promoters of an open approach have been Facebook parent Meta Platforms and IBM.
A View on AI and Privacy Concerns (Tweet, Axios, March 2024)
Axios looks at issues of privacy as impacted by the current excitement surrounding generative AI... Read the assessment of vulnerabilities here.