Federal Engagement in Artificial Intelligence Standards: Report from the Field
As a result of the Executive Order on Maintaining American Leadership in Artificial Intelligence, issued in February 2019, the National Institute of Standards and Technology (NIST) held a workshop on May 30 to bring a variety of stakeholders together to discuss ways in which the US Government can and should be involved in standards for the various technologies collectively known as artificial intelligence (AI). In attendance were representatives from federal agencies, from technology companies and business, and from other standards organizations.
The schedule for the workshop was designed to address both business and economic development standards needs and to take a deeper look at federal standards needs for AI usage. The day was divided into morning and afternoon sessions that reflected these two perspectives, and each session began with a panel discussion on the perspective at hand. Participants were then divided into smaller working groups, where each group discussed the various needed standards in more detail.
During the opening panel on the proper role for the federal government in the AI space from a business perspective, NIST presented a framework for discussion that broke AI implementation into segments:
- Data collection and processing
- Model
- Verification and validation
- Deployment
- Operation and monitoring
- Decision or prediction
The question put before the first panel, and the subject of the morning breakout session, was whether this model was useful for thinking about the role of standards in AI and where varying types of standards might fit. Panel members from Microsoft and Nvidia represented technology and business concerns, while Joshua New from the Center for Data Innovation (https://www.datainnovation.org) and Lynne Parker, Assistant Director for AI at the White House Office of Science and Technology Policy, brought the governmental perspective.
The current political climate is one of low-to-no regulation, and that was clearly reflected in the opening discussion, as all of the speakers reiterated a desire to find a balance between standards and innovation. Things got more interesting during the breakout sessions, where the mix of perspectives pushed the conversation toward a more detailed and nuanced examination of what role standards might play in ongoing AI development.
Three aspects of AI standards development that emerged from the breakouts struck me as most fruitful and interesting: the role of ethical standards in AI development and how those relate to technical standards; the role of oversight and human control at varying levels of the NIST AI model; and the varying ways that "standards" might be interpreted in order to do the work needed in a very complicated technology (technical standards, ethical standards, licensing as a standard, performance standards, outcome measurements, and more). The first topic saw the most agreement, with a strong desire for governmental agencies to develop ethical standards tied to technical standards, such that the details of the latter don't undermine the former.
The role of human oversight and input was also a topic of heavy discussion, both for the aforementioned ethical checks and as a check-and-balance for efficacy and iteration. While certain technical standards might be verifiable entirely by the systems themselves, other outcome-based or model-specific standards will need to be overseen and corrected as AI systems become more complex. Lastly, the small groups heavily debated the varying types of standards that might be implemented as part of an AI system: technical metadata standards for interoperability and communication, outcome-based measurement standards for the reduction or elimination of bias in decision making, and the licensing of parts of the systems (either human or algorithmic) as arbiters of "fairness."
NIST expects to produce a report from the workshop within the month, which will then inform a plan to be presented to the White House before August of this year. I believe that NISO has an opportunity to be an ongoing part of this work, especially in the early stages of the NIST AI model. As a group with special experience in standards for data collection and processing, NISO could be very valuable in ensuring that the necessary data and metadata are well-suited to the AI future that is upon us.