Artificial Intelligence (AI) is rapidly transforming our world. Remarkable surges in AI capabilities have led to a wide range of innovations, including autonomous vehicles and connected Internet of Things devices in our homes. AI is even contributing to the development of a brain-controlled robotic arm that can help a paralyzed person feel again through complex direct human-brain interfaces. These new AI-enabled systems are revolutionizing and benefiting nearly all aspects of our society and economy, from commerce and healthcare to transportation and cybersecurity. But the development and use of these new technologies are not without technical challenges and risks.
NIST contributes to the research, standards and data required to realize the full promise of AI as a tool that will enable American innovation, enhance economic security and improve our quality of life. Much of our work focuses on cultivating trust in the design, development, use and governance of AI technologies and systems.
NIST's AI efforts fall into several categories:
NIST's AI portfolio includes fundamental research on and development of AI technologies, including software, hardware, architectures, and human interaction and teaming, that are vital for computational trust in AI.
AI approaches are increasingly an essential component of new research. NIST scientists and engineers use various machine learning and AI tools to gain deeper insight into their research. At the same time, NIST's laboratory experiences with AI are leading to a better understanding of AI's capabilities and limitations.
With a long history of devising and revising metrics, measurement tools, standards and test beds, NIST is increasingly focusing on evaluating the technical characteristics of trustworthy AI.
NIST leads and participates in the development of technical standards, including international standards, that promote innovation and public trust in systems that use AI. A broad spectrum of standards for AI data, performance and governance is, and increasingly will be, a priority for the use and creation of trustworthy and responsible AI.
A fact sheet describes NIST's AI programs.
AI and machine learning (ML) are changing the way society addresses economic and national security challenges and opportunities. They are being used in genomics, image and video processing, materials, natural language processing, robotics, wireless spectrum monitoring and more. These technologies must be developed and used in a trustworthy and responsible manner.
While answers to the question of what makes an AI technology trustworthy may differ depending on whom you ask, certain key characteristics support trustworthiness, including accuracy, explainability and interpretability, privacy, reliability, robustness, safety, security (resilience), and mitigation of harmful bias. Principles such as transparency, fairness and accountability should be considered, especially during deployment and use. Trustworthy data, standards, and evaluation, validation and verification are critical for the successful deployment of AI technologies.
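To make the idea of measurable trustworthiness characteristics concrete, the brief sketch below shows one way two of them might be quantified for a binary classifier: overall accuracy, and a simple demographic-parity gap as a rough indicator of harmful bias. This is an illustrative example only, with made-up data and hypothetical function names; it is not a NIST evaluation tool or a complete assessment of any characteristic.

```python
# Illustrative sketch: quantifying two trustworthiness characteristics of a
# binary classifier. All names and data here are hypothetical.
import numpy as np


def accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of predictions that match the ground-truth labels."""
    return float(np.mean(y_true == y_pred))


def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups,
    used here as a crude proxy for one kind of harmful bias."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return float(abs(rate_a - rate_b))


if __name__ == "__main__":
    # Toy evaluation data: true labels, model predictions, and a binary
    # group attribute (all invented for illustration).
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])

    print(f"accuracy:               {accuracy(y_true, y_pred):.2f}")
    print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.2f}")
```

In practice, evaluating trustworthy AI involves many more metrics, datasets and contextual judgments than this toy example suggests; the point is simply that such characteristics can be expressed as measurable quantities amenable to standards and benchmarking.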
Delivering the needed measurements, standards and other tools is a primary focus for NIST’s portfolio of AI efforts. It is an area in which NIST has special responsibilities and expertise. NIST relies heavily on stakeholder input, including via workshops, and issues most publications in draft for comment.