Leaders at the National Institute of Standards and Technology reiterated the need for a common lexicon of terms relevant to artificial intelligence systems and machine learning models as the technologies continue to advance and be used globally.
Speaking during an Athens Roundtable event on Friday, Elham Tabassi, NIST’s associate director for emerging technologies at the Information Technology Laboratory, discussed both the need for and the challenges of gathering a broad, diverse community response to standardizing technical language.
Tabassi and other officials have been working on federal policy that seeks to measure and account for a broad range of societal risks posed by emerging technologies like AI. Part of that work is developing standards and metrics for risks such as bias, discrimination and other hazards. She noted that marrying recommendations from scientific, quantitative professionals with those from experts with social sciences backgrounds is challenging, but necessary, for democratically developed AI standards.
While NIST helms the development of standards and accompanying measurements, Tabassi said that reaching consensus on the specific language and terms governing the use and development of AI systems takes priority over developing relevant metrics.
A common set of technical standards will also help researchers develop better methods to evaluate various components of AI systems, such as their trustworthiness and efficacy. Tabassi said this includes evaluating a given system not just for technical robustness but for societal robustness as well.
She added that future evaluation standards should emphasize where and how a human user is involved with an AI system when grading its trustworthiness.
Following President Joe Biden’s executive order on AI, NIST has been given a slew of new responsibilities in federal AI regulation and research. Tabassi said her agency must develop guidelines for evaluating AI systems’ capabilities along with associated cybersecurity guidance, create test environments and operate the US AI Safety Institute.