NTIA Urges US Govt to Actively Monitor AI Model Risks

It recommends collecting evidence on their capabilities, limitations, and information content.
The U.S. government should monitor potential risks associated with powerful AI models and be ready to take action if risks escalate, according to a new report from the National Telecommunications and Information Administration (NTIA). The report examines the risks and benefits of dual-use foundation models — large, complex models trained on extensive datasets and adaptable for various applications. 

“Dual-use foundation models with widely available model weights introduce a wide spectrum of benefits. They diversify and expand the array of actors, including less resourced actors, that participate in AI R&D. They decentralize AI market control from a few large AI developers. And they enable users to leverage models without sharing data with third parties, increasing confidentiality and data protection,” NTIA said in the report.

However, it also notes that making the weights of certain foundation models widely accessible could pose risks to national security, equity, safety, privacy, and civil rights. These risks may arise from potential misuse, ineffective oversight, or inadequate accountability mechanisms.

Hence, it recommends a three-part framework for the federal government to actively monitor the risks and benefits of dual-use foundation models with widely accessible weights. 

This framework includes: collecting evidence on their capabilities, limitations, and information content; evaluating this evidence against specific thresholds of concern; and potentially implementing appropriate policy measures based on these evaluations. 

Earlier this year, NTIA launched a ‘Request for Comment’ on the risks and benefits associated with advanced AI models with widely available model weights.
