Michael Boyce, director of the Department of Homeland Security's (DHS) newly established AI Corps, proposed that generative AI tools used across the Federal government should undergo a risk evaluation process akin to the Federal Risk and Authorization Management Program (FedRAMP), MeriTalk reported.
FedRAMP, overseen by the General Services Administration (GSA), standardizes security assessment, authorization, and continuous monitoring for cloud products and services used by Federal agencies.
Boyce expressed a desire for a FedRAMP-like process tailored specifically to generative AI; however, the Office of Management and Budget (OMB) memo does not currently include provisions for such a process.
Earlier in March, OMB issued its finalized policy document governing the use of AI within Federal agencies, fulfilling a key requirement of the administration's October 2023 AI executive order.
“I think the OMB memo envisions that it will be more pushed to the individual agencies for those AI-specific risks and the reason is because we don’t have a centralized mechanism for managing all operational risks across the government,” he said at the ATO and Cloud Security Summit.