MITRE AI Sandbox to Train Models for Weather Prediction, Cybersecurity, Social Services

MITRE will leverage its NVIDIA-powered supercomputer to train the models.

MITRE, a federally funded non-profit known for its cybersecurity research, has announced that it will develop three AI foundation models covering critical infrastructure cybersecurity, weather modeling, and sustainable social services.

Named the MITRE Federal AI Sandbox, the initiative will draw on the non-profit's new computing capability, an NVIDIA DGX SuperPOD supercomputer that will be used to train and deploy these advanced AI models.

"AI has immense potential to transform the government's service to its citizens and address important challenges ranging from improving resilience of critical infrastructure to making Medicare sustainable. The Federal AI Sandbox will be a resource for federal agencies to enable AI solutions,”  Charles Clancy, MITRE Senior Vice President and Chief Technology Officer, said.

MITRE is looking to train new foundation models designed to: 

  • Help cybersecurity experts prevent, detect, and respond to security breaches in critical infrastructure by analyzing massive, complex data sets, enabling security operations centers to proactively identify and respond to threats.

  • Enhance high-resolution weather modeling with improved precision and speed to help communities receive more accurate local forecasts and better prepare for urgent weather hazards.

  • Revolutionize government agencies' ability to provide citizens with fair and timely access to benefits by transforming millions of pages of law, agency policy, and regulations into tools that help streamline government workflows.

Earlier this year, MITRE also opened a new facility focused on examining the risks of government use of artificial intelligence.

Named the AI Assurance and Discovery Lab, the facility is designed to evaluate the risks inherent in AI-enabled systems through simulated scenarios, red-teaming exercises, and "human-in-the-loop experimentation," among other methods.

The lab will also assess systems for bias and give users control over how their personal information is used, according to the announcement.
