The Ministry of Digital Development and Information (MDDI) has introduced guidelines to strengthen AI governance in Singapore and make the country's AI ecosystem more business-friendly.
The safety guidelines will form part of AI Verify, a testing framework and software toolkit for generative AI model developers and app deployers.
In an opening address at Personal Data Protection Week earlier this month, Minister for Digital Development and Information Josephine Teo stressed the importance of mitigating data risk for businesses and consumers to trust AI-enabled products and services.
Teo noted that data will remain critical for businesses deploying applications on top of existing LLMs, both now and "likely for many more years." However, bias in datasets is a risk in AI development, since models inherit the flaws of the data they are trained on. Similarly, generative AI models trained on datasets containing personally identifiable information (PII) may surface that information when prompted.
“If these risks are not mitigated, businesses and consumers alike may find it difficult to trust AI-enabled products and services. Without a foundation of trust, support for AI innovations could diminish over time,” said Teo.
With this in mind, the guidelines seek to establish a baseline, common standard for generative AI development with a focus on transparency and testing.
The Infocomm Media Development Authority (IMDA) will start consulting the industry on these guidelines to ensure they are "relevant and robust," said Teo.