The Dark Side — How Governance Can Prevent GenAI Misuse and Ensure Trust


What is generative AI, and how can it be misused?

Artificial Intelligence, or AI, is a field of computer science that focuses on creating machines and software that can perform tasks that usually require human intelligence. These tasks include learning from experience, recognizing patterns, understanding natural language, and making decisions.

AI systems use algorithms and vast amounts of data to simulate human thinking. Generative AI (GenAI) is AI that can be used to generate images, text, video, or audio based on prompts.

The success of GenAI in recent years has also attracted "bad actors" who use these tools for malicious activity. In cybersecurity terms, GenAI can be abused by bad actors of all kinds: they can create fake personas or cloned voices for phishing scams and extortion. When it comes to GenAI, security means ensuring that no one can be duped by fake creations.

Generative AI used in social engineering

In social engineering, an attacker pretends to be someone the victim trusts: a boss, a family member, or even a close friend. A "bad actor" could use AI on a video call to impersonate your boss and request a transfer of funds, or clone a family member's voice on a phone call, feigning distress, so that the victim unknowingly sends money to the attacker.

The intent behind both scams is the same: to have the "mark" transfer money to an unrelated party, most likely the bad actors themselves.

A finance worker at a multinational firm was tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s Chief Financial Officer in a video conference call, according to Hong Kong police.

The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said.

When applied, the AIFA Standards would make this kind of attack far harder to pull off. On joining the conference call, you would see that the video was AI-generated, or hear over the phone that the voice was being modified by AI, and know to stop all contact with that person immediately, because they are not who they claim to be.

Generative AI, deepfakes, and election tampering

Deepfakes are becoming more and more prevalent, especially around elections. AI can be used to generate false news articles or videos that show a political figure engaging in actions that would damage their character.

This is a major issue because these deepfakes can look convincingly real. At that level of realism, everyday people browsing the internet cannot tell real from fake, which can tarnish the reputation of political figures, celebrities, or even your average community member.

Artificial intelligence tools for creating fake video, audio and other content will likely give foreign operatives and domestic extremists “enhanced opportunities for interference” as the 2024 US election cycle progresses, the Department of Homeland Security said in a recent bulletin distributed to state and local officials and obtained by CNN.

The AIFA Standards, when followed, give the viewer or listener proof of any use of AI generation. They help identify a deepfake as a deepfake: not real, and not to be taken seriously.

AI Freedom Alliance

AIFA is a multi-industry governance group. Our contributors come from manufacturing, marketing, social media, banking, payment processing, publishing, healthcare, insurance, cybersecurity, communication, venture capital, agriculture, academia, and local government IT shops.

We are working with the Ohio Senate on Senate Bill 217, which deals with deceptive GenAI content.

The mission of the group is to develop standards that help mitigate the malicious use of AI and promote safe and beneficial innovation.

AIFA recently created standards GAI-1 and GAI-2 to govern the creation and presentation of AI-generated audio and video. The main thrust of the standards is that AI-generated content must be identified as such, and that media player software should be able to recognize the content as AI-generated through a watermark or similar identifier.
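To make the idea concrete, here is a minimal sketch of how a generation tool could attach such an identification at creation time. The manifest format, field names, and the write_ai_manifest helper are our own illustration, not part of the GAI-1/GAI-2 text; a sidecar JSON file stands in for whatever watermark a real implementation would embed.

```python
import hashlib
import json
from pathlib import Path

def write_ai_manifest(media_path: str, generator: str) -> Path:
    """Declare a media file as AI-generated via a sidecar JSON manifest.

    Hypothetical illustration only: GAI-1/GAI-2 require that content be
    identified, but do not prescribe this particular file format.
    """
    media = Path(media_path)
    manifest = {
        "ai_generated": True,
        "generator": generator,
        # Hashing the media binds the declaration to this exact file,
        # so the label cannot be silently detached or swapped.
        "sha256": hashlib.sha256(media.read_bytes()).hexdigest(),
    }
    out = media.with_name(media.name + ".aifa.json")
    out.write_text(json.dumps(manifest, indent=2))
    return out

# Example: a generation pipeline would call this as its final step.
# write_ai_manifest("interview_clip.mp4", generator="ExampleGen v2")
```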

Interestingly enough, Facebook has recently started working on technical standards of this kind, agreeing that this type of transparency would help resolve many issues with the technology.

Guidance on execution of AIFA standards

AIFA proposes a watermark, either visible or embedded, in audio and video content created or altered using AI. This lets people recognize fabricated news articles or audio recordings for what they are. GAI-1 and GAI-2 simply create a level of transparency that gives consumers of AI-generated content a mechanism to identify it.
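On the consumption side, a media player could check for that declaration before playback. Again, this is a minimal sketch against the hypothetical sidecar manifest from the earlier example, not an implementation of the standards themselves; a real player would read an embedded watermark instead.

```python
import hashlib
import json
from pathlib import Path

def is_declared_ai_generated(media_path: str) -> bool:
    """Return True if the media carries a valid AI-generated declaration.

    Checks the hypothetical ".aifa.json" sidecar from the earlier sketch.
    """
    media = Path(media_path)
    manifest_path = media.with_name(media.name + ".aifa.json")
    if not manifest_path.exists():
        return False  # no declaration present
    try:
        manifest = json.loads(manifest_path.read_text())
    except json.JSONDecodeError:
        return False  # a malformed declaration is treated as absent
    # Recompute the hash so a copied label cannot vouch for other content.
    digest = hashlib.sha256(media.read_bytes()).hexdigest()
    return manifest.get("ai_generated") is True and manifest.get("sha256") == digest

# A player could then surface the result, e.g. overlay an "AI-generated"
# banner whenever is_declared_ai_generated(path) returns True.
```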

A recent article by AIFA contributor Joey Cox explored ways that content may be marked as AI-generated.

Conclusion

The growing sophistication of generative AI leaves its consumers unable to tell what is real from what is fake. AIFA proposes simple transparency standards that allow companies and individuals to easily make that determination.

We hereby call upon technology companies involved in AI to self-govern and apply these transparency standards within their software.

About the Authors:

John Farhat has been active in the IT industry for over three decades. He currently serves as CEO of the family office overseeing its technology assets. These include cybersecurity and IT management under the Mission Control IT Services brand, custom phone systems under the Loquantur brand, and advisory/client-representation services through Farhat Services Company.

As a serial entrepreneur, Farhat is also involved with a number of startups in the fintech and agricultural sectors. Outside of his work, he helped found the AI Freedom Alliance, a grassroots group of multi-disciplinary small and medium companies focused on AI governance.

Bradley Bank is a junior cybersecurity engineer delivering managed solutions to clients of all sizes and industries. Bank loves the creative process of building and testing managed stacks to deliver complete solutions to clients. He interacts with AI daily as these capabilities become available in industry tools.

Within the AI Freedom Alliance, Bank represents the younger generation as the future beneficiary of AI, and the one who will live with its consequences. He sees an urgent need for governance to create a better future.
