Fighting GenAI Cybercrime — 5 Ways AI Can Go to Battle for You


The cybersecurity industry is always trying to stay a step ahead of the world's bad actors. With the recent emphasis on Artificial Intelligence (AI), the industry needs to elevate its security problem-solving to new heights through the use of automation, natural language processing, and deep learning.

Within the last year, generative AI has made cybercrime more like child's play. This article will explore how AI (particularly generative AI) is being leveraged for both good and ill in the ever-evolving world of cybersecurity.

While AI can bring a plethora of benefits to a business, it is crucial to remember that it is used maliciously too. Generative AI, the type of AI that produces text, images, and videos, is helping attackers write malware to carry out sophisticated campaigns in less time, making it harder for security experts to keep up.

Recent attacks leveraging the power of generative AI range from widespread business email compromise (BEC) phishing campaigns, to manipulated medical imagery that tricks healthcare professionals into misdiagnosing cancer, to scams targeting unsuspecting victims with AI-augmented voice calls.

The democratization of generative AI has turned cybercrime from a complex, technical exploit into something simple.

Even a newbie with a keyboard and access to the Internet can write code and carry out a cyberattack with ease. The diminishing barrier to entering the cybercrime market is credited in large part to the development of generative AI apps like ChatGPT, GPT-4, and Google Bard.

Nonetheless, it is wise for C-level security experts to understand how these tools are being used by malicious actors so that they can better protect their organizations.

Generative AI and its uses

Social engineering is perhaps the most predominant threat presented with the evolution of generative AI. No matter their native language or motive, anyone can use ChatGPT or similar tools to generate convincing dialogues or scripts that can be used to manipulate people.

This enables phishing, smishing, and pretexting attacks that steal valuable information from users, because the grammar and spelling errors most people rely on to spot scams are gone. The versatility of large language models also lets cybercriminals scale these social engineering campaigns effectively at a fraction of the cost.

These same tools have been used many times to deliver dangerous malware to unsuspecting victims. Bad actors have hijacked Facebook accounts, groups, and pages, taken out malicious Facebook ads, and even created fake ChatGPT software to distribute malware.

What’s more concerning is that researchers have found the code used in these malware attacks is often polymorphic or mutating, which means it can evade endpoint detection and response (EDR) systems altogether.

AI also enables hacking and theft. Advanced NLP chatbot features allow anyone to pose as someone else and trick people into handing over sensitive information such as passwords, bank account numbers, or other personally identifiable information (PII).

Lastly, some experts believe that ChatGPT may have already been used to carry out nation-state cyberattacks.

Even now, the attention of bad actors is starting to shift to the AI tools themselves. ChatGPT suffered a breach earlier this year, and while the vulnerability behind it was dealt with swiftly, the incident showed that these tools are not invulnerable to cyberattacks, adding yet another strategy to hackers' playbooks.

The Proliferation of AI-based cybercrime

As “citizen hackers” become more proficient in their use of ChatGPT or similar apps to carry out cybercrime, it will become increasingly difficult to defend against them without also using AI to level the playing field.

Undoubtedly, the threat landscape is not what it used to be even a year ago. For starters, there is now an overwhelming amount of data within every organization that needs to be analyzed to identify and mitigate cyber threats.

This alone means that traditional protection solutions are no longer effective against increasingly sophisticated threats.

With the shifting cybersecurity landscape and evolving threats making the war on cybercrime increasingly complex, it is critical to implement tools and solutions that can provide actionable and timely intelligence to prevent cybersecurity breaches before they happen.

Here are 5 ways AI can go to battle for you

Research shows that many organizations are responsive to the need for AI, or at least some automation capabilities in the fight against cyber threats. A 2023 study by BlackBerry revealed that the majority (83%) of IT decision-makers plan to invest in AI-driven cybersecurity in the next two years, and almost half (48%) plan to invest before the end of 2023.

This is a step in the right direction. The more we learn about the benefits of AI in fighting cybercrime, the more abundantly clear our need for it becomes.

1. Removing the mundane 

AI automates routine tasks by analyzing large amounts of data in seconds so that security analysts can focus on more complex and critical duties, such as incident response and threat hunting.

In a recent IBM survey, 80% of SOC workers said that having to manually investigate threats slows their overall threat response time. With AI, analysts aren’t sifting through logs, alerts, and reports in hopes of identifying potential threats.
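As a minimal sketch of what "removing the mundane" can look like, the snippet below auto-closes obvious noise and surfaces the remaining alerts in risk order. All field names, IPs, and thresholds are illustrative assumptions, not taken from any specific SIEM product.

```python
from dataclasses import dataclass

# Hypothetical alert record; fields are illustrative, not from a real SIEM.
@dataclass
class Alert:
    source_ip: str
    severity: int            # 1 (low) .. 10 (critical)
    matched_signatures: int  # how many detection rules fired

KNOWN_BENIGN = {"10.0.0.5"}  # e.g. an internal vulnerability scanner

def triage(alerts):
    """Auto-close obvious noise; return the rest ordered by risk."""
    actionable = [a for a in alerts
                  if a.source_ip not in KNOWN_BENIGN and a.severity >= 4]
    return sorted(actionable,
                  key=lambda a: (a.severity, a.matched_signatures),
                  reverse=True)

alerts = [
    Alert("10.0.0.5", 9, 3),      # internal scanner -> auto-closed
    Alert("203.0.113.7", 8, 2),   # external source, high severity -> queued
    Alert("198.51.100.2", 3, 1),  # low severity -> auto-closed
]
queue = triage(alerts)
```

In practice the filtering rules would be learned or AI-assisted rather than hard-coded, but the payoff is the same: analysts start from a short, ranked queue instead of raw logs.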

2. High-speed analytics and insights 

IBM’s survey also revealed that SOC members spend one-third of their day validating and investigating incidents that turn out not to be real threats. This is where large language models and AI algorithms can make all the difference.

They can rapidly process and analyze massive amounts of data to recognize patterns, make inferences, and perform many actions on behalf of the user. This helps analysts identify and prioritize risks efficiently.
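One simple form of that pattern recognition is collapsing duplicate events into incidents, so analysts validate a handful of incidents instead of hundreds of raw alerts. The sketch below uses only the standard library; the rule names and IPs are made up for illustration.

```python
from collections import Counter

# Hypothetical raw events as (rule_name, source_ip) pairs.
events = [
    ("failed_login", "203.0.113.7"),
    ("failed_login", "203.0.113.7"),
    ("failed_login", "203.0.113.7"),
    ("port_scan", "198.51.100.2"),
]

# Collapse duplicates into one incident per (rule, source) with a count,
# then rank incidents by volume: 4 raw alerts become 2 incidents here.
incidents = Counter(events)
ranked = incidents.most_common()
```

A production system would cluster on richer features (time windows, user accounts, behavioral similarity), but the principle, compressing many signals into a few prioritized findings, is the same.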

3. Vulnerability Intelligence

AI and ML models can be used to predict the exploitability of a threat and prioritize those vulnerabilities with a higher chance of weaponization based on scores from vulnerability intelligence platforms.

There are attack surface monitoring solutions that are powered by artificial intelligence (AI) and machine learning (ML) to ensure that trending vulnerabilities are effectively identified. As a final step, vulnerability validation by human security analysts is recommended.
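The prioritization idea can be sketched as weighting severity by predicted likelihood of weaponization, in the spirit of combining a CVSS base score with an EPSS-style exploit probability. The CVE identifiers, scores, and the simple multiplicative weighting below are illustrative assumptions, not output from any real vulnerability intelligence platform.

```python
# Hypothetical vulnerability records: CVSS base score (0-10) and a
# model-predicted probability of exploitation (0-1).
vulns = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "exploit_prob": 0.02},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "exploit_prob": 0.90},
    {"cve": "CVE-2023-0003", "cvss": 5.0, "exploit_prob": 0.10},
]

def priority(v):
    # Weight severity by likelihood of weaponization: a medium-severity
    # bug being actively exploited can outrank a critical one that isn't.
    return v["cvss"] * v["exploit_prob"]

patch_order = sorted(vulns, key=priority, reverse=True)
```

Note how the critical-but-unlikely CVE-2023-0001 drops to the bottom of the patch queue, which is exactly the kind of re-ranking a human analyst should then validate.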

4. Improved accuracy

AI can help optimize anomaly detection by using big data and deep learning to identify and reduce false positives, which significantly diminishes the workload of human analysts while also improving the accuracy and efficiency of threat detection.

AI learns more when provided with real data. Whether that data is newly incoming or historical, the model will continue to advance and improve its performance over time. This is also where the human aspect remains vital: analyst feedback can be used to hone the AI's skill.
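As a toy illustration of that feedback loop, the sketch below flags statistical outliers against a learned baseline, and folds analyst-confirmed false positives back into the baseline so the detector stops flagging them. The metric, values, and threshold are illustrative assumptions; real systems use far richer models than a z-score.

```python
from statistics import mean, stdev

# Illustrative baseline: daily outbound-data volumes (GB) for one host.
baseline = [1.1, 0.9, 1.3, 1.0, 1.2, 0.8, 1.1]

def is_anomalous(value, history, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

def record_false_positive(value, history):
    """Analyst feedback: fold a confirmed-benign value into the baseline."""
    history.append(value)
```

After an analyst marks a flagged value as benign via `record_false_positive`, the widened baseline absorbs it and similar values no longer trip the detector, reducing false positives over time.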

5. Effective prioritization 

Data generated by AI can give analysts early risk alerts, warning them about vulnerabilities that are likely to be exploited soon. This helps them prioritize their most dangerous exposures and patch proactively.

If you pair this strategy with effective threat modeling and routine breach and attack simulations, the exploitable holes in your organization’s security plan will be detected faster and fixed before attackers can get in.

Generative AI will inevitably change the way the world works and security is no exception. The cybercrime market is already booming as a result. With no experience required, bad actors can purchase cybercrime-as-a-service and around-the-clock support to carry out sophisticated attacks.

Ransomware starter kits are even a popular purchase for those looking to exploit a vulnerability or weak link. Trying to fight AI-based cybercrime without AI-based security tools is like bringing a knife to a gunfight.

But, take heart. AI can provide an extreme advantage to modern organizations. Not only can it improve the technologies that companies already have by introducing enhanced automation capabilities, but it can also help companies stay one step ahead of attackers by generating advanced analytics and allowing human analysts to operate with a clearer picture of the threats around them.

However, for all of the benefits that AI can bring to the table, it should never be a standalone solution. AI is a tool, and when it is utilized in the workplace, its output should always be checked and vetted by a human.

That is why it is important to keep up proper training for your employees so that awareness of AI threats, such as phishing, is a common thread throughout your organization. Being proactive rather than reactive, especially within the cybersecurity industry, is the approach that will better position security experts to tackle the AI challenges that arise.

Investing in proactive technology that will enhance your cybersecurity posture is a great first step. Proactive technologies like Attack Surface Management (ASM) powered by Vulnerability Intelligence (VI) are there to help companies obtain a comprehensive view of their attack surface and thwart looming threats created using generative AI.

About the author:
Kiran Chinnagangannagari is the Chief Product and Technology Officer at Securin. He is a highly accomplished and experienced executive with extensive experience in key leadership roles at major multinational companies. Chinnagangannagari was the Co-Founder, President, and Chief Technology Officer at Zuggand, an Amazon Web Services Advanced Consulting Partner.

Before Zuggand, he was the Chief Technology Officer of the state of Arizona, where he was instrumental in advancing IT strategy and enabling efficient, innovative, and sustainable services. Passionate about helping people find solutions that make their lives easier, Chinnagangannagari brings a deep understanding of leveraging technology to solve business challenges. 

CDO Magazine
www.cdomagazine.tech