
Generative AI in Cybersecurity: The Battlefield, The Threat, & Now The Defense

The Battlefield

What began as excitement over the capabilities of Generative AI has quickly turned to concern. Generative AI tools such as ChatGPT, Google Bard, and DALL-E continue to make headlines because of security and privacy concerns. They are even prompting questions about what is real and what is not. Generative AI can pump out highly plausible, and therefore convincing, content. So much so that at the conclusion of a recent 60 Minutes segment on AI, host Scott Pelley left viewers with this statement: “We’ll end with a note that has never appeared on 60 Minutes, but one, in the AI revolution, you may be hearing often: the preceding was created with 100% human content.”

The Generative AI cyber war begins with this convincing, lifelike content, and the battlefield is wherever hackers leverage Generative AI tools such as ChatGPT. These tools make it extremely easy for cybercriminals, especially those with limited resources and zero technical knowledge, to carry out social engineering, phishing, and impersonation attacks.

The Threat

Generative AI has the ability to fuel increasingly sophisticated cyberattacks.

Because the technology can produce convincing, human-like content with ease, new cyber scams leveraging AI are harder for security teams to spot. AI-generated scams can come in the form of social engineering attacks such as multi-channel phishing campaigns conducted over email and messaging apps. A real-world example might be an email or message containing a document, sent to a company executive by a third-party vendor via Outlook (email) or Slack (messaging app) and directing the recipient to click on it to view an invoice. With Generative AI, it can be almost impossible to distinguish a fake email or message from a real one, which is what makes it so dangerous.

One of the most alarming developments, however, is that with Generative AI, cybercriminals can produce attacks in multiple languages, regardless of whether the hacker actually speaks the language. The goal is to cast a wide net, and cybercriminals won’t discriminate against victims based on language.

The advancement of Generative AI signals that the scale and efficiency of these attacks will continue to rise.

The Defense

Cyber defense against Generative AI has notoriously been the missing piece of the puzzle. Until now. By using machine-to-machine combat, or pitting AI against AI, we can defend against this new and growing threat. But how should this strategy be defined, and what does it look like?

First, the industry must act to pit computer against computer instead of human against computer. To follow through on this effort, we must turn to advanced detection platforms that can detect AI-generated threats and reduce both the time it takes to flag and the time it takes to resolve a social engineering attack that originated from Generative AI. That is something a human alone is unable to do.

We recently ran a test of what this could look like. We had ChatGPT generate a language-based callback phishing email in multiple languages to see whether a Natural Language Understanding platform, or advanced detection platform, could detect it. We gave ChatGPT the prompt, “write an urgent email urging someone to call about a final notice on a software license agreement.” We also instructed it to write the email in both English and Japanese.

The advanced detection platform immediately flagged the emails as a social engineering attack, but native email controls such as Outlook’s built-in phishing detection could not. Even before the release of ChatGPT, social engineering conducted through conversational, language-based attacks proved successful because such messages could dodge traditional controls, landing in inboxes with no link or payload. So yes, it takes machine-versus-machine combat to defend, but we must also make sure we are using effective artillery, such as an advanced detection platform. Anyone with these tools at their disposal has an advantage in the fight against Generative AI.
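The detection platform in our test is proprietary, but the signals involved in callback phishing can be illustrated with a toy example. The following is a minimal, hypothetical rule-based scorer, not the platform described above: it looks for urgency wording plus a callback phone number and the absence of any link, the combination that lets these messages slip past URL- and attachment-based filters. All names and thresholds here are illustrative assumptions.

```python
import re

# Hypothetical heuristic sketch, NOT the vendor platform described above.
# Scores an email body for callback-phishing signals: urgency wording,
# a phone number to call, and no link or attachment payload.
URGENCY_TERMS = ["urgent", "final notice", "immediately", "act now", "expires"]
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")   # loose phone-number pattern
URL_RE = re.compile(r"https?://", re.IGNORECASE)

def callback_phishing_score(body: str) -> int:
    text = body.lower()
    score = sum(term in text for term in URGENCY_TERMS)
    if PHONE_RE.search(body):      # a callback number is the lure
        score += 2
    if not URL_RE.search(body):    # no link: nothing for URL filters to catch
        score += 1
    return score

def is_suspicious(body: str, threshold: int = 3) -> bool:
    return callback_phishing_score(body) >= threshold
```

A scorer this naive is exactly what conversational, AI-generated attacks defeat: the wording varies freely across drafts and languages, which is why the article argues for NLU-based platforms that model intent rather than fixed keywords.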

Given the scale and plausibility of social engineering attacks enabled by ChatGPT and other forms of Generative AI, machine-to-machine defense can also be refined. For example, this defense can be deployed in multiple languages. Nor does it have to be limited to email security; it can be applied to other communication channels such as Slack, WhatsApp, and Teams.

Remain Vigilant

While scrolling through LinkedIn, one of our employees came across a Generative AI social engineering attempt: a strange “whitepaper” download ad with what can only generously be described as “bizarro” ad creative. On closer inspection, the employee spotted the telltale color pattern in the lower-right corner that is stamped on images produced by DALL-E, an AI model that generates images from text-based prompts.

Encountering this fake LinkedIn ad was a stark reminder of the new social engineering dangers that emerge when such attacks are coupled with Generative AI. It is more critical than ever to be vigilant and suspicious.

The age of Generative AI being used for cybercrime is here, and we must remain vigilant and be prepared to fight back with every tool at our disposal.
