
Disrupting malicious uses of AI by state-affiliated threat actors

Based on collaboration and information sharing with Microsoft, we disrupted five state-affiliated malicious actors: two China-affiliated threat actors known as Charcoal Typhoon and Salmon Typhoon; the Iran-affiliated threat actor known as Crimson Sandstorm; the North Korea-affiliated actor known as Emerald Sleet; and the Russia-affiliated actor known as Forest Blizzard. The identified OpenAI accounts associated with these actors were terminated.

These actors generally sought to use OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks.

Specifically: 

  • Charcoal Typhoon used our services to research various corporations and cybersecurity tools, debug code and generate scripts, and create content likely to be used in phishing campaigns.
  • Salmon Typhoon used our services to translate technical papers, retrieve publicly available information on multiple intelligence agencies and regional threat actors, assist with coding, and research common ways processes could be hidden on a system.
  • Crimson Sandstorm used our services for scripting support related to app and web development, generating content likely for spear-phishing campaigns, and researching common ways malware could evade detection.
  • Emerald Sleet used our services to identify experts and organizations focused on defense issues in the Asia-Pacific region, understand publicly available vulnerabilities, help with basic scripting tasks, and draft content that could be used in phishing campaigns.
  • Forest Blizzard used our services primarily for open-source research into satellite communication protocols and radar imaging technology, as well as for support with scripting tasks.

Additional technical details on the nature of these threat actors and their activities can be found in the Microsoft blog post published today.

The activities of these actors are consistent with previous red team assessments we conducted in partnership with external cybersecurity experts, which found that GPT-4 offers only limited, incremental capabilities for malicious cybersecurity tasks beyond what is already achievable with publicly available, non-AI-powered tools.
