Q: How can artificial adversarial intelligence play the role of a cyber attacker, and how does it portray a cyber defender?
A: Cyber attackers exist along a competence spectrum. At the lowest end, there are so-called script kiddies, or threat actors who spray well-known exploits and malware in the hopes of finding some network or device that hasn’t practiced good cyber hygiene. In the middle are cyber mercenaries who are better resourced and organized to prey upon enterprises with ransomware or extortion. And, at the high end, there are groups that are sometimes state-supported, which can launch the most difficult-to-detect “advanced persistent threats” (or APTs).
Consider the specialized, nefarious intelligence that these attackers marshal: that is adversarial intelligence. The attackers build very technical tools that allow them to hack into code, they choose the right tool for their target, and their attacks have multiple steps. At each step, they learn something, integrate it into their situational awareness, and then decide what to do next. For the sophisticated APTs, they may strategically pick their target and devise a slow, low-visibility plan so subtle that its execution escapes our defensive shields. They can even plant deceptive evidence pointing to a different hacker!
My research goal is to replicate this specific kind of offensive, attacking intelligence, intelligence that is adversarially oriented (the intelligence that human threat actors rely upon). I use AI and machine learning to design cyber agents and model the adversarial behavior of human attackers. I also model the learning and adaptation that characterize cyber arms races.
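To give a flavor of what a multi-step attack with situational awareness looks like in code, here is a minimal, purely illustrative sketch of an agent loop: act, observe the outcome, fold it into the agent’s awareness, and choose the next step. The tool names and the selection rule are hypothetical placeholders, not our actual agents.

```python
# Illustrative sketch of an observe-update-decide attack loop.
# Tool names and the policy are hypothetical placeholders.
import random

TOOLS = ["scan_network", "exploit_known_cve", "escalate_privileges", "exfiltrate"]

def choose_action(awareness):
    """Pick the next attack step given what the agent has learned so far."""
    remaining = [t for t in TOOLS if t not in awareness["completed"]]
    return remaining[0] if remaining else None

def run_campaign(max_steps=4):
    awareness = {"completed": [], "observations": []}
    for step in range(max_steps):
        action = choose_action(awareness)
        if action is None:
            break
        # Execute the step (simulated) and observe the outcome.
        outcome = {"action": action, "succeeded": random.random() > 0.2}
        # Integrate the new information into situational awareness.
        awareness["observations"].append(outcome)
        if outcome["succeeded"]:
            awareness["completed"].append(action)
        print(f"step {step}: {action} -> {'ok' if outcome['succeeded'] else 'failed'}")
    return awareness

if __name__ == "__main__":
    run_campaign()
```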
I should also note that cyber defenses are quite complicated. They have evolved their complexity in response to escalating attack capabilities. These defense systems involve designing detectors, processing system logs, triggering appropriate alerts, and then triaging them into incident response systems. They have to be constantly alert to defend a very large attack surface that is hard to track and very dynamic. On this other side of the attacker-versus-defender competition, my team and I also invent AI in the service of these different defensive fronts.
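As a rough illustration of that defensive pipeline, here is a minimal sketch that runs log events through a toy detector, raises alerts, and triages them by severity. The log fields, the detection rule, and the severity scores are assumptions made for the example, not a real defense system.

```python
# Illustrative sketch: log events -> detector -> alerts -> triage.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alert:
    source: str
    reason: str
    severity: int  # higher means more urgent

def detect(event: dict) -> Optional[Alert]:
    """Toy detector: flag hosts with an unusual number of failed logins."""
    if event.get("failed_logins", 0) > 5:
        return Alert(source=event["host"], reason="possible brute force", severity=8)
    return None

def triage(alerts):
    """Order alerts so incident responders see the most urgent ones first."""
    return sorted(alerts, key=lambda a: a.severity, reverse=True)

events = [
    {"host": "10.0.0.4", "failed_logins": 12},
    {"host": "10.0.0.9", "failed_logins": 1},
]
alerts = [a for a in (detect(e) for e in events) if a is not None]
for alert in triage(alerts):
    print(alert)
```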
Another thing stands out about adversarial intelligence: Both Tom and Jerry are able to learn from competing with one another! Their skills sharpen and they lock into an arms race. One gets better, then the other, to save his skin, gets better too. This tit-for-tat improvement goes onwards and upwards! We work to replicate cyber versions of these arms races.
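The arms-race dynamic can be sketched in a few lines: two sides take turns improving against each other whenever one of them loses a contest. The numeric "skill" scores and the update rule below are purely illustrative, not a model we actually use.

```python
# Illustrative sketch of a tit-for-tat arms race between attacker and defender.

def contest(attacker_skill: float, defender_skill: float) -> bool:
    """Return True if the attack gets through the defense."""
    return attacker_skill > defender_skill

def arms_race(rounds: int = 5):
    attacker, defender = 1.0, 0.5
    for r in range(rounds):
        if contest(attacker, defender):
            defender += 0.6   # the defender adapts after being breached
        else:
            attacker += 0.6   # the attacker adapts after being blocked
        print(f"round {r}: attacker={attacker:.1f}, defender={defender:.1f}")

arms_race()
```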
Q: What are some examples in our everyday lives where artificial adversarial intelligence has kept us safe? How do we use adversarial intelligence agents to stay ahead of threat actors?
A: Machine learning has been used in many ways to ensure cybersecurity. There are all kinds of detectors that filter out threats. They are tuned to anomalous behavior and to recognizable kinds of malware, for example. There are AI-enabled triage systems. Some of the spam protection tools right there on your cellphone are AI-enabled!
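For instance, an anomaly-based detector of the kind mentioned here can be prototyped with a standard library such as scikit-learn. In this sketch, the two features (bytes transferred and failed logins per hour) are illustrative stand-ins for real telemetry, and the tiny dataset is made up for the example.

```python
# Illustrative anomaly detector using scikit-learn's IsolationForest.
from sklearn.ensemble import IsolationForest

# Mostly ordinary traffic records, plus one unusual one.
X = [
    [500, 0], [520, 1], [480, 0], [510, 0], [495, 1],
    [50000, 40],  # outlier: huge transfer plus many failed logins
]

detector = IsolationForest(contamination=0.15, random_state=0)
detector.fit(X)
labels = detector.predict(X)  # 1 = looks normal, -1 = flagged as anomalous

for record, label in zip(X, labels):
    print(record, "ANOMALY" if label == -1 else "ok")
```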
With my team, I design AI-enabled cyber attackers that can do what threat actors do. We invent AI to give our cyber agents expert computer skills and programming knowledge, to make them capable of processing all sorts of cyber knowledge, planning attack steps, and making informed decisions within a campaign.
Adversarially intelligent agents (like our AI cyber attackers) can be used as practice when testing network defenses. A lot of effort goes into checking a network’s robustness to attack, and AI is able to help with that. Moreover, when we add machine learning to our agents and to our defenses, they play out an arms race we can inspect, analyze, and use to anticipate what countermeasures may be used when we take measures to defend ourselves.
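Here is a minimal sketch of what using adversarial agents as practice opponents can look like: a defense is scored against a small suite of simulated attacker strategies. The strategy names, stealth values, and detection rule are hypothetical placeholders for the agents and detectors we actually build.

```python
# Illustrative sketch: score a defense against simulated attacker strategies.
import random

ATTACK_STRATEGIES = {
    "noisy_scan":   {"stealth": 0.2},
    "low_and_slow": {"stealth": 0.8},
    "phishing":     {"stealth": 0.5},
}

def defense_detects(stealth: float, sensitivity: float) -> bool:
    """Toy rule: the stealthier the attack, the less likely it is caught."""
    return random.random() < sensitivity * (1.0 - stealth)

def evaluate_defense(sensitivity: float, trials: int = 1000) -> dict:
    """Estimate the defense's detection rate against each attacker agent."""
    results = {}
    for name, params in ATTACK_STRATEGIES.items():
        caught = sum(defense_detects(params["stealth"], sensitivity) for _ in range(trials))
        results[name] = caught / trials
    return results

print(evaluate_defense(sensitivity=0.9))
```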
Q: What new risks are they adapting to, and how do they do so?
A: There never seems to be an end to new software being released and new configurations of systems being engineered. With every release, there are vulnerabilities an attacker can target. These may be examples of weaknesses in code that are already documented, or they may be novel.
New configurations pose the risk of errors or new ways to be attacked. We didn’t imagine ransomware when we were dealing with denial-of-service attacks. Now we’re juggling cyber espionage and ransomware with IP [intellectual property] theft. All our critical infrastructure, including telecom networks and financial, health care, municipal, energy, and water systems, are targets.
Fortunately, a lot of effort is being devoted to defending critical infrastructure. We will need to translate that into AI-based services that automate some of those efforts. And, of course, we need to keep designing smarter and smarter adversarial agents to keep us on our toes, or to help us practice defending our cyber assets.