David Kellerman is the Field CTO at Cymulate, and a senior technical customer-facing skilled in the sector of data and cyber security. David leads customers to success and high-security standards.
Cymulate is a cybersecurity company that gives continuous security validation through automated attack simulations. Its platform enables organizations to proactively test, assess, and optimize their security posture by simulating real-world cyber threats, including ransomware, phishing, and lateral movement attacks. By offering Breach and Attack Simulation (BAS), exposure management, and security posture management, Cymulate helps businesses discover vulnerabilities and improve their defenses in real time.
What do you see as the first driver behind the rise of AI-related cybersecurity threats in 2025?
AI-related cybersecurity threats are rising due to AI’s increased accessibility. Threat actors now have access to AI tools that may help them iterate on malware, craft more believable phishing emails, and upscale their attacks to extend their reach. These tactics aren’t “recent,” however the speed and accuracy with which they’re being deployed has added significantly to the already lengthy backlog of cyber threats security teams need to deal with. Organizations rush to implement AI technology, while not fully understanding that security controls have to be put around it, to make sure it isn’t easily exploited by threat actors.
Are there any specific industries or sectors more vulnerable to those AI-related threats, and why?
Industries which might be consistently sharing data across channels between employees, clients, or customers are at risk of AI-related threats because AI is making it easier for threat actors to interact in convincing social engineering schemes Phishing scams are effectively a numbers game, and if attackers can now send more authentic-seeming emails to a wider variety of recipients, their success rate will increase significantly. Organizations that expose their AI-powered services to the general public potentially invite attackers to try to use it. While it’s an inherited risk of constructing services public, it’s crucial to do it right.
What are the important thing vulnerabilities organizations face when using public LLMs for business functions?
Data leakage might be the primary concern. When using a public large language model (LLM), it’s hard to say obviously where that data will go – and the very last thing you would like to do is by accident upload sensitive information to a publicly accessible AI tool. When you need confidential data analyzed, keep it in-house. Don’t turn to public LLMs which will turn around and leak that data to the broader web.
How can enterprises effectively secure sensitive data when testing or implementing AI systems in production?
When testing AI systems in production, organizations should adopt an offensive mindset (versus a defensive one). By that I mean security teams must be proactively testing and validating the safety of their AI systems, reasonably than reacting to incoming threats. Consistently monitoring for attacks and validating security systems may help to make sure sensitive data is protected and security solutions are working as intended.
How can organizations proactively defend against AI-driven attacks which might be continuously evolving?
While threat actors are using AI to evolve their threats, security teams can even use AI to update their breach and attack simulation (BAS) tools to make sure they’re safeguarded against emerging threats. Tools, like Cymulate’s each day threat feed, load the most recent emerging threats into Cymulate’s breach and attack simulation software each day to make sure security teams are validating their organization’s cybersecurity against essentially the most recent threats. AI may help automate processes like these, allowing organizations to stay agile and able to face even the most recent threats.
What role do automated security validation platforms, like Cymulate, play in mitigating the risks posed by AI-driven cyber threats?
Automated security validation platforms may help organizations stay on top of emerging AI-driven cyber threats through tools aimed toward identifying, validating, and prioritizing threats. With AI serving as a force multiplier for attackers, it’s essential to not only detect potential vulnerabilities in your network and systems, but validate which of them post an actual threat to the organization. Only then can exposures be effectively prioritized, allowing organizations to mitigate essentially the most dangerous threats first before moving on to less pressing items. Attackers are using AI to probe digital environments for potential weaknesses before launching highly tailored attacks, which suggests the flexibility to deal with dangerous vulnerabilities in an automatic and effective manner has never been more critical.
How can enterprises incorporate breach and attack simulation tools to arrange for AI-driven attacks?
BAS software is a vital element of exposure management, allowing organizations to create real-world attack scenarios they will use to validate security controls against today’s most pressing threats. The newest threat intel and first research from the Cymulate Threat Research Group (combined with information on emerging threats and recent simulations) is applied each day to Cymulate’s BAS tool, alerting security leaders if a brand new threat was not blocked or detected by their existing security controls. With BAS, organizations can even tailor AI-driven simulations to their unique environments and security policies with an open framework to create and automate custom campaigns and advanced attack scenarios.
What are the highest three recommendations you’d give to security teams to remain ahead of those emerging threats?
Threats have gotten more complex on daily basis. Organizations that don’t have an efficient exposure management program in place risk falling dangerously behind, so my first suggestion can be to implement an answer that permits the organization to effectively prioritize their exposures. Next, be sure that the exposure management solution includes BAS capabilities that allow the safety team to simulate emerging threats (AI and otherwise) to gauge how the organization’s security controls perform. Finally, I’d recommend leveraging automation to be sure that validation and testing can occur on a continuous basis, not only during periodic reviews. With the threat landscape changing on a minute-to-minute basis, it’s critical to have up-to-date information. Threat data from last quarter is already hopelessly obsolete.
What developments in AI technology do you foresee in the subsequent five years that would either exacerbate or mitigate cybersecurity risks?
Rather a lot will rely on how accessible AI continues to be. Today, low-level attackers can use AI capabilities to uplevel and upscale their attacks, but they aren’t creating recent, unprecedented tactics – they’re just making existing tactics simpler. At once, we will (mostly) compensate for that. But when AI continues to grow more advanced and stays highly accessible, that would change. Regulations will play a task here – the EU (and, to a lesser extent, the US) have taken steps to control how AI is developed and used, so it can be interesting to see whether that has an effect on AI development.
Do you anticipate a shift in how organizations prioritize AI-related cybersecurity threats in comparison with traditional cybersecurity challenges?
We’re already seeing organizations recognize the worth of solutions like BAS and exposure management. AI is allowing threat actors to quickly launch advanced, targeted campaigns, and security teams need any advantage they will get to assist stay ahead of them. Organizations which might be using validation tools may have a significantly easier time keeping their heads above water by prioritizing and mitigating essentially the most pressing and dangerous threats first. Remember, most attackers are searching for a simple rating. Chances are you’ll not have the option to stop every attack, but you may avoid making yourself a simple goal.