Jonathan Dambrot, CEO & Co-Founder of Cranium AI – Interview Series


Jonathan Dambrot is the CEO & Co-Founder of Cranium AI, an enterprise that helps cybersecurity and data science teams understand everywhere that AI is impacting their systems, data, or services.

Jonathan is a former Partner at KPMG, cyber security industry leader, and visionary. Prior to KPMG, he led Prevalent to become a Gartner and Forrester industry leader in third-party risk management before its sale to Insight Venture Partners in late 2016. In 2019 Jonathan transitioned out of the Prevalent CEO role as the company looked to continue its growth under new leadership. He has been quoted in numerous publications and routinely speaks to groups of clients regarding trends in IT, information security, and compliance.

Could you share the genesis story behind Cranium AI?

I had the idea for Cranium around June of 2021 when I was a partner at KPMG leading Third-Party Security services globally. We were building and delivering AI-powered solutions for some of our largest clients, and I found that we were doing nothing to secure them against adversarial threats. So, I asked that same question to the cybersecurity leaders at our biggest clients, and the answers I got back were equally horrible. Many of the security teams had never even spoken to the data scientists – they spoke completely different languages when it came to technology and ultimately had zero visibility into the AI running across the enterprise. All of this, combined with the steadily growing trend of regulations, was the trigger to build a platform that could provide security to AI. We began working with the KPMG Studio incubator and brought in some of our largest clients as design partners to guide the development to meet the needs of these large enterprises. In January of this year, Syn Ventures came in to complete the Seed funding, and we spun out independently of KPMG in March and emerged from stealth in April 2023.

What is the Cranium AI Card and what key insights does it reveal?

The Cranium AI Card allows organizations to efficiently gather and share information about the trustworthiness and compliance of their AI models with both clients and regulators, and to gain visibility into the security of their vendors’ AI systems. Ultimately, we look to provide security and compliance teams with the ability to visualize and monitor the security of the AI in their supply chain, align their own AI systems with current and upcoming compliance requirements and frameworks, and easily share that their AI systems are secure and trustworthy.

What are some of the trust issues that people have with AI that are being solved with this solution?

People generally want to know what’s behind the AI that they’re using, especially as more and more of their daily workflows are impacted in some way, shape, or form by AI. We look to provide our clients with the ability to answer questions that they’ll soon receive from their own customers, such as “How is this being governed?”, “What’s being done to secure the data and models?”, and “Has this information been validated?”. The AI Card gives organizations a quick way to address these questions and to demonstrate both the transparency and trustworthiness of their AI systems.

In October 2022, the White House Office of Science and Technology Policy (OSTP) published a Blueprint for an AI Bill of Rights, which shared a nonbinding roadmap for the responsible use of AI. Can you discuss your own views on the pros and cons of this bill?

While it’s incredibly important that the White House took this first step in defining the guiding principles for responsible AI, we don’t believe it went far enough to provide guidance for organizations, and not just individuals worried about appealing an AI-based decision. Future regulatory guidance needs to be not only for providers of AI systems, but also for users, so they are able to understand and leverage this technology in a safe and secure manner. Ultimately, the main benefit is that AI systems will be safer, more inclusive, and more transparent. However, without a risk-based framework for organizations to prepare for future regulation, there is potential for slowing down the pace of innovation, especially in cases where meeting transparency and explainability requirements is technically infeasible.

How does Cranium AI assist companies with abiding by this Bill of Rights?

Cranium Enterprise helps companies with developing and delivering safe and secure systems, which is the first key principle within the Bill of Rights. Additionally, the AI Card helps organizations with meeting the principle of notice and explanation by allowing them to share information about how their AI systems are actually working and what data they’re using.

What is the NIST AI Risk Management Framework, and how will Cranium AI help enterprises in achieving their AI compliance obligations for this framework?

The NIST AI RMF is a framework for organizations to better manage risks to individuals, organizations, and society associated with AI. It follows a very similar structure to their other frameworks by outlining the outcomes of a successful risk management program for AI. We’ve mapped our AI Card to the objectives outlined in the framework to support organizations in tracking how their AI systems align with it, and given that our enterprise platform already collects a lot of this information, we can automatically populate and validate some of the fields.

The EU AI Act is one of the more monumental pieces of AI legislation that we’ve seen in recent history. Why should non-EU companies abide by it?

Similar to GDPR for data privacy, the AI Act will fundamentally change the way that global enterprises develop and operate their AI systems. Organizations based outside of the EU will still need to pay attention to and abide by the requirements, as any AI systems that use or impact European citizens will fall under the requirements, regardless of the company’s jurisdiction.

How is Cranium AI preparing for the EU AI Act?

At Cranium, we’ve been following the development of the AI Act since the beginning and have tailored the design of our AI Card product offering to support companies in meeting the compliance requirements. We feel like we have a great head start given our very early awareness of the AI Act and how it has evolved over time.

Why should responsible AI become a priority for enterprises?

The speed at which AI is being embedded into every business process and function means that things can get out of control quickly if not done responsibly. Prioritizing responsible AI now, at the beginning of the AI revolution, will allow enterprises to scale more effectively and not run into major roadblocks and compliance issues later.

What is your vision for the future of Cranium AI?

We see Cranium becoming the true category king for secure and trustworthy AI. While we can’t solve everything, such as complex challenges like ethical use and explainability, we look to partner with leaders in other areas of responsible AI to drive an ecosystem that makes it easy for our clients to cover all areas of responsible AI. We also look to work with the developers of innovative generative AI solutions to support the security and trust of these capabilities. We want Cranium to enable companies across the globe to continue innovating in a safe and trusted way.

