David Maher, CTO of Intertrust – Interview Series

David Maher serves as Intertrust’s Executive Vice President and Chief Technology Officer. With over 30 years of experience in trusted distributed systems, secure systems, and risk management, Dave has led R&D efforts and held key leadership positions across the company’s subsidiaries. He is past president of Seacert Corporation, a Certificate Authority for digital media and IoT, and of whiteCryption Corporation, a developer of systems for software self-defense. He also served as co-chairman of the Marlin Trust Management Organization (MTMO), which oversees the world’s only independent digital rights management ecosystem.

Intertrust developed innovations enabling distributed operating systems to secure and govern data and computations over open networks, leading to a foundational patent on trusted distributed computing.

Originally rooted in research, Intertrust has evolved into a product-focused company offering trusted computing services that unify device and data operations, particularly for IoT and AI. Its markets include media distribution, device identity/authentication, digital energy management, analytics, and cloud storage security.

How can we close the AI trust gap and address the general public’s growing concerns about AI safety and reliability?

Transparency is the crucial quality that I believe will help address the growing concerns about AI. Transparency includes features that help both consumers and technologists understand what AI mechanisms are part of the systems we interact with and what kind of pedigree they have: how an AI model is trained, what guardrails exist, what policies were applied in the model’s development, and what other assurances exist for a given mechanism’s safety and security. With greater transparency, we will be able to address real risks and issues and not be distracted as much by irrational fears and conjectures.

What role does metadata authentication play in ensuring the trustworthiness of AI outputs?

Metadata authentication helps increase our confidence that assurances about an AI model or other mechanism are reliable. An AI model card is an example of a collection of metadata that can assist in evaluating the use of an AI mechanism (model, agent, etc.) for a particular purpose. We need to establish standards for the clarity and completeness of model cards, with standards for quantitative measurements and authenticated assertions about performance, bias, properties of training data, etc.
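
As a concrete illustration of authenticated model-card metadata, here is a minimal Python sketch in which an issuer signs a card and a consumer verifies it before trusting its assertions. The card’s fields, the issuer, and the choice of Ed25519 signatures are assumptions made for the example, not an established standard.

```python
# A minimal sketch of model-card metadata authentication.
# The card's contents and the signing scheme are illustrative only.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical model card: quantitative measurements and assertions.
model_card = {
    "model": "example-llm-7b",
    "intended_use": "customer-support summarization",
    "training_data": "corpus-2024-06, provenance-checked",
    "bias_eval": {"demographic_parity_gap": 0.03},
}

# The issuing authority signs the canonicalized card.
issuer_key = Ed25519PrivateKey.generate()
payload = json.dumps(model_card, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# A consumer verifies the card against the issuer's public key
# before relying on any of its assertions.
try:
    issuer_key.public_key().verify(signature, payload)
    print("model card metadata is authentic")
except InvalidSignature:
    print("model card failed authentication; do not trust it")
```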

How can organizations mitigate the risk of AI bias and hallucinations in large language models (LLMs)?

Red teaming is a general approach to addressing these and other risks during the development and pre-release of models. Originally used to evaluate secure systems, the approach is now becoming standard for AI-based systems. It is a systems approach to risk management that can and should cover the complete life cycle of a system, from initial development to field deployment, including the entire development supply chain. Especially critical is the classification and authentication of the training data used for a model.
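
On that last point, here is a minimal sketch of one common approach to authenticating training data, assuming a curator-published manifest of content digests; the manifest format is invented for the example, and in practice the manifest itself would be signed by the curator.

```python
# A minimal sketch of authenticating a training-data shard against a
# published manifest of SHA-256 digests.
import hashlib

def sha256_hex(data: bytes) -> str:
    """Lowercase-hex SHA-256 digest of a training-data blob."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical manifest entry: shard identifier -> expected digest
# (this value is the SHA-256 of the stand-in bytes below).
expected = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"

shard = b"test"  # stand-in for a training-data shard
if sha256_hex(shard) == expected:
    print("shard matches the published manifest")
else:
    print("shard failed authentication; exclude it from training")
```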

What steps can companies take to create transparency in AI systems and reduce the risks related to the “black box” problem?

Understand how the company is going to use the model and what sorts of liabilities it may incur in deployment, whether for internal use or for use by customers, either directly or indirectly. Then, understand what I call the pedigrees of the AI mechanisms to be deployed, including assertions on a model card, results of red-team trials, differential analysis of the company’s specific use, what has been formally evaluated, and what other people’s experience has been. Internal testing using a comprehensive test plan in a realistic environment is absolutely required. Best practices are evolving in this nascent area, so it is important to keep up.

How can AI systems be designed with ethical guidelines in mind, and what are the challenges in achieving this across different industries?

This is an area of research, and many claim that the notion of ethics and the current versions of AI are incongruous, since ethics are conceptually based and AI mechanisms are largely data-driven. For instance, simple rules that humans understand, like “don’t cheat,” are difficult to ensure. Nonetheless, several measures should be considered: careful analysis of interactions and conflicts of goals in goal-based learning, exclusion of sketchy data and disinformation, and building in rules that require the use of output filters that implement guardrails and test for violations of ethical principles, such as advocating or sympathizing with the use of violence in output content. Similarly, rigorous testing for bias can help align a model more closely with ethical principles. Again, much of this will be conceptual, so care must be taken to test the effects of a given approach, since the AI mechanism will not “understand” instructions the way humans do.
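
As a sketch of where such an output filter sits, here is a toy Python guardrail; a production filter would use a trained safety classifier rather than the pattern list invented here.

```python
# A toy output-filter guardrail: a post-processing check applied to
# model output before it reaches the user. The pattern list is a
# stand-in for a real safety classifier.
import re

BLOCKED = [
    re.compile(r"\b(attack|assault)\s+(him|her|them)\b", re.IGNORECASE),
]

def filter_output(model_output: str) -> str:
    """Return the output, or a refusal if it trips the guardrail."""
    for pattern in BLOCKED:
        if pattern.search(model_output):
            return "[withheld: output appeared to advocate violence]"
    return model_output

print(filter_output("Here is a peaceful resolution plan."))
print(filter_output("You should attack them at dawn."))
```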

What are the key risks and challenges that AI faces in the future, especially as it integrates more with IoT systems?

We want to use AI to automate systems that optimize critical infrastructure processes. For example, we know that we can optimize energy distribution and use with virtual power plants, which coordinate thousands of elements of energy production, storage, and use. This is only practical with massive automation and the use of AI to assist in minute decision-making. Systems will include agents with conflicting optimization objectives (say, for the benefit of the consumer vs. the supplier). AI safety and security will be critical in the wide-scale deployment of such systems.
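
A toy sketch of that conflict, with invented numbers: a consumer agent wants a flexible load placed in the cheapest time slot, the supplier wants to minimize peak grid load, and a coordinator trades the two objectives off.

```python
# Toy illustration of conflicting optimization objectives in a virtual
# power plant. All numbers and the weighting are made up; real VPPs
# coordinate thousands of elements with far richer models.
prices = [0.10, 0.25, 0.40, 0.15]      # consumer's cost per slot ($/kWh)
base_load = [40.0, 55.0, 70.0, 45.0]   # supplier's baseline load (kW)
demand = 6.0                           # flexible load to place (kWh)

def consumer_cost(slot: int) -> float:
    return demand * prices[slot]

def supplier_peak(slot: int) -> float:
    load = base_load.copy()
    load[slot] += demand
    return max(load)

# Coordinator: weighted compromise between the two agents' objectives.
weight = 0.01  # $ penalty per kW of peak; an arbitrary policy choice
best = min(range(len(prices)),
           key=lambda s: consumer_cost(s) + weight * supplier_peak(s))
print(f"slot {best}: cost=${consumer_cost(best):.2f}, "
      f"peak={supplier_peak(best):.1f} kW")
```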

What kind of infrastructure is needed to securely identify and authenticate entities in AI systems?

We will require a robust and efficient infrastructure whereby entities involved in evaluating all aspects of AI systems and their deployment can publish authoritative and authentic claims about AI systems: their pedigree, available training data, the provenance of sensor data, security-affecting incidents and events, etc. That infrastructure will also need to make it efficient to verify claims and assertions, both for users of systems that include AI mechanisms and for elements within automated systems that make decisions based on outputs from AI models and optimizers.
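
A minimal sketch of the core of such an infrastructure, assuming signed assertions published to a registry and verified on lookup; the registry shape, claim fields, and signature scheme are all invented for the example.

```python
# A minimal sketch of a claims registry: entities publish signed
# assertions about an AI system, and consumers verify them on lookup.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

registry = {}  # subject -> list of (payload, signature, signer public key)

def publish(subject, claim, signer_key):
    """Sign a claim about `subject` and record it with the signer's key."""
    payload = json.dumps(claim, sort_keys=True).encode()
    entry = (payload, signer_key.sign(payload), signer_key.public_key())
    registry.setdefault(subject, []).append(entry)

def verified_claims(subject):
    """Return only the claims whose signatures verify."""
    good = []
    for payload, sig, pub in registry.get(subject, []):
        try:
            pub.verify(sig, payload)
            good.append(json.loads(payload))
        except InvalidSignature:
            pass  # drop claims that fail authentication
    return good

auditor = Ed25519PrivateKey.generate()
publish("example-model-v1", {"red_team": "passed 2025-06 suite"}, auditor)
print(verified_claims("example-model-v1"))
```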

Could you share with us some insights into what you’re working on at Intertrust and how it factors into what we have discussed?

We research and design technology that can provide the kind of trust management infrastructure described in the previous answer. We are specifically addressing issues of scale, latency, security, and interoperability that arise in IoT systems that include AI components.

How does Intertrust’s PKI (Public Key Infrastructure) service secure IoT devices, and what makes it scalable for large-scale deployments?

Our PKI was designed specifically for trust management for systems that include the governance of devices and digital content. We have deployed billions of cryptographic keys and certificates that assure compliance. Our current research addresses the scale and assurances that massive industrial automation and critical worldwide infrastructure require, including best practices for “zero-trust” deployments and device and data authentication that can accommodate trillions of sensors and event generators.
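
For a rough sense of what certificate-based device authentication involves, here is a self-contained sketch using Python’s `cryptography` package; the CA and device names are invented, key generation is inlined only to make the example runnable, and none of this reflects Intertrust’s actual implementation.

```python
# A minimal sketch of device authentication with X.509 certificates:
# a CA issues a device certificate, and a verifier checks that the
# certificate was signed by the trusted CA.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.x509.oid import NameOID

# Hypothetical CA issues a certificate to a device (keys generated
# here only so the example runs; a real CA key lives in an HSM).
ca_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())
now = datetime.datetime.now(datetime.timezone.utc)
cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "sensor-001")]))
    .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example-iot-ca")]))
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .sign(ca_key, hashes.SHA256())
)

# Verifier: check the certificate's signature against the CA key.
ca_key.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    ec.ECDSA(cert.signature_hash_algorithm),
)
print("device certificate verified against the CA")
```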

What motivated you to join NIST’s AI initiatives, and how does your involvement contribute to developing trustworthy and safe AI standards?

NIST has tremendous experience and success in developing standards and best practices for secure systems. As a Principal Investigator for the US AISIC from Intertrust, I can advocate for important standards and best practices in developing trust management systems that include AI mechanisms. From past experience, I particularly appreciate the approach that NIST takes to promote creativity, progress, and industry cooperation while helping to formulate and promulgate important technical standards that promote interoperability. These standards can spur the adoption of useful technologies while addressing the kinds of risks that society faces.
