Nick Kathmann, CISO/CIO at LogicGate – Interview Series

Nicholas Kathmann is the Chief Information Security Officer (CISO) at LogicGate, where he leads the company’s information security program, oversees platform security innovations, and engages with customers on managing cybersecurity risk. With over 20 years of experience in IT and 18+ years in cybersecurity, Kathmann has built and led security operations across small businesses and Fortune 100 enterprises.

LogicGate is a risk and compliance platform that helps organizations automate and scale their governance, risk, and compliance (GRC) programs. Through its flagship product, Risk Cloud®, LogicGate enables teams to identify, assess, and manage risk across the enterprise with customizable workflows, real-time insights, and integrations. The platform supports a wide range of use cases, including third-party risk, cybersecurity compliance, and internal audit management, helping companies build more agile and resilient risk strategies.

You serve as both CISO and CIO at LogicGate. How do you see AI transforming the responsibilities of these roles over the next 2–3 years?

AI is already transforming both of these roles, but over the next 2-3 years I think we’ll see a significant rise in Agentic AI that has the power to reimagine how we handle business processes on a day-to-day basis. Anything that would typically go to an IT help desk, like resetting passwords, installing applications, and more, can be handled by an AI agent. Another critical use case will be leveraging AI agents to handle tedious audit assessments, allowing CISOs and CIOs to prioritize more strategic requests.

With federal cyber layoffs and deregulation trends, how should enterprises approach AI deployment while maintaining a robust security posture?

While we’re seeing a deregulation trend in the U.S., regulations are actually strengthening in the EU. So, if you’re a multinational enterprise, expect to have to comply with global regulatory requirements around the responsible use of AI. For companies operating only in the U.S., I see there being a learning period when it comes to AI adoption. I think it’s important for those enterprises to form strong AI governance policies and maintain some human oversight in the deployment process, making sure nothing goes rogue.

What are the biggest blind spots you see today when it comes to integrating AI into existing cybersecurity frameworks?

While there are a few areas I can think of, the most impactful blind spot is knowing where your data is located and where it’s traversing. The introduction of AI is only going to make oversight in that area more of a challenge. Vendors are enabling AI features in their products, but that data doesn’t always go directly to the AI model/vendor. That renders traditional security tools like DLP and web monitoring effectively blind.

You’ve said most AI governance strategies are “paper tigers.” What are the core ingredients of a governance framework that truly works?

When I say “paper tigers,” I’m referring specifically to governance strategies where only a small team knows the processes and standards, and they are not enforced or even understood throughout the organization. AI is very pervasive, meaning it impacts every group and every team. “One size fits all” strategies aren’t going to work. A finance team implementing AI features into its ERP is different from a product team implementing an AI feature in a specific product, and the list goes on. The core ingredients of a strong governance framework vary, but IAPP, OWASP, NIST, and other advisory bodies have pretty good frameworks for determining what to evaluate. The hardest part is determining when the requirements apply to each use case.

How can companies avoid AI model drift and ensure responsible use over time without over-engineering their policies?

Drift and degradation are just part of using technology, but AI can significantly speed up the process. When the drift becomes too great, corrective measures will be needed. A comprehensive testing strategy that looks for and measures accuracy, bias, and other red flags is essential over time. If companies want to avoid bias and drift, they need to start by making sure they have the tools in place to identify and measure them.
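As a concrete starting point, the short sketch below shows one common way to measure that kind of drift: the population stability index (PSI), comparing a model’s current score distribution to the distribution captured at deployment. It is a minimal illustration under stated assumptions (scores in the 0–1 range, synthetic data, rule-of-thumb thresholds), not a description of LogicGate’s tooling.

```python
# Minimal drift check: population stability index (PSI) between a baseline
# score distribution and a recent one. Data, bin count, and thresholds are
# illustrative assumptions.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare two score distributions on [0, 1]; a larger PSI means more drift."""
    base_pct, _ = np.histogram(baseline, bins=bins, range=(0.0, 1.0))
    curr_pct, _ = np.histogram(current, bins=bins, range=(0.0, 1.0))
    base_pct = np.clip(base_pct / len(baseline), 1e-6, None)  # avoid log(0)
    curr_pct = np.clip(curr_pct / len(current), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline_scores = rng.beta(2, 5, 10_000)     # model scores captured at deployment
    current_scores = rng.beta(2.6, 4.2, 10_000)  # this week's scores, slightly shifted
    psi = population_stability_index(baseline_scores, current_scores)
    # Commonly cited rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    status = "investigate / retrain" if psi > 0.25 else "monitor"
    print(f"PSI = {psi:.3f} -> {status}")
```

In practice, the same comparison would run on a schedule against production score and feature distributions, with alerts feeding the corrective measures Kathmann describes.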

What role should changelogs, limited policy updates, and real-time feedback loops play in maintaining agile AI governance?

While they play a role right now in reducing risk and liability for the provider, real-time feedback loops hamper the ability of customers and users to perform AI governance, especially if changes in communication mechanisms occur too frequently.

What concerns do you have around AI bias and discrimination in underwriting or credit scoring, particularly with “Buy Now, Pay Later” (BNPL) services?

Last year, I spoke to an AI/ML researcher at a large, multinational bank who had been experimenting with AI/LLMs across their risk models. The models, even when trained on large and accurate data sets, would make really surprising, unsupported decisions to either approve or deny underwriting. For example, if the words “great credit” were mentioned in a chat transcript or communications with customers, the models would, by default, deny the loan, regardless of whether the customer said it or the bank employee said it. If AI is going to be relied upon, banks need better oversight and accountability, and those “surprises” need to be minimized.

What’s your take on how we should audit or assess algorithms that make high-stakes decisions, and who should be held accountable?

This goes back to the comprehensive testing approach, where it’s critical to continuously test and benchmark the algorithms/models in as near real time as possible. This can be difficult, since the model output may show desirable results that will still need humans to identify outliers. As a banking example, a model that denies all loans flat out will have a great risk rating, since zero of the loans it underwrites will ever default. In that case, the organization that implements the model/algorithm should be accountable for the outcome of the model, just as it would be if humans were making the decision.
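To illustrate that deny-all failure mode, here is a small, hypothetical benchmark: it scores two toy loan-approval models against labeled outcomes and shows that “defaults among approved loans” alone rewards the model that rejects everyone, which is why approval rate and the share of good loans captured need to be tracked alongside it. The data and models are invented for the example.

```python
# Hypothetical benchmark showing why "defaults among approved loans" alone is
# misleading: a model that denies every loan scores perfectly on it.
# All data and models here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(7)
n = 5_000
would_repay = rng.random(n) < 0.9  # ground truth: 90% of applicants would repay

def benchmark(name: str, approved: np.ndarray) -> None:
    n_approved = int(approved.sum())
    # Defaults among approved loans: the metric the deny-all model "wins" on.
    default_rate = float((~would_repay[approved]).mean()) if n_approved else 0.0
    approval_rate = n_approved / n
    # Share of creditworthy applicants the model actually approves.
    good_loans_captured = float(approved[would_repay].mean())
    print(f"{name:>10}: approval {approval_rate:6.1%}, "
          f"defaults among approved {default_rate:5.1%}, "
          f"good loans captured {good_loans_captured:6.1%}")

# A reasonable (noisy) model versus the degenerate "deny all loans" model.
reasonable = (would_repay & (rng.random(n) < 0.95)) | (~would_repay & (rng.random(n) < 0.10))
deny_all = np.zeros(n, dtype=bool)

benchmark("reasonable", reasonable)
benchmark("deny-all", deny_all)
```

The point stands as Kathmann frames it: the organization deploying the model stays accountable for the outcome, so the benchmark has to measure the outcomes it actually cares about, not just the one that is easiest to optimize.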

With more enterprises requiring cyber insurance, how are AI tools reshaping both the risk landscape and insurance underwriting itself?

AI tools are great at digesting large amounts of information and finding patterns or trends. On the customer side, these tools will be instrumental in understanding the organization’s actual risk and managing that risk. On the underwriter’s side, those tools will be helpful in finding inconsistencies and organizations that are becoming immature over time.

How can companies leverage AI to proactively reduce cyber risk and negotiate better terms in today’s insurance market?

Today, the best way to leverage AI for reducing risk and negotiating better insurance terms is to filter out noise and distractions, helping you focus on the most important risks. If you reduce those risks in a comprehensive way, your cyber insurance rates should go down. It’s too easy to get overwhelmed by the sheer volume of risks. Don’t get bogged down trying to address every issue when focusing on the most critical ones can have a much larger impact.

What are a few tactical steps you recommend for companies that want to implement AI responsibly but don’t know where to start?

First, you need to understand what your use cases are and document the desired outcomes. Everyone wants to implement AI, but it’s important to consider your goals first and work backwards from there, something I think a lot of organizations struggle with today. Once you have a good understanding of your use cases, you can research the different AI frameworks and understand which of the applicable controls matter to your use cases and implementation. Strong AI governance is also business critical, both for risk mitigation and for efficiency, since automation is only as useful as its data input. Organizations leveraging AI must do so responsibly, as partners and prospects are asking tough questions about AI sprawl and usage. Not knowing the answer can mean missing out on business deals, directly impacting the bottom line.

If you had to predict the biggest AI-related security risk five years from now, what would it be, and how can we prepare today?

My prediction is that as Agentic AI is built into more business processes and applications, attackers will engage in fraud and misuse to manipulate those agents into delivering malicious outcomes. We have already seen this with the manipulation of customer service agents, resulting in unauthorized deals and refunds. Threat actors used language tricks to bypass policies and interfere with the agents’ decision-making.
