Denis Ignatovich, Co-founder and Co-CEO of Imandra – Interview Series


Denis Ignatovich, Co-founder and Co-CEO of Imandra, has over a decade of experience in trading, risk management, quantitative modeling, and complex trading-system design. Before founding Imandra, he led the central risk trading desk at Deutsche Bank London, where he recognized the critical role AI can play within the financial sector. His insights during this time helped shape Imandra’s suite of products for financial markets. He holds several patents on applications of computational logic to financial trading platforms, an MSc in Finance from the London School of Economics, and degrees in Computer Science and Finance from UT Austin.

Imandra is an AI-powered reasoning engine that uses neurosymbolic AI to automate the verification and optimization of complex algorithms, particularly in financial trading and software systems. By combining symbolic reasoning with machine learning, it enhances safety, compliance, and efficiency, helping institutions reduce risk and improve transparency in AI-driven decision-making.

What inspired you and Dr. Grant Passmore to co-found Imandra, and how did your backgrounds influence the vision for the company?

After college I went into quantitative trading and ended up in London. Grant did his PhD in Edinburgh and then moved to Cambridge to work on applications of automated logical reasoning to the safety analysis of autopilot systems (complex algorithms that involve nonlinear computation). In my work, I also dealt with complex algorithms involving a lot of nonlinear computation, and we realized that there’s a deep connection between those two fields. The way finance was creating such algorithms was really problematic (as highlighted by the many news stories dealing with “algo glitches”), so we set out to change that by empowering engineers in finance with automated logical tools that bring rigorous scientific techniques to software design and development. What we ended up creating, however, is industry-agnostic.

Can you explain what neurosymbolic AI is and how it differs from traditional AI approaches?

The field of AI has (very roughly!) two areas: statistical (which includes LLMs) and symbolic (aka automated reasoning). Statistical AI is incredible at identifying patterns and doing translation using information learned from the data it was trained on, but it is bad at logical reasoning. Symbolic AI is pretty much the exact opposite – it forces you to be very precise (mathematically) about what you’re trying to do, but it can use logic to reason in a way that is (1) logically consistent and (2) doesn’t require data for training. Techniques combining these two areas of AI are called “neurosymbolic”. One famous application of this approach is the AlphaFold project from DeepMind, which recently won the Nobel Prize.
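To make the contrast concrete, here is a minimal sketch of the symbolic side using the open-source Z3 solver as a stand-in for illustration only (this is not Imandra’s engine or API): a trivial property is proved for every possible input, with no training data involved.

```python
# Minimal illustration of symbolic reasoning, using the open-source Z3 solver as a
# stand-in (not Imandra's engine or API). The result is a logical guarantee that
# holds for every input, rather than a statistical guess learned from data.
from z3 import Ints, And, Implies, Not, Solver, unsat

x, y = Ints("x y")

# Claim: for all integers x and y, if both are positive then their sum is positive.
claim = Implies(And(x > 0, y > 0), x + y > 0)

s = Solver()
s.add(Not(claim))           # look for any counterexample to the claim
if s.check() == unsat:      # no counterexample exists: the claim is proved
    print("Proved: x > 0 and y > 0 implies x + y > 0")
else:
    print("Counterexample:", s.model())
```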

What do you think sets Imandra apart in leading the neurosymbolic AI revolution?

There are a lot of incredible symbolic reasoners out there (most in academia) that focus on specific niches (e.g. protein folding), but Imandra empowers developers to analyze algorithms with unprecedented automation, which has much broader applications and a much larger target audience than those tools.

How does Imandra’s automated reasoning eliminate common AI challenges, such as hallucinations, and improve trust in AI systems?

With our approach, LLMs are used to translate humans’ requests into formal logic, which is then analyzed by the reasoning engine with a full logical audit trail. While translation errors may occur when using the LLM, the user is supplied with a logical explanation of how the inputs were translated, and the logical audits may be verified by third-party open-source software. Our ultimate goal is to bring actionable transparency, where AI systems can explain their reasoning in a way that is independently logically verifiable.
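A minimal sketch of that “translate, then verify” pattern is below, with the LLM step replaced by a hard-coded stub and Z3 again standing in for the reasoning engine (not Imandra’s API); the audit trail is an SMT-LIB artifact that any third-party solver can recheck independently.

```python
# Sketch of the "translate, then verify" workflow (not Imandra's API). The LLM step is
# replaced by a hard-coded stub; Z3 stands in for the reasoning engine.
from z3 import Int, Implies, Not, Solver, unsat

def flagged(qty):
    """Toy model of the system under analysis: orders above 100 are flagged."""
    return qty > 100

def translate_requirement(text: str):
    """Stub for the LLM step: map an English requirement to a formal claim."""
    qty = Int("qty")
    # "Any order with quantity above 100 must be flagged for review."
    return Implies(qty > 100, flagged(qty))

claim = translate_requirement("Orders above 100 must be flagged.")

s = Solver()
s.add(Not(claim))                 # search for an input that violates the requirement
audit_trail = s.to_smt2()         # SMT-LIB text a third-party solver can recheck
verdict = "holds" if s.check() == unsat else f"fails, e.g. {s.model()}"

print("Requirement", verdict)
print("--- independently checkable audit trail (SMT-LIB) ---")
print(audit_trail)
```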

Imandra is used by Goldman Sachs and DARPA, among others. Can you share a real-world example of how your technology solved a complex problem?

A great public example of Imandra’s real-world impact is our first-place win in the UBS Future of Finance competition (the details, with Imandra code, are on our website). While creating a case study for UBS that encoded a regulatory document they had submitted to the SEC, Imandra identified a fundamental and subtle flaw in the algorithm description. The flaw stemmed from subtle logical conditions that have to be met to rank orders within an order book – something that would be practically impossible for humans to detect “by hand”. The bank awarded us first place (out of more than 620 companies globally).

How has your experience at Deutsche Bank shaped Imandra’s applications in financial systems, and what is the most impactful use case you have seen to date?

At Deutsche Bank we dealt with a lot of very complex code that made automated trading decisions based on various ML inputs, risk indicators, etc. Like any bank, we also had to abide by numerous regulations. What Grant and I realized was that this, on a mathematical level, was very similar to the research he was doing on autopilot safety.

Beyond finance, which industries do you see as having the greatest potential to benefit from neurosymbolic AI?

We’ve seen AlphaFold get the Nobel Prize, so let’s definitely count that one… Ultimately, most applications of AI will benefit greatly from the use of symbolic methods, but specifically, we’re working on the following agents that we will release soon: code analysis (translating source code into mathematical models), creating rigorous models from English-prose specifications, reasoning about SysML models (a language used to describe systems in safety-critical industries), and business process automation.

Imandra’s region decomposition is a novel feature. Can you explain how it works and its significance in solving complex problems?

A question that every engineer thinks about when writing software is “what are the edge cases?” – whether their job is QA and they need to write unit test cases, or they’re writing code and wondering whether they’ve correctly implemented the requirements. Imandra brings scientific rigor to answering this question: it treats the code as a mathematical model and symbolically analyzes all of its edge cases (while producing a proof about the completeness of coverage). This feature is based on a mathematical technique called ‘Cylindrical Algebraic Decomposition’, which we’ve “lifted” to algorithms at large. It has saved countless hours for our customers in finance and uncovered critical errors. Now we’re bringing this feature to engineers everywhere.
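To give a feel for the idea, the hand-rolled toy below (not Imandra’s algorithm, and with the names and fee rules invented purely for illustration) partitions a small function’s input space into symbolic regions in which its behavior is uniform; a real decomposition also comes with a proof that the regions cover every input exactly once.

```python
# Hand-rolled toy illustrating the *idea* of region decomposition (not Imandra's
# algorithm): partition the input space of a small function into symbolic regions
# in which its behavior is uniform, then spot-check that the regions cover every
# sampled input exactly once.

def venue_fee(qty: int, is_member: bool) -> float:
    """Toy trading-venue fee logic with a few branches (invented for illustration)."""
    if qty <= 0:
        return 0.0
    if is_member:
        return 0.1 * qty if qty < 1000 else 0.05 * qty
    return 0.2 * qty

# Each region: (human-readable constraints, predicate over the inputs, behavior there).
REGIONS = [
    ("qty <= 0",                           lambda q, m: q <= 0,                    "fee = 0.0"),
    ("qty > 0 and member and qty < 1000",  lambda q, m: q > 0 and m and q < 1000,  "fee = 0.1 * qty"),
    ("qty > 0 and member and qty >= 1000", lambda q, m: q > 0 and m and q >= 1000, "fee = 0.05 * qty"),
    ("qty > 0 and not member",             lambda q, m: q > 0 and not m,           "fee = 0.2 * qty"),
]

# A real decomposition carries a *proof* of complete, non-overlapping coverage;
# here we only sample a few points to check that property by hand.
for qty, member in [(-5, True), (10, True), (5000, True), (10, False)]:
    matches = [(cond, desc) for cond, pred, desc in REGIONS if pred(qty, member)]
    assert len(matches) == 1, "regions must cover each input exactly once"
    cond, desc = matches[0]
    print(f"qty={qty:5d}, member={member!s:5}: region [{cond}] -> {desc}, fee={venue_fee(qty, member)}")
```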

How does Imandra integrate with large language models, and what new capabilities does this unlock for generative AI?

LLMs and Imandra work together to formalize human input (whether it’s source code, English prose, etc.), reason about it, and then return the output in a way that’s easy to understand. We use agentic frameworks (e.g. LangGraph) to orchestrate this work and deliver the experience as an agent that our customers can use directly, or integrate into their own applications or agents. This symbiotic workflow addresses many of the challenges of using LLM-only AI tools and extends their application beyond previously seen training data.
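As a minimal sketch of that orchestration pattern, a LangGraph pipeline might chain a formalize step, a reasoning step, and an explanation step; the node bodies below are stubs invented for illustration, not Imandra’s actual agents or APIs.

```python
# Minimal LangGraph sketch of the formalize -> reason -> explain orchestration.
# Node bodies are stubs; Imandra's actual agents and APIs are not shown here.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class State(TypedDict, total=False):
    request: str       # raw user input (source code, English prose, ...)
    formal_model: str  # formalized version produced by the LLM step
    verdict: str       # result from the symbolic reasoning step
    answer: str        # plain-language explanation returned to the user

def formalize(state: State) -> State:
    # In the real workflow an LLM translates the request into a formal model.
    return {"formal_model": f"(model-of {state['request']!r})"}

def reason(state: State) -> State:
    # In the real workflow a symbolic engine analyzes the model with an audit trail.
    return {"verdict": f"verified {state['formal_model']}"}

def explain(state: State) -> State:
    # Turn the logical result back into something easy to understand.
    return {"answer": f"Result: {state['verdict']}"}

graph = StateGraph(State)
graph.add_node("formalize", formalize)
graph.add_node("reason", reason)
graph.add_node("explain", explain)
graph.set_entry_point("formalize")
graph.add_edge("formalize", "reason")
graph.add_edge("reason", "explain")
graph.add_edge("explain", END)

app = graph.compile()
print(app.invoke({"request": "Do large orders always get flagged?"})["answer"])
```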

What’s your long-term vision for Imandra, and how do you see it transforming AI applications across industries?

We think neurosymbolic techniques will be the foundation that paves the way for us to realize the promise of AI. Symbolic techniques are the missing ingredient for most industrial applications of AI, and we’re excited to be at the forefront of this next chapter of AI.
