Matthew Fitzpatrick is a seasoned operations and growth specialist with deep expertise in scaling complex workflows and teams. With a background that spans consulting, strategy, and operational leadership, he currently serves as CEO at Invisible Technologies, where he focuses on designing and optimizing end-to-end business solutions. Matthew is passionate about combining human talent with automation to drive efficiency at scale, helping companies unlock transformative growth through process innovation.
Invisible Technologies is a business process automation company that blends advanced technology with human expertise to help organizations scale efficiently. Rather than replacing humans with automation, Invisible creates custom workflows where digital workers (software) and human operators collaborate seamlessly. The company offers services across areas like data enrichment, lead generation, customer support, and back-office operations, enabling clients to delegate complex, repetitive tasks and focus on core strategic goals. Invisible’s unique “work-as-a-service” model provides enterprises with scalable, transparent, and cost-effective operational support.
You recently transitioned from leading QuantumBlack Labs at McKinsey to becoming CEO of Invisible Technologies. What drew you to this role, and what excites you most about Invisible’s mission?
At McKinsey, I had the privilege of working at the forefront of AI innovation – building AI software products, leading R&D efforts, and helping enterprises harness the power of data. What drew me to Invisible Technologies was the opportunity to make AI operational at scale, with a combination of a uniquely flexible AI software platform and an expert marketplace for human-in-the-loop feedback – I believe Reinforcement Learning from Human Feedback (RLHF) is the key to accurate and reliable GenAI implementations. Invisible supports AI across the entire value chain, from data cleansing and data entry automation to chain-of-thought reasoning and custom evaluations. Our mission is simple: combine human intelligence and AI to help businesses deliver on AI’s potential, which in the enterprise has been a lot harder than most people expected.
You’ve overseen 1,000+ engineers and scaled multiple AI products across industries. What lessons from McKinsey are you applying to Invisible’s next phase of growth?
Two lessons stand out. First, successful AI adoption is as much about organizational transformation as it is about technology. You need the right people and processes in place – on top of great models. Second, the companies that win in AI are the ones that master the “last mile” – the transition from experimentation to production. At Invisible, we’re applying that same rigor and structure to help customers move beyond pilots and into production, delivering real business value.
You’ve said that “2024 was the year of AI experimentation, and 2025 is about realizing ROI.” What specific trends are you seeing among enterprises actually achieving that ROI?
Enterprises seeing real ROI this year are doing three things well. First, they’re aligning AI use cases tightly with core business KPIs – such as operational efficiency or customer satisfaction. Second, they’re investing in higher-quality data and human feedback loops to continuously improve model performance. Third, they’re shifting from generic solutions to tailored, domain-specific systems that reflect the complexity of their environments. These companies are no longer just testing AI – they’re scaling it with purpose.
How is the demand for domain-specific and PhD-level data labeling evolving across foundation model providers like AWS, Microsoft, and Cohere?
We’re seeing a surge in demand for specialized labeling as foundation model providers push into more complex verticals. At Invisible, we have a 1% annual acceptance rate on our expert pool, and 30% of our trainers hold master’s degrees or PhDs. That deep expertise is increasingly essential – not only to annotate data accurately, but to provide nuanced, context-aware feedback that improves reasoning, accuracy, and alignment. As models get smarter, the bar for training them gets higher.
Invisible is at the forefront of agentic AI, emphasizing decision-making in real-world workflows. What’s your definition of agentic AI, and where are we seeing the most promise?
Agentic AI refers to systems that don’t just respond to instructions – they plan, make decisions, and take action within defined guardrails. It’s AI that behaves more like a teammate than a tool. We’re seeing the most traction in high-volume, complex workflows such as customer support and insurance claims. In these areas, agentic AI can reduce manual effort, increase consistency, and deliver outcomes that would otherwise require large human teams. It’s not about replacing humans – instead, we’re augmenting them with intelligent agents that can handle the repetitive and the routine.
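To make the “teammate, not a tool” idea concrete, here is a minimal sketch of the loop most agentic systems share: the model plans a step, only allow-listed actions execute, and anything unexpected escalates to a human. The action names and helper functions (plan_next_step, execute) are illustrative assumptions, not Invisible’s actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

# Guardrail: the only actions the agent may take (hypothetical set).
ALLOWED_ACTIONS = {"lookup_policy", "draft_reply", "escalate_to_human"}

@dataclass
class Step:
    action: str    # which action the model chose
    argument: str  # input for that action

@dataclass
class Observation:
    result: str    # what came back from executing the action
    done: bool     # whether the task is complete

def run_agent(task: str,
              plan_next_step: Callable[[str, list], Step],
              execute: Callable[[Step], Observation],
              max_steps: int = 10) -> dict:
    """Plan-act-observe loop: the model proposes each step, guardrails
    decide whether it runs, and a step budget bounds the whole loop."""
    history: list[tuple[Step, Observation]] = []
    for _ in range(max_steps):
        step = plan_next_step(task, history)
        if step.action not in ALLOWED_ACTIONS:
            # Off the allow-list: hand the case to a human operator.
            return {"status": "escalated", "reason": f"disallowed action: {step.action}"}
        obs = execute(step)
        history.append((step, obs))
        if step.action == "escalate_to_human" or obs.done:
            return {"status": "done", "history": history}
    return {"status": "escalated", "reason": "step budget exhausted"}
```

The design choice that matters here is the escalation path: the agent handles routine cases end to end, and everything else lands with a person by default.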
Can you share examples of how Invisible trains models for chain-of-thought reasoning and why it’s critical for enterprise deployment?
Chain-of-thought (CoT) reasoning has unlocked new potential for enterprise AI. At Invisible, we train models to reason step by step, which is crucial when stakes are high – whether you’re diagnosing a patient, analyzing a contract, or validating a financial model. CoT not only improves transparency, but also enables debugging, refinement, and performance gains without massive new datasets. We’ve seen leading models like Gemini, Sonnet, and Grok begin disclosing their reasoning paths, which allows us to examine not only what models output, but how they arrive there. This is laying the groundwork for more advanced methods like Tree of Thought (where models evaluate multiple possible reasoning paths before selecting an answer) and Self-Consistency (where several reasoning paths are sampled and the most common final answer is kept).
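As a rough illustration of the Self-Consistency method mentioned above: sample several chain-of-thought completions at nonzero temperature and keep whichever final answer appears most often. The sample_cot function below is a hypothetical stand-in for a model call that returns a (reasoning, answer) pair.

```python
from collections import Counter
from typing import Callable

def self_consistent_answer(sample_cot: Callable[[str], tuple[str, str]],
                           question: str,
                           n_samples: int = 5) -> str:
    """Sample n_samples independent reasoning paths and majority-vote
    on the final answers, discarding the intermediate reasoning."""
    answers = [sample_cot(question)[1] for _ in range(n_samples)]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer
```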
Invisible supports training across 40+ coding languages and 30+ human languages. How important is cultural and linguistic precision in building globally scalable AI?
It’s critical. Language isn’t just about translation – it’s about context, nuance, and cultural norms. If a model misinterprets tone or misses regional variation, it can lead to poor user experiences, or even compliance risks. Our multilingual trainers aren’t just fluent – they’re embedded in the cultures they represent.
What are the common failure points when companies attempt to scale from proof of concept to production, and how does Invisible help navigate that “last mile”?
The majority of AI models never make it to production because companies underestimate the operational lift required. They lack clean data, robust evaluation protocols, and a strategy for embedding models into real workflows. At Invisible, we combine deep technical expertise with production-grade data infrastructure to help enterprises bridge the gap. Our symbiotic capabilities in training and optimization allow us to both build better models and deploy them successfully.
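One part of that operational lift, the evaluation protocol, can be sketched very simply: score a candidate model against a curated golden set and gate the rollout on a pass-rate threshold. Everything named below (model_fn, golden_set, grade) is a hypothetical stand-in, not Invisible’s tooling.

```python
from typing import Callable

def pre_production_gate(model_fn: Callable[[str], str],
                        golden_set: list[tuple[str, str]],
                        grade: Callable[[str, str], bool],
                        threshold: float = 0.95) -> dict:
    """Run the model over (prompt, expected) pairs and only approve
    deployment if the graded pass rate clears the threshold."""
    passed = sum(grade(model_fn(prompt), expected)
                 for prompt, expected in golden_set)
    pass_rate = passed / len(golden_set)
    return {"pass_rate": pass_rate, "ship": pass_rate >= threshold}
```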
Can you walk us through Invisible’s approach to RLHF (Reinforcement Learning from Human Feedback) and how it differs from others in the industry?
At Invisible, we see Reinforcement Learning from Human Feedback (RLHF) as more than just fine-tuning – it allows for more sophisticated custom evaluation (“eval”) design, and a shift toward training models with nuanced human judgment rather than binary signals like thumbs up and thumbs down. While industry approaches often prioritize scale through high-volume, low-signal data, we focus on collecting structured, high-quality feedback that captures reasoning, context, and trade-offs. This richer signal enables models to generalize more effectively and align more closely with human intent. By prioritizing depth over breadth, we’re building the infrastructure for more robust, aligned AI systems.
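A toy sketch of the gap between binary and structured feedback: instead of a single thumbs-up/down bit, each judgment carries several graded dimensions plus a written rationale, which can then be collapsed into a scalar reward for training. The schema and weights are assumptions chosen for illustration, not Invisible’s actual feedback format.

```python
from dataclasses import dataclass

@dataclass
class StructuredFeedback:
    accuracy: int     # 1-5: is the answer factually correct?
    reasoning: int    # 1-5: are the intermediate steps sound?
    context_fit: int  # 1-5: does it respect domain and tone constraints?
    rationale: str    # free-text explanation of the trade-offs

def to_reward(fb: StructuredFeedback) -> float:
    """Collapse the graded dimensions into one scalar reward in [0, 1];
    the rationale can be kept separately as its own training signal."""
    score = 0.5 * fb.accuracy + 0.3 * fb.reasoning + 0.2 * fb.context_fit
    return (score - 1.0) / 4.0  # map the weighted 1-5 scale onto 0-1
```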
How do you envision the future of AI-human collaboration evolving, especially in high-stakes fields like finance, healthcare, or the public sector?
AI isn’t replacing human expertise – it’s becoming the infrastructure that supports it. I envision a future where AI agents and human experts work in tandem – where clinicians are supported by diagnostic copilots, government agencies use AI to triage benefits more efficiently, and financial analysts are free to focus on strategy rather than spreadsheets. Our focus is designing systems where AI enhances human capability, rather than obscuring or overruling it.