Exclusive: The enterprise AI playbook


Good morning, AI enthusiasts. Cloudera just released its State of Enterprise AI in 2025 survey, capturing the views of 1,500+ IT executives and revealing a critical paradox: AI is everywhere, but it isn't yet fully unlocked.

Leaders see clear value in AI, yet they're grappling with infrastructure gaps like expensive compute, broken data, and governance issues that determine whether AI scales or fails.

To better understand these problems (and their solutions), we partnered with Cloudera and sat down with its CTO, Sergio Gago, for an exclusive Q&A.

In today’s AI rundown:

  • Why only 21% of enterprises have full AI integration

  • An AI playbook for organizations starting from scratch

  • Securing early business wins with AI

  • Measuring those wins to build on growth

  • Taking AI to the data for full security

  • Baking in compliance by design

  • Making 'AI everywhere' a reality

LATEST DEVELOPMENTS

AI INTEGRATION

Image: Kiki Wu / The Rundown

The Rundown: While enterprises are bullish on AI and continue to cite heavy investment and confidence in the tech, only 21% of the leaders in Cloudera's survey said they've fully integrated AI into their core business processes.

Cheung: Why is full AI integration so hard even today? What are the biggest factors holding companies back?

Gago: One of the biggest shifts our survey uncovered was the cost of training models. Compared with our survey one year ago, we found that the cost of accessing compute capacity for training AI is on the rise, jumping from 8% in 2024 to 42% now.

Just as important is access to the right data. To train AI models effectively, organizations need access to 100% of their data, in all forms and wherever it resides. Without full access, models are limited in scope and accuracy. The same applies to RAG (retrieval-augmented generation) techniques, which give LLMs contextual access to your enterprise information.

When AI can be applied to all of this data, whether in the cloud, in the data center, or at the edge, it becomes more trustworthy, more contextual, and ultimately more valuable to the business.
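The RAG pattern Gago references can be sketched in a few lines. This is a toy illustration, not Cloudera's implementation: keyword overlap stands in for a real embedding-based vector store, and the documents are made up.

```python
# Toy RAG sketch: retrieve the most relevant internal documents, then
# ground the model prompt in them. A real deployment would use embeddings
# and a vector store; keyword overlap stands in here for simplicity.

DOCS = {
    "refund-policy": "Refunds are issued within 14 days of a return request.",
    "sla": "Enterprise support tickets are answered within 4 business hours.",
    "onboarding": "New accounts are provisioned by the IT helpdesk.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    words = set(query.lower().split())
    scored = sorted(
        DOCS.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the LLM answers from enterprise data."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("How fast are support tickets answered?")
```

The point of the pattern is exactly what Gago describes: the model's answer is only as good as the data the retriever can reach, so incomplete access directly limits accuracy.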

Why it matters: For AI practitioners and decision-makers, this finding from Cloudera highlights that it's not running or scaling models that hinders AI integration, but the core foundation that lies underneath. The only path to trustworthy, enterprise-wide AI goes through solving infrastructure efficiency and unlocking all organizational data.

PLAYBOOK

Image: Kiki Wu / The Rundown

The Rundown: To reach 100% AI integration, organizations must follow a structured path: first anchoring efforts to clear business goals, then breaking down data and infrastructure barriers, and finally scaling through focused, value-driven use cases.

Cheung: What steps should organizations with zero to little AI take to reach the full 100% AI integration mark?

Gago: First, clarify your goals: define which business problems you are trying to solve and who owns those decisions. Next, ensure your data is clean, contextual, and accessible. That means unifying structured, semi-structured, and unstructured data across environments: cloud, data center, and edge.

From there, build a flexible infrastructure that can evolve as AI models and frameworks change. Prioritize security, governance, and transparency from the start, because trust is foundational.

Finally, use reference architectures or accelerators to move quickly on targeted, high-impact use cases. The organizations that succeed are those that move with focus and responsibility, then scale from there.

Why it matters: Gago's roadmap makes AI integration a step-by-step journey. By following this approach, organizations can turn AI from scattered experiments and fragmented data efforts into a trusted capability that delivers measurable impact across the business.

EARLY WINS

Image: Kiki Wu / The Rundown

The Rundown: While processes across industries are being reshaped by AI, organizations should start with select, tightly scoped use cases that can deliver measurable results.

Cheung: What business processes are being reshaped by AI, and what's a simple, high-confidence AI win you recommend shipping first?

Gago: The use cases span industries from manufacturing to banking. Whether an organization is trying to get ahead of maintenance on the factory floor, wants to revamp its customer experience, or is leveraging AI agents to help identify fraud and security risks, AI has become a ubiquitous asset in every IT leader's toolbelt.

Early use cases come from well-defined, ROI-driven domains. In the case of AI agents, this can mean areas like IT helpdesk agents and DevOps assistants. Prioritizing AI adoption in these domains gives IT leaders a great opportunity to introduce automation while driving tangible results.

Gago added: Helpdesk agents can be deployed to automate micro-tasks such as password resets, responses to tier-one support tickets, and knowledge-base recommendations. DevOps assistants can detect anomalies, automate remediation, improve cost control, or generate alerts for infrastructure management.
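The triage step behind such a helpdesk agent can be sketched simply. This is an illustrative assumption, not a real product: the handler names and keyword rules are invented, and a production agent would use an LLM classifier rather than keyword matching.

```python
# Sketch of tier-one ticket triage: route micro-tasks (like password
# resets) to automation and escalate everything else to a human.
# Handler names and keywords are illustrative assumptions.

AUTOMATABLE = {
    "password_reset": ["password", "reset", "locked out"],
    "kb_suggestion": ["how do i", "where is", "documentation"],
}

def triage(ticket_text: str) -> str:
    """Return the automated handler for a ticket, or 'human' to escalate."""
    text = ticket_text.lower()
    for handler, keywords in AUTOMATABLE.items():
        if any(kw in text for kw in keywords):
            return handler
    return "human"
```

For example, `triage("I'm locked out of my laptop")` routes to the automated `password_reset` handler, while an unrecognized issue falls through to a person, which is what keeps such an early win low-risk.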

Why it matters: AI can feel overwhelming when applied everywhere at once, but focus wins first. By starting with ROI-driven domains like the IT helpdesk, enterprises can deliver quick, measurable results, providing early wins that build confidence, prove value to stakeholders, and create momentum for scaling AI responsibly across functions.

MEASURING OUTCOMES

Image: Kiki Wu / The Rundown

The Rundown: In Cloudera's survey, operational efficiency was cited as the biggest source of ROI from AI projects, but measuring impact shouldn't stop at cost and speed. It should also account for customer and user satisfaction.

Cheung: How do you measure if an AI project is definitely helping?

Gago: Our survey asked respondents to share where they expect the biggest ROI from AI over the next year. 29% pointed to operational efficiency, followed by 18% citing customer experience, 15% product innovation, 14% revenue generation, 13% risk management, and 11% talent productivity.

Gago added: To measure if it's helping, organizations should look at metrics tied to speed, cost, and satisfaction. That can include ticket resolution time, reduction in manual workload, incident frequency, or internal user feedback. AI's impact becomes evident when it consistently shortens cycles, reduces costs, and improves outcomes.
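One of the metrics Gago names, ticket resolution time, can be tracked with a few lines of arithmetic. The numbers below are made up for illustration; the only real content is the before/after comparison itself.

```python
# Sketch of a speed metric: percent reduction in mean ticket resolution
# time after an AI rollout. Sample durations (in hours) are invented.

from statistics import mean

def improvement(before_hours: list[float], after_hours: list[float]) -> float:
    """Percent reduction in mean resolution time after the rollout."""
    b, a = mean(before_hours), mean(after_hours)
    return round(100 * (b - a) / b, 1)

pct = improvement(before_hours=[8.0, 10.0, 12.0], after_hours=[4.0, 5.0, 6.0])
# Mean time drops from 10h to 5h, i.e. a 50.0% reduction.
```

The same shape of calculation applies to the other metrics mentioned (manual workload, incident frequency): pick a baseline, measure after the rollout, and report the delta.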

Why it matters: Measuring AI's impact through efficiency and satisfaction makes its value tangible. When organizations track outcomes and show real benefits, they build a strong case for AI's use and drive executive confidence to fuel broader adoption across the enterprise.

SECURITY

Image: Kiki Wu / The Rundown

The Rundown: As AI adoption accelerates, so do security risks. But proactive governance, enforced lineage, and bringing AI to the data (rather than moving data to AI) can help harness AI's power without compromising trust or security.

Cheung: AI ties to several security concerns, with 50% of survey respondents worrying about training data leaks and 48% about unauthorized access. How does Cloudera bridge this gap for secure AI?

Gago: Governance is critical. Without consistent governance and security standards in place, any time an organization opens up its data to train AI models, there is a risk that it becomes vulnerable to leakage or a third-party actor. The industry was generally good at this with classical machine learning, but somehow many companies forgot about data governance in the world of generative AI.

Beyond Cloudera's governance tooling, our major advantage is bringing AI to your data. By partnering with Cloudera, you can maintain data ownership, keep data wherever it resides, and apply AI on top of it, capturing all insights without opening yourself and your business to increased risk. That is: data access, fine-grained controls, catalog, and lineage as the building blocks for safe and private AI deployments.

Cloudera also delivers data lineage to ensure data quality and help teams understand how AI is applying data to make decisions. This eliminates the black-box conundrum, giving users visibility into the data AI is using to respond or take action.
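The lineage idea, every AI answer carrying a record of the data it drew on, can be sketched abstractly. This is not Cloudera's lineage system; the record IDs and the toy "model" are invented to show the shape of the bookkeeping.

```python
# Sketch of answer-level lineage: each response records exactly which
# source records it drew on, so a reviewer can trace a decision back to
# its data instead of facing a black box. All IDs here are illustrative.

from dataclasses import dataclass, field

@dataclass
class TracedAnswer:
    text: str
    sources: list[str] = field(default_factory=list)  # record IDs used

def answer_with_lineage(question: str, records: dict[str, str]) -> TracedAnswer:
    """Toy 'model': quote matching records and log their IDs as lineage."""
    hits = {rid: txt for rid, txt in records.items()
            if any(w in txt.lower() for w in question.lower().split())}
    return TracedAnswer(text=" ".join(hits.values()), sources=sorted(hits))

result = answer_with_lineage(
    "refund window",
    {"doc-1": "The refund window is 14 days.", "doc-2": "Office hours are 9-5."},
)
```

Here `result.sources` shows the answer rests only on `doc-1`, which is the kind of visibility that lets users audit what the AI responded from.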

Why it matters: Strong governance and keeping AI close to the data not only reduce the risk of leaks and unauthorized access but also improve visibility into how decisions are made. The benefit is clear: organizations can unlock more value from AI while protecting sensitive data and building trust with customers and regulators.

COMPLIANCE

Image: Kiki Wu / The Rundown

The Rundown: Teams often get stuck writing and implementing policies to enforce security and governance compliance. But, as Gago points out, compliance should be baked in right from the start, not added as an afterthought.

Cheung: What's a practical, non-scary way to put basic security rules in place across a setup, and actually enforce them?

Gago: Start by embedding rules directly into your data architecture, not layering them on top of it. That means things like encryption, access controls, lineage, and audit trails should be baked in from the start, not retrofitted after the fact.

Write policies once, then apply them universally across public cloud, private cloud, and the data center, wherever the data lives. Enforcing policy should feel automatic, not manual. The best systems don't rely on someone remembering to check a box; they enforce compliance by design.
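The "write once, enforce everywhere" idea is essentially policy as data: one policy table consulted by a single enforcement function, regardless of where the data lives. The roles and sensitivity labels below are illustrative assumptions, not a real schema.

```python
# Sketch of compliance by design: a single policy table and one
# enforcement point, consulted identically in cloud, data center,
# or edge, so nobody has to remember to check a box per system.
# Labels and roles are illustrative assumptions.

POLICY = {
    # data sensitivity label -> roles allowed to read it
    "public": {"analyst", "engineer", "auditor"},
    "pii": {"auditor"},
}

def can_read(role: str, label: str) -> bool:
    """Single enforcement point: the same answer in every environment."""
    return role in POLICY.get(label, set())
```

With this shape, changing who may see PII is a one-line edit to `POLICY`, and unknown labels default to deny, which is the "a few high-impact rules first" posture Gago recommends.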

Gago added: Focus on a few high-impact rules: who can see what, where sensitive data lives, and how it's tracked. Start small, then scale up. Most importantly, make policy transparent and explainable, and involve your legal, IT, cybersecurity, and compliance teams from the start. Governance can't be an afterthought.

Why it matters: When policies are enforced by design and made explainable, teams understand the rules, the reasoning behind them, and how they are implemented, and they accept the guardrails that secure AI-driven business processes.

SCALING AI

Image: Kiki Wu / The Rundown

The Rundown: AI may soon be deeply embedded across most enterprises, but the journey won't be easy. Teams must overcome barriers around integration, management, and security, and above all, focus on building trust in their systems.

Cheung: If we zoom out five years, do you believe 'AI everywhere' will be a reality? And what's your personal north star for Cloudera to that end?

Gago: 'AI everywhere' is possible even today, but only if organizations build with governance and flexibility at the core and create access to data anywhere. The biggest barriers (data silos, cost, and compliance) can be solved with an open, policy-driven architecture. But the real challenge won't just be infrastructure. It will be trust.

Gago added: The future belongs to teams that can scale AI responsibly, with visibility into how decisions are made and confidence in the data behind them. To that end, Cloudera's north star is clear: bringing AI to data, anywhere. That means enabling large enterprises to securely apply and scale AI to 100% of their data. Cloudera aims to be the platform enterprises trust most to innovate confidently, govern effectively, and drive lasting value.

Why it matters: Trust is emerging as the true currency of enterprise AI. Gago makes it clear that scaling AI responsibly is all about ensuring decisions are explainable, governed, and grounded in reliable data. Enterprises that embed trust at the core of their AI efforts will be the winners.

ASK ANA

What are your thoughts on this topic?
Let us know in the comments below.
