Designing digital resilience in the era of agentic AI


While global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization’s ability to maintain service continuity, security, and cost control during unexpected events. This uncertainty, coupled with the profound complexity introduced by agentic AI’s autonomous decision-making and its interaction with critical infrastructure, demands a reimagining of digital resilience.

Organizations are turning to the concept of a data fabric—an integrated architecture that connects and governs information across all business layers. By breaking down silos and enabling real-time access to enterprise-wide data, a data fabric can empower both human teams and agentic AI systems to sense risks, prevent problems before they occur, recover quickly when they do, and sustain operations.

Machine data: A cornerstone of agentic AI and digital resilience

Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agentic AI demands deep insight into an organization’s machine data: the logs, metrics, and other telemetry generated by devices, servers, systems, and applications.
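As a concrete (and purely illustrative) picture of what that telemetry can look like, the Python sketch below parses a raw log line into a structured record an automated system could reason over. The log format and field names are invented for this example, not drawn from any particular product.

```python
import re
from dataclasses import dataclass
from typing import Optional

# Hypothetical log format, invented for illustration; real machine data
# formats vary widely across devices, servers, and applications.
LOG_PATTERN = re.compile(
    r"(?P<ts>\S+) (?P<host>\S+) (?P<service>\S+) "
    r"level=(?P<level>\w+) latency_ms=(?P<latency_ms>\d+)"
)

@dataclass
class TelemetryEvent:
    ts: str
    host: str
    service: str
    level: str
    latency_ms: int

def parse_line(line: str) -> Optional[TelemetryEvent]:
    """Turn one raw log line into a structured event, or None if it doesn't match."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None  # unparsed lines become blind spots in downstream analysis
    return TelemetryEvent(
        ts=m["ts"],
        host=m["host"],
        service=m["service"],
        level=m["level"],
        latency_ms=int(m["latency_ms"]),
    )

print(parse_line("2025-01-15T09:30:00Z web-01 checkout level=warn latency_ms=842"))
```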

To put agentic AI to use in driving digital resilience, it must have seamless, real-time access to this data flow. Without comprehensive integration of machine data, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, emphasizes, agentic AI systems depend on machine data to understand context, simulate outcomes, and adapt continuously. This makes machine data oversight a cornerstone of digital resilience.

“We often describe machine data as the heartbeat of the modern enterprise,” says Hathi. “Agentic AI systems are powered by this vital pulse, requiring real-time access to information. It’s essential that these intelligent agents operate directly on the intricate flow of machine data and that AI itself is trained using the exact same data stream.”

Few organizations are currently achieving the level of machine data integration required to fully enable agentic systems. This not only narrows the scope of possible use cases for agentic AI but, worse, can also lead to data anomalies and errors in outputs or actions. Natural language processing (NLP) models designed prior to the advent of generative pre-trained transformers (GPTs) were plagued by linguistic ambiguities, biases, and inconsistencies. Similar misfires could occur with agentic AI if organizations rush ahead without providing models with a foundational fluency in machine data.
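To make “missing critical anomalies” concrete, here is a minimal sketch of one classic technique, a rolling z-score detector over a metric stream. It illustrates the general idea of sensing deviations in machine data, not how any vendor’s agentic system actually works; the window size and threshold are arbitrary.

```python
from collections import deque
from statistics import mean, stdev

class ZScoreDetector:
    """Flag metric values that deviate sharply from a rolling baseline.

    A deliberately simple stand-in for the anomaly sensing an agentic
    system might perform over machine data; parameters are illustrative.
    """

    def __init__(self, window: int = 30, threshold: float = 3.0):
        self.history: deque = deque(maxlen=window)
        self.threshold = threshold

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous relative to recent history."""
        anomalous = False
        if len(self.history) >= 2:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

detector = ZScoreDetector(window=10)
stream = [100, 102, 99, 101, 100, 103, 98, 100, 500, 101]  # synthetic latencies
for v in stream:
    if detector.observe(v):
        print(f"anomaly: {v}")
```

The point of the example is the dependency it exposes: a detector like this is only as good as the data flowing into it, which is why incomplete machine data integration translates directly into missed anomalies.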

For many companies, keeping up with the dizzying pace of AI progress has been a major challenge. “In some ways, the speed of this innovation is beginning to hurt us, because it creates risks we’re not ready for,” says Hathi. “The difficulty is that as agentic AI evolves, relying on traditional LLMs trained on human text, audio, video, or print data doesn’t work when you need your system to be secure, resilient, and always available.”

Designing a data fabric for resilience

To address these shortcomings and build digital resilience, technology leaders should pivot to what Hathi describes as a data fabric design, better suited to the demands of agentic AI. This involves weaving together fragmented assets from across security, IT, business operations, and the network to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analysis and risk management.
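One way to picture the idea in miniature, assuming (hypothetically) that each silo can expose a simple reader interface: a fabric registers heterogeneous sources behind a single query layer so events can be correlated across security, IT, and business data. All class, source, and field names below are invented for illustration.

```python
from typing import Callable, Dict, Iterable, List

Event = dict  # simplified: each source yields dicts describing events

class DataFabric:
    """Toy unified access layer over siloed data sources; illustration only."""

    def __init__(self) -> None:
        self._sources: Dict[str, Callable[[], Iterable[Event]]] = {}

    def register(self, name: str, reader: Callable[[], Iterable[Event]]) -> None:
        """Attach a data source behind a common interface."""
        self._sources[name] = reader

    def query(self, predicate: Callable[[Event], bool]) -> List[Event]:
        """Evaluate one predicate across every registered source."""
        hits = []
        for name, reader in self._sources.items():
            for event in reader():
                if predicate(event):
                    hits.append({"source": name, **event})
        return hits

fabric = DataFabric()
fabric.register("security", lambda: [{"ts": 1, "kind": "failed_login", "host": "web-01"}])
fabric.register("it_ops", lambda: [{"ts": 1, "kind": "cpu_spike", "host": "web-01"}])

# Correlate across silos: everything touching web-01, regardless of which team owns the data.
print(fabric.query(lambda e: e.get("host") == "web-01"))
```

A real data fabric adds governance, streaming, and scale, but the design choice is the same: one interface over many sources, so no single team’s silo hides signals the others need.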
