The accountability challenge: It’s not them, it’s you
Until now, governance has focused on model output risks, with humans in the loop before consequential decisions were made, such as loan approvals or job applications. Model behavior, including drift, alignment, data exfiltration, and poisoning, was the main target. The pace was set by a human prompting a model in a chatbot format, with plenty of back-and-forth interaction between human and machine.
Today, with autonomous agents operating in complex workflows, the vision and the advantages of applied AI require significantly fewer humans in the loop. The purpose is to operate a business at machine pace by automating manual tasks that have clear architecture and decision rules. From a liability standpoint, however, the goal isn't any reduction in enterprise or business risk between a machine operating a workflow and a human operating one. CX Today summarizes the situation succinctly: "AI does the work, humans own the risk," and California state law AB 316, which went into effect January 1, 2026, removes the "AI did it; I didn't approve it" excuse. This is comparable to parenting, where an adult is held liable for a child's actions that negatively impact the larger community.
The challenge is that without building in code that enforces operational governance aligned to different levels of risk and liability across the whole workflow, the advantage of autonomous AI agents is negated. Previously, governance was static and aligned to the pace of interaction typical of a chatbot. Autonomous AI, however, by design removes humans from many decisions, which changes what governance must cover.
Considering permissions
Much like handing a three-year-old child a video game console that remotely controls an Abrams tank or an armed drone, leaving a probabilistic system that can change critical enterprise data operating without real-time guardrails carries significant risks. For example, agents that integrate and chain actions across multiple corporate systems can drift beyond the privileges that a single human user could be granted. To move forward successfully, governance must shift beyond policy set by committees to operational code built into the workflows from the beginning.
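What "governance as operational code" can look like in practice: a guardrail that sits between an agent and the systems it touches, denying any action above the agent's granted risk tier unless a human has explicitly approved the escalation. This is a minimal Python sketch; the tier names, `AgentAction` structure, and `guardrail` function are hypothetical illustrations, not a real framework's API.

```python
from dataclasses import dataclass

# Hypothetical risk tiers; a real deployment would map these to
# enterprise policy (e.g. read-only vs. data-mutating vs. destructive).
RISK_TIERS = {"read": 0, "write": 1, "delete": 2}

@dataclass
class AgentAction:
    agent_id: str
    action: str   # e.g. "read", "write", "delete"
    target: str   # the corporate system the agent wants to touch

def guardrail(request: AgentAction, granted_tier: int,
              human_approved: bool = False) -> bool:
    """Allow the action only if it falls within the agent's standing
    risk tier, or if a human has explicitly approved the escalation."""
    required = RISK_TIERS.get(request.action)
    if required is None:
        return False               # unknown actions are denied by default
    if required <= granted_tier:
        return True                # within standing permissions
    return human_approved          # escalation puts a human back in the loop

# An agent granted tier 0 (read-only) can read but cannot delete on its own.
print(guardrail(AgentAction("agent-7", "read", "crm"), granted_tier=0))    # True
print(guardrail(AgentAction("agent-7", "delete", "crm"), granted_tier=0))  # False
```

The key design choice is deny-by-default: the human checkpoint is reintroduced only at the risk boundary, so routine low-risk actions still run at machine pace.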
A humorous meme about the behavior of toddlers with toys starts with all the reasons that whatever toy you have is mine and ends with a broken toy that is definitely yours. For instance, OpenClaw delivered a user experience closer to working with a human assistant, but the thrill faded as security experts realized inexperienced users could easily be compromised by using it.
For a long time, enterprise IT has lived with shadow IT and the reality that expert technical teams must take over and clean up assets they didn't architect or install, much like the toddler handing back a broken toy. With autonomous agents, the risks are larger: persistent service account credentials, long-lived API tokens, and permissions to make decisions over core file systems. To meet this challenge, it is imperative to allocate appropriate IT budget and labor upfront to sustain central discovery, oversight, and remediation for the thousands of worker- or department-created agents.
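Central discovery can start simply: sweep an inventory of agent service-account credentials and flag any token that has outlived the rotation window. The sketch below is illustrative; the inventory shape, agent names, and 90-day window are hypothetical assumptions, not a reference to any specific identity platform.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory of agent service-account credentials.
credentials = [
    {"agent": "invoice-bot", "issued": datetime(2024, 1, 10, tzinfo=timezone.utc)},
    {"agent": "hr-helper",   "issued": datetime(2025, 12, 1, tzinfo=timezone.utc)},
]

def stale_credentials(creds, now, max_age=timedelta(days=90)):
    """Return the agents whose tokens have outlived the rotation window
    and should be rotated or revoked by the central oversight team."""
    return [c["agent"] for c in creds if now - c["issued"] > max_age]

now = datetime(2026, 1, 15, tzinfo=timezone.utc)
print(stale_credentials(credentials, now))  # ['invoice-bot']
```

In a real program this sweep would run continuously, since the long-lived tokens the paragraph describes are exactly the ones nobody remembers to rotate.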
Having a retirement plan
Recently, an acquaintance mentioned that she saved a client hundreds of thousands of dollars by identifying and then ending a "zombie project": a neglected or failed AI pilot left running on a GPU cloud instance. There are potentially thousands of agents inside a business at risk of becoming a zombie fleet. Today, many executives encourage employees to use AI (or else), and employees are told to create their own AI-first workflows or AI assistants. With the utility of something like OpenClaw and top-down directives, it is easy to project that the number of build-my-own agents coming to the office with their human workers will explode. Since an AI agent is a program that may fall under the definition of company-owned IP, those agents may be orphaned as a worker changes departments or companies. There must be proactive policy and governance to decommission and retire any agents linked to a specific worker ID and its permissions.
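The retirement policy described above can be wired into offboarding: when a worker ID is deactivated, every agent registered to that owner is retired in the same pass and its credentials queued for revocation. This is a minimal sketch against a hypothetical in-memory registry; field names and IDs are invented for illustration.

```python
# Hypothetical agent registry keyed to the employee who created each agent.
registry = [
    {"agent_id": "a1", "owner": "emp-1001", "status": "active"},
    {"agent_id": "a2", "owner": "emp-1001", "status": "active"},
    {"agent_id": "a3", "owner": "emp-2002", "status": "active"},
]

def retire_agents(registry, departed_worker_id):
    """Mark every active agent owned by a departing worker as retired
    and return the agent IDs whose credentials must be revoked."""
    revoked = []
    for entry in registry:
        if entry["owner"] == departed_worker_id and entry["status"] == "active":
            entry["status"] = "retired"
            revoked.append(entry["agent_id"])
    return revoked

# When emp-1001 leaves, both of their agents are retired in one pass.
print(retire_agents(registry, "emp-1001"))  # ['a1', 'a2']
```

Tying the sweep to the worker ID rather than to individual agents is the point: nobody has to remember which agents a departing employee built.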
Financial optimization is governance out of the gate
While for some executives autonomous AI looks like a way to improve their operating margins by limiting human capital, many are finding that the ROI of human labor replacement is the wrong angle to take. Adding AI capabilities to the enterprise doesn't mean purchasing a new software tool with predictable instance-per-hour or per-seat pricing. A December 2025 IDC survey sponsored by DataRobot indicated that 96% of organizations deploying generative AI and 92% of those implementing agentic AI reported costs were higher or much higher than expected.
