When AI Gets Hands
What Actually Changes for AI Governance Teams
Something shifted in the past few months, and if you’re running an AI governance program, you’ve probably felt it. The tools we are now evaluating aren’t the same category of thing we were dealing with a year ago.
Claude Cowork and OpenClaw are part of a new generation of systems: agentic AI. These systems don’t just answer questions or draft text for your review. They act. They click buttons, move files, query databases, and execute multi-step workflows. They have, for lack of a better term, hands.
This is not a subtle change, and your governance program probably isn’t ready for it.
The Governance Model You Built Was for a Different Problem
Most AI governance programs were designed around a simple mental model: AI as an oracle. You ask it something, it responds, you evaluate the response, you decide what to do. The human stays in the loop at the decision point.
Agentic AI breaks this. For example, when Claude Cowork executes a contract triage workflow that queries your matter management system, pulls relevant precedents, drafts redlines, and sends a summary to your inbox, the human is no longer in the loop at each decision point. The human is at the end, reviewing outputs of a process that already happened.
This means governance has to move earlier in the chain. You can’t rely on review after the fact when the AI has already accessed sensitive data, modified documents, or triggered downstream systems. Your policies and controls need to address what actions the AI is permitted to take, not just what outputs it produces.
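To make that concrete, here is a minimal sketch of what a pre-action policy gate could look like: the agent proposes a tool call, a deny-by-default check runs before anything executes, and review happens before the action instead of after it. The agent name, tool names, and policy entries are all illustrative, not any particular vendor's API.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    agent: str    # which agentic system is asking
    tool: str     # e.g. "email.send", "dms.update_status"
    target: str   # the resource the action would touch

# What each agent is authorized to DO, not just what it may read.
# Anything absent is denied by default.
ACTION_POLICY = {
    "contract-triage-agent": {
        "dms.read",
        "dms.draft_redline",
        # "email.send" is deliberately missing: summaries go to a
        # human-release queue instead of straight to the client.
    },
}

def authorize(action: ProposedAction) -> bool:
    """Runs BEFORE execution; unknown agents and tools are denied."""
    return action.tool in ACTION_POLICY.get(action.agent, set())

# The agent runtime consults the gate before every tool invocation:
proposed = ProposedAction(agent="contract-triage-agent",
                          tool="email.send", target="client@example.com")
if not authorize(proposed):
    print(f"BLOCKED: {proposed.agent} may not call {proposed.tool}")
```

The design choice that matters is deny-by-default: a new tool or a renamed action fails closed until someone explicitly grants it.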
Do ISO 42001 and NIST AI RMF Have You Covered?
The honest answer is: partially.
Both frameworks provide solid foundations for AI risk management. ISO 42001 gives you a management system structure. NIST AI RMF offers a comprehensive approach to identifying and mitigating AI risks across the lifecycle. If you’ve implemented either, you’re ahead of most organizations.
But neither framework was designed with agentic AI front of mind. They assume a model where you can identify risks, implement controls, and monitor outcomes in a relatively controlled environment. Agentic systems introduce complications that require supplemental thinking.
What’s missing from most implementations:
Action-level permissioning. Your framework probably addresses what data AI can access. Does it address what the AI can do with that data? Can it send emails? Create calendar invites? Modify records? Delete files?
Scope containment. When an agentic system encounters an obstacle in its workflow, can it improvise? Should it? What boundaries exist on its problem-solving autonomy?
Audit trail granularity. You likely log AI queries and outputs. Are you logging intermediate steps, tool calls, and decision points within an agentic workflow?
Failure mode planning. What happens when an agentic workflow partially completes before encountering an error? How do you roll back actions that have already been taken? A sketch covering this point and the logging point above follows the list.
If your ISO 42001 or NIST AI RMF implementation doesn’t address these questions, you have work to do.
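To make the last two gaps concrete, here is a sketch of a workflow runner that logs every intermediate tool call and pairs each action with a compensating action, so a partially completed workflow gets unwound rather than improvised around. All names are hypothetical; treat this as a pattern, not a product.

```python
import time
from typing import Callable

AUDIT_LOG: list[dict] = []  # in practice: an append-only store

def log_step(agent: str, tool: str, outcome: str) -> None:
    """Record every intermediate step, not just the final output."""
    AUDIT_LOG.append({"ts": time.time(), "agent": agent,
                      "tool": tool, "outcome": outcome})

def run_workflow(agent: str,
                 steps: list[tuple[str, Callable, Callable]]) -> None:
    """Execute (name, action, compensator) steps; unwind on failure."""
    completed: list[tuple[str, Callable]] = []
    for name, action, undo in steps:
        try:
            action()
        except Exception as exc:
            log_step(agent, name, f"error: {exc}")
            # No improvisation: unwind what already happened,
            # newest first, then leave the rest to a human.
            for done_name, done_undo in reversed(completed):
                done_undo()
                log_step(agent, done_name, "rolled back")
            return
        log_step(agent, name, "ok")
        completed.append((name, undo))

# Hypothetical usage; the callables would wrap real tool calls:
# run_workflow("contract-triage-agent", [
#     ("dms.update_status", mark_in_review, restore_prior_status),
#     ("email.send", send_summary, recall_summary),  # weak compensator
# ])
```

The uncomfortable part is the last commented line: some actions, like a sent email, have no clean compensator, which is exactly why the permissioning question comes before the rollback question.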
Talking to Leadership and Customers
The temptation when discussing agentic AI with leadership is to emphasize the efficiency gains. And they are real. But if you lead with efficiency and bury the risk profile, you’re setting yourself up for a difficult conversation later.
My recommendation: be concrete about what “agentic” means in practice.
Don’t say “we’re implementing AI-powered workflow automation.” Say “we’re implementing a system that will have access to our document management system and can independently execute multi-step review processes, including drafting communications and modifying document status.”
Leadership and customers need to understand that this is not a smarter search bar. This is something that takes actions on behalf of your organization. Once they understand that, the conversation about appropriate controls and oversight becomes much more productive.
Also worth addressing directly: these tools are coming whether governance approves them or not. Employees install tools because the tools are useful. Your governance program needs to account for the fact that prohibition isn’t a realistic strategy.
The Thing You’re Probably Not Thinking About
Here’s what I think most AI governance leaders are underweighting: the interaction effects between multiple agentic systems.
Many organizations are piloting several agentic tools simultaneously. Something for legal. Something else for sales. Another tool for engineering. Each system has its own permissions, its own access, its own scope of action.
What happens when these systems interact? What happens when the output of one agentic process becomes the input to another? You can have two individually well-governed systems that create ungoverned outcomes when combined.
This isn’t theoretical. As organizations deploy more agentic tools, the potential for unexpected interactions increases. Your governance program probably evaluates each tool in isolation. It probably doesn’t model how those tools might interact in practice.
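One way to start modeling this is almost embarrassingly simple: write down what each agent can read and write, then follow the edges. The toy sketch below, with made-up agents and permissions, shows how two individually scoped tools compose into a data path nobody approved.

```python
# Hypothetical per-agent scopes: each one looks reasonable alone.
AGENT_CAPS = {
    "legal-agent": {"reads": {"contracts_db"}, "writes": {"matter_notes"}},
    "sales-agent": {"reads": {"matter_notes"}, "writes": {"external_email"}},
}

def reachable_sinks(source: str) -> set[str]:
    """Follow read->write edges: where can data from `source` end up?"""
    frontier, sinks = {source}, set()
    changed = True
    while changed:
        changed = False
        for caps in AGENT_CAPS.values():
            if caps["reads"] & frontier:
                new = caps["writes"] - frontier
                if new:
                    frontier |= new
                    sinks |= new
                    changed = True
    return sinks

# Neither agent alone may move contract data to external email,
# but the combined reachability includes exactly that path:
print(reachable_sinks("contracts_db"))  # includes 'external_email'
```

Even a crude reachability check like this surfaces combined paths worth reviewing before deployment; a real analysis would also need to cover shared inboxes, calendars, and any other surface where one agent’s output can become another’s input.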
The other thing I’d flag: insurance. Most cyber insurance policies weren’t written with agentic AI in mind. Most E&O policies weren’t either. If an agentic system makes an error that causes client harm, your coverage assumptions may be wrong. This is a conversation to have with your broker sooner rather than later.
Bottom Line
Agentic AI isn’t a marketing term. It represents a genuine shift in what these systems can do and, consequently, what risks they introduce. The governance frameworks we’ve built are useful starting points, but they need extension.
The organizations that will navigate this well are the ones that update their mental models. AI governance is no longer primarily about data and outputs. It’s about actions and permissions. It’s about what you’re authorizing these systems to do on your behalf.
That’s a harder problem. It’s also the actual problem you now face.