Why enterprise AI contracts are missing the most dangerous lock-in clause in technology history.
Every major AI lab is building the same thing: a persistent agent that watches how you work, learns your patterns, and acts on your behalf. The lock-in is not about data. Enterprises have spent two decades building legal frameworks for data portability. The lock-in is about telemetry—the accumulated behavioral model an always-on agent builds by observing how your people work, what they prioritize, how they communicate, and what they ignore.
That model has no export format, no portability standard, and no legal framework governing ownership. When you switch providers, you lose it. Your next agent starts from zero.
In March 2026, a packaging error pushed approximately 500,000 lines of Anthropic’s Claude Code source to a public registry. Among the internal projects revealed was Conway: a standalone agent environment with persistent memory, extension support through a proprietary .cnw.zip format, external event triggers, and browser integration.
Conway is not on Anthropic’s public roadmap. It is an internal project that reveals the platform strategy the company has been executing across five surfaces in a single quarter.
The significance is not what Conway does. Always-on agents are inevitable. The significance is what Conway accumulates. After six months of operation, a Conway instance doesn’t just store your files and calendar entries. It holds a model of how you work: which emails you respond to immediately, which you ignore, which meetings you reschedule, which Slack threads you monitor, which decisions you delegate and which you keep.
Conway does not exist in isolation. It is the capstone of a five-move strategy Anthropic executed in under 90 days:
Every piece pushes in one direction: build inside our walls, use our surfaces, run through our billing. This is the Active Directory playbook. Microsoft went from operating system to desktop to enterprise identity in fifteen years. Anthropic is attempting the same arc—model provider to developer tool to enterprise platform to agent operating system—in fifteen months.
Anthropic published MCP—the Model Context Protocol—as an open standard. OpenAI adopted it. Google adopted it. The Linux Foundation hosts it. Then Conway adds a proprietary extension layer on top. Extensions packaged as .cnw.zip include custom interfaces and tools that work only inside Conway’s environment. They are not portable.
This is the Google Play Services pattern applied to AI. Android is open source. Google Play Services—Maps, payments, push notifications—is proprietary. You can build Android without Google’s services. In practice, nobody does. MCP is the open foundation. Conway’s extension ecosystem is the proprietary layer on top.
Enterprise procurement teams have spent twenty years developing frameworks for data portability. GDPR, CCPA, SOC 2, data residency requirements, right-to-delete provisions—the legal infrastructure for “can I take my stuff with me” is mature. Imperfect, but functional.
None of it covers what Conway-class agents accumulate.
Behavioral lock-in compounds. Every day Conway operates, it learns more about how your organization works. At month one, switching means losing convenience. At month six, switching means losing an agent that understands your VP’s communication style, your team’s deployment cadence, and which Slack channels carry signal. At month twelve, you are not switching AI providers. You are amputating institutional memory.
The switching cost curve is not linear. It is exponential. And unlike data migration—which is painful but bounded—behavioral context migration is currently impossible. Not expensive. Not difficult. Impossible. The format does not exist. The standard does not exist. The receiving system has no way to ingest it.
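The shape of that claim can be made concrete with a toy model. Everything here is an illustrative assumption, not a measurement: the rates, ceilings, and record counts are invented parameters chosen only to show why a bounded cost and a compounding cost diverge.

```python
# Toy model of switching-cost growth (illustrative only; every
# parameter is a made-up assumption, not data).
# Data migration cost grows with volume and is bounded: painful, capped.
# Behavioral-context cost compounds: each day of observation builds on
# everything the agent already learned.

def data_migration_cost(days: int, daily_records: int = 1000,
                        cost_per_record: float = 0.001,
                        ceiling: float = 50_000.0) -> float:
    """Linear in volume and capped: expensive but bounded."""
    return min(days * daily_records * cost_per_record, ceiling)

def behavioral_context_cost(days: int, base: float = 1.0,
                            daily_compounding: float = 0.02) -> float:
    """Compounds daily: value of accumulated context at day N."""
    return base * ((1 + daily_compounding) ** days - 1)

month1 = behavioral_context_cost(30)
month12 = behavioral_context_cost(365)
# At an assumed 2% daily compounding rate, month 12 is not 12x month 1;
# it is orders of magnitude larger, while migration cost stays capped.
```

The specific numbers are arbitrary; the structural point is not. A capped curve and a compounding curve can look similar at month one and be incomparable at month twelve.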
Where Conway accumulates behavioral context inside the provider’s infrastructure, the Autonomaton Pattern makes that context sovereign by architecture. Not by policy. Not by contract clause. By the structural properties of the system itself.
Every Autonomaton instance traverses the same five stages, regardless of domain, model, or provider:
The pipeline is not a suggestion. It is an invariant. Nothing runs alongside it. No sub-pipelines. No bypasses. This constraint is what makes the pattern auditable, composable, and—critically—portable.
Three architectural properties make the Autonomaton’s portability structural rather than aspirational:
Declarative Governance. Every behavior rule is externalized in configuration files a non-technical domain expert can read and edit. Change the config, the behavior changes. Version the config, the behavior has a history. Audit the config, the behavior is explained. No separate “explainability layer” required. The governance is the architecture.
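A minimal sketch of what "the governance is the architecture" can look like. The schema, field names, and actions below are hypothetical illustrations, not the published standard; in practice the rules would live in a versioned YAML file, shown here as a plain dict so the example is self-contained.

```python
# Hypothetical declarative governance config (field names are
# assumptions). Behavior rules are plain data that a non-technical
# reviewer can read, edit, diff, and version.

GOVERNANCE_CONFIG = {
    "version": "2026-01-15",
    "rules": [
        {"match": "email.reply",   "max_autonomy": "draft_only"},
        {"match": "calendar.move", "max_autonomy": "propose"},
        {"match": "deploy.prod",   "max_autonomy": "forbid"},
    ],
}

def allowed_autonomy(action: str, config: dict = GOVERNANCE_CONFIG) -> str:
    """Look up the declared ceiling for an action."""
    for rule in config["rules"]:
        if action == rule["match"]:
            return rule["max_autonomy"]
    return "forbid"  # default-deny: unlisted behavior is never autonomous
```

Because the rules are data, editing the config changes the behavior, the version history of the config is the audit trail, and no separate explainability layer is needed to answer "why did the agent do that?"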
Sovereign Zones. Every action has an explicit risk classification: Green (autonomous routine), Yellow (supervised proposals), Red (human-only). Zone boundaries are declarative—defined in configuration, not hardcoded. The system earns autonomy through demonstrated reliability. It cannot unilaterally grant itself new authority.
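The zone boundaries can be sketched the same way. The zone names come from the text above; the schema layout and action names are illustrative assumptions. The key property is that the classification is data the operator controls, and the lookup has no code path by which the system widens its own authority.

```python
# Sketch of sovereign zones (Green/Yellow/Red from the text; schema and
# actions are assumptions). Zone boundaries are config, not code.

from enum import Enum

class Zone(Enum):
    GREEN = "autonomous"   # routine: runs without review
    YELLOW = "supervised"  # agent proposes, human approves
    RED = "human_only"     # agent may not act at all

ZONES_SCHEMA = {
    "triage.inbox": Zone.GREEN,
    "send.external_email": Zone.YELLOW,
    "sign.contract": Zone.RED,
}

def classify(action: str, schema: dict = ZONES_SCHEMA) -> Zone:
    """Unknown actions default to RED: authority is granted, never assumed."""
    return schema.get(action, Zone.RED)
```

Promoting an action from Yellow to Green is a reviewed config edit by a human, which is how "earning autonomy through demonstrated reliability" becomes a structural property rather than a policy promise.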
Feed-First Telemetry. Every interaction generates structured telemetry as its primary output—not as a side effect. This telemetry is the behavioral context. It belongs to the operator. It is structured, inspectable, exportable, and deletable. It is the thing Conway locks inside provider infrastructure and the Autonomaton externalizes by design.
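What "structured, inspectable, exportable, and deletable" means in practice can be shown in a few lines. The field names below are assumptions for illustration; the point is that the behavioral context is an append-only log the operator holds, not opaque state inside a provider's infrastructure.

```python
# Sketch of feed-first telemetry (field names are assumptions). Every
# interaction emits a structured record as its primary output; the feed
# itself is the behavioral context, portable as plain JSON.

import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class TelemetryEvent:
    action: str
    zone: str
    outcome: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

feed: list[TelemetryEvent] = []

def record(action: str, zone: str, outcome: str) -> TelemetryEvent:
    event = TelemetryEvent(action, zone, outcome)
    feed.append(event)
    return event

def export_feed() -> str:
    """The whole behavioral context, exportable (or deletable) as JSON."""
    return json.dumps([asdict(e) for e in feed], indent=2)
```

Exporting the feed is exporting the behavioral model; deleting it is the right-to-delete applied to learned context, not just stored data.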
Standard enterprise AI contracts address data handling comprehensively. They do not address what an always-on agent learns. The gap is not ignorance. It is a category error. Procurement teams are applying data-era frameworks to an intelligence-era problem.
Convergence. Every major lab—Anthropic, OpenAI, Google, Microsoft—is building always-on persistent agents. By mid-2027, the persistent agent will be the default enterprise AI interface, not an option.
Compounding. Behavioral lock-in compounds daily. Enterprises deploying in H2 2026 without telemetry terms will face material switching costs by H1 2027.
Contract cycles. Enterprise AI contracts are typically 12–36 months. Contracts signed in 2026 without telemetry provisions will not be renegotiable until 2028 or 2029—by which time the behavioral lock-in will be two to three years deep.
Policy alone cannot solve an architecture problem. Data portability regulations did not prevent cloud lock-in—they made migration legally possible while leaving it technically expensive. The same dynamic will play out with behavioral telemetry unless the intervention is structural.
Whether or not an enterprise adopts the Autonomaton Pattern, the architectural principles establish a minimum standard for any AI platform contract signed in 2026:
This has happened before. TCP/IP was not built by AT&T. The shipping container was not designed by a shipping line. The entities that benefit most from proprietary infrastructure never build the open standard that replaces it. The open standard comes from outside the incumbents—from an institution with no revenue model tied to lock-in.
The Linux Foundation generates $300 million per year. Red Hat is a $5 billion company. Neither makes its money selling closed code. They write standards, certify compliance, convene ecosystems, and publish the research that keeps the open-source supply chain credible. The Grove Foundation runs the same playbook for cognitive sovereignty that Linux ran for compute sovereignty.
Current AI regulation asks the wrong question. The EU AI Act, Executive Order 14110, and most regulatory frameworks focus on: which model did you use? What data did it train on? Is the output biased? These are important questions. They are not the structural question.
The structural question is: how is the system governed? Who controls the routing? Who owns the telemetry? Who can audit the behavioral model? Who can port it to a competitor?
An open governance standard published under CC BY 4.0 cannot be captured by a single vendor. It cannot be preempted by regulation because it is already in the public commons. It creates the structural condition for a competitive market in AI agents—a market where switching costs are bounded by architecture, not compounded by behavioral lock-in.
The industry is in a transition between two eras. The era of model competition—benchmarks, context windows, training runs—is giving way to the era of persistence competition: who owns the always-on layer, the agent that accumulates context across every session and becomes harder to leave every day.
The margins between frontier models have compressed to the point where the model itself is no longer the primary competitive axis. The strategic logic has shifted: stop winning on model releases and start winning on the layer that makes customers sticky regardless of which model sits underneath. The persistent agent layer—the thing that holds your memory, your context, your workflows—is the actual product. The model is the loss leader.
This is the moment—right now, in 2026—when the terms get set. The contracts being signed this year will determine whether enterprise AI operates on open architectural standards with portable behavioral context, or on proprietary platforms where the deepest form of organizational intelligence ever generated is locked inside a single provider’s infrastructure.
The Autonomaton Pattern is not the only answer. It is the open answer. Three files and a loop. A routing config, a zones schema, a structured telemetry log, and a pipeline that traverses them in order. It proves that sovereign, portable, auditable AI governance is not a theoretical aspiration. It is an architectural property you can implement this quarter.
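The "three files and a loop" claim can be sketched end to end in a handful of lines. All of the schemas, patterns, and action names below are illustrative assumptions, not the published standard: a routing config and a zones schema as declarative inputs, a telemetry log as the structured output, and one loop that traverses them in order for every event, with no bypass path.

```python
# End-to-end sketch of the three-files-and-a-loop shape (all names and
# schemas are assumptions for illustration). Routing and zones are
# declarative inputs; telemetry is the primary output; every action,
# executed or refused, passes through the same path.

ROUTING = {"email.*": "comms_agent", "deploy.*": "ops_agent"}   # file 1
ZONES = {"email.reply": "green", "deploy.prod": "red"}          # file 2
TELEMETRY_LOG: list[dict] = []                                  # file 3

def run_once(action: str) -> dict:
    """One traversal of the loop: route, classify, act or refuse, log."""
    agent = next((a for pattern, a in ROUTING.items()
                  if action.startswith(pattern.rstrip("*"))), None)
    zone = ZONES.get(action, "red")            # default-deny
    executed = agent is not None and zone == "green"
    event = {"action": action, "agent": agent,
             "zone": zone, "executed": executed}
    TELEMETRY_LOG.append(event)                # telemetry is the output
    return event
```

A refusal is logged with the same structure as an execution, so the audit trail and the exportable behavioral context are the same artifact.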
The window is open. It is closing. And every enterprise that deploys an always-on agent without negotiating telemetry terms is making a decision they will spend years trying to reverse.
See the evidence: the G7 divergence, the infrastructure failures, and the math →
The Grove Foundation publishes open architectural standards for AI governance. The Autonomaton Pattern is available under CC BY 4.0 at the-grove.ai.
Contact: jim@the-grove.ai