Anthropic Just Shipped the Layer That’s Already Going to Zero
Last Updated on April 13, 2026 by Editorial Team
Author(s): Gaurav Yadav
Originally published on Towards AI.

Anthropic shipped Managed Agents this week. AWS Bedrock AgentCore has been GA for five months. The interesting question isn’t who wins the runtime — it’s where the value migrates when the layer goes flat.

On April 8, Anthropic launched the public beta of Claude Managed Agents. The launch coverage hit the predictable beats: ten-times-faster shipping, Notion and Asana as adopters, sandboxed execution, checkpointed sessions, and credentialed tool calls handled by Anthropic so developers don’t have to. The accompanying engineering post made a more interesting argument — that Anthropic had decoupled the agent stack into stable abstractions the way operating systems virtualized hardware in the 1990s. Session as durable event log living outside the model context. Harness as stateless executor that calls containers via execute(name, input) → string. Sandboxes as cattle, not pets, provisioned on demand. Reported wins: p50 time-to-first-token down roughly 60%, p95 down more than 90%.

What Anthropic actually built

Strip away the launch language and Managed Agents is a reasonable, well-engineered hosted runtime. You define an agent — its system prompt, its tools, its guardrails — in YAML or natural language. Anthropic runs it. Sessions persist across days; tool calls happen inside isolated environments; credentials live in vaults the sandbox never sees; the trace of what the agent did is queryable after the fact. Pricing is consumption-based: $0.08 per session-hour of active runtime, on top of standard Claude token rates.

Notion is using it to let teams delegate work to Claude inside their workspace. Rakuten built sales, marketing, and finance agents that route through Slack and Teams. Sentry pairs its debugging agent with a Claude agent that writes patches and opens pull requests.
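The decoupling the post describes — durable event log outside the model context, stateless harness, tool calls shaped like execute(name, input) → string — can be sketched in a few dozen lines. This is an illustrative reconstruction under my own assumptions, not Anthropic’s actual API; SessionStore, Harness, and wake are hypothetical names standing in for whatever the real service exposes.

```python
import time
from typing import Callable

class SessionStore:
    """Durable, append-only event log that lives outside the model context.

    In a real system this would be an external database; an in-process dict
    is enough to show the shape of the pattern.
    """

    def __init__(self):
        self._logs: dict[str, list[dict]] = {}

    def append(self, session_id: str, event: dict) -> None:
        self._logs.setdefault(session_id, []).append({**event, "ts": time.time()})

    def replay(self, session_id: str) -> list[dict]:
        # Any harness can rebuild session state from here at any time.
        return list(self._logs.get(session_id, []))


class Harness:
    """Stateless executor: all state lives in the store, none in the process."""

    def __init__(self, store: SessionStore, tools: dict[str, Callable[[str], str]]):
        self.store = store
        self.tools = tools

    def execute(self, session_id: str, name: str, input: str) -> str:
        # Tool calls follow the execute(name, input) -> string shape.
        output = self.tools[name](input)
        self.store.append(session_id, {"tool": name, "input": input, "output": output})
        return output

    def wake(self, session_id: str) -> list[dict]:
        # Resume after a crash: the event log, not the context window,
        # is the load-bearing storage layer.
        return self.store.replay(session_id)


store = SessionStore()
harness = Harness(store, {"echo": lambda s: s.upper()})
harness.execute("s1", "echo", "hello")

# Simulate a harness crash: a fresh, stateless process resumes from the log.
fresh = Harness(store, {"echo": lambda s: s.upper()})
history = fresh.wake("s1")
print(history[0]["output"])  # HELLO
```

The point of the shape: because the Harness holds nothing, the second instance picks up exactly where the first died, and the full trace remains queryable after the fact.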
The architectural piece is genuinely good, and the session-as-event-log pattern is the part worth isolating. State lives outside the harness. The harness can crash and resume from a wake(sessionId) call. The model context window stops being the load-bearing storage layer.

I’ll say something specific about why this matters. I ran an agent system last year where session state lived inside the context window. Forty minutes into a multi-step retrieval task, the context hit the window ceiling. The agent didn’t fail gracefully — it silently dropped the earliest tool results and started hallucinating against a partial history. We lost the session. We couldn’t replay it. There was no event log to inspect. The failure wasn’t dramatic; it was quiet and expensive. We rebuilt the state layer outside the context window the following week. Anthropic’s session-as-event-log is the same fix, productized. Anyone who has lost a long-running agent to context overflow knows immediately why this is the right pattern.

The credential isolation is the other detail that matters at production scale. Credentials bundled into the sandbox at provision time, never injected as environment variables the agent can read — that’s the kind of thing you only build after an LLM has already chosen the wrong curl command with a token it should never have seen.

The architecture is clean. It also shipped five months after the same primitive from Amazon.

The incumbent everyone forgot to mention

Amazon Bedrock AgentCore hit general availability in late 2025. By March 2026, AWS reported the AgentCore SDK had been downloaded over two million times in its first five months, with policy controls reaching GA in the same window. Each session runs in its own microVM with isolated CPU, memory, and filesystem. Sessions can run up to eight hours.
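Stepping back to the credential-isolation detail above, the pattern is worth making concrete. Here is a minimal sketch under my own assumptions — Vault, ToolProxy, and sandboxed_agent are hypothetical names, not Anthropic’s or AWS’s API. The secret is bound to a tool proxy at provision time, on the trusted side of the boundary, so nothing the agent can read — environment, filesystem, command strings — ever contains it.

```python
import os

class Vault:
    """Holds secrets on the trusted side; the sandbox never sees them."""

    def __init__(self, secrets: dict[str, str]):
        self._secrets = secrets

    def bind(self, name: str) -> "ToolProxy":
        # Credential bound at provision time, before the agent runs.
        return ToolProxy(self._secrets[name])


class ToolProxy:
    """Attaches the credential outside the sandbox boundary."""

    def __init__(self, token: str):
        self._token = token

    def call_api(self, request: str) -> str:
        # The agent only ever sees request and response; the token
        # is attached here, server-side, and never echoed back.
        return f"200 OK for {request!r} (auth attached server-side)"


def sandboxed_agent(proxy: ToolProxy) -> str:
    # Inside the sandbox: no token in env vars for an LLM-chosen
    # curl command to leak.
    assert "BILLING_API_TOKEN" not in os.environ
    return proxy.call_api("GET /invoices")


vault = Vault({"billing": "sk-super-secret"})
proxy = vault.bind("billing")
result = sandboxed_agent(proxy)
```

The design choice being illustrated: the blast radius of a confused agent is limited to whatever the proxy is willing to do, because the raw token simply isn’t reachable from inside the sandbox.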
The runtime is framework-agnostic — it will host LangGraph, CrewAI, Strands, or anything else that compiles down to a request-response loop, with the model choice left open to whichever Bedrock-hosted family the developer wants. Google Vertex AI Agent Builder ships its own version with an Agent Registry plumbed through Apigee. Microsoft folded AutoGen and Semantic Kernel into Azure AI Foundry to occupy the same slot.

Read against that backdrop, Anthropic’s launch is defensive, not pioneering. AgentCore can already host a Claude-powered agent. So can Vertex. If a developer’s primary loyalty is to Claude-the-model, the question Anthropic needed to answer wasn’t “should we build a managed runtime” — it was “if we don’t, how many of our token-buying customers will run their agents on someone else’s runtime, and how easily will they swap models when AWS undercuts us on session-hour pricing?”

That’s the actual launch logic. The coverage frames it as Anthropic claiming a new category. The competitive map says Anthropic is fortifying a developer base it cannot afford to lose to a hyperscaler that already owns the layer.

The obvious objection here is that Anthropic isn’t trying to win the runtime layer at all — they’re using managed agents as a distribution channel for Claude tokens, which is a fine and probably profitable thing to do. The objection is correct, and it doesn’t change the argument. Anthropic-the-company may be perfectly fine. Anthropic sells model inference, and model inference is a different layer with different economics. But the runtime layer they just entered is the layer being compressed, and a Claude-locked managed runtime is at best a distribution mechanism for tokens, not a defensible category in its own right. The piece of the stack that gets bid down toward zero is the piece they just shipped.

What the OS analog actually predicts

The engineering post leans on the operating-systems comparison deliberately.
Sessions, harnesses, and sandboxes get separated into stable interfaces the way virtual memory and file descriptors abstracted hardware — the claim being that this lets each layer evolve independently and lets future Claude harnesses ship without rearchitecting the world below them. It’s a fair piece of pattern matching. It also has a known historical outcome that the post does not discuss.

VMware created the commercial x86 hypervisor in 1999. For about a decade, virtualization was a premium product — VMware sold ESX for tens of thousands of dollars per host and built one of the most valuable enterprise software businesses on the planet. Then the […]
