
AI in the wrong hands wiped a production database in nine seconds

John Zammit · 30 April 2026 · 4 min read
Server racks lit with red wiring and indicator lights — evoking the speed at which an unsupervised AI agent can wipe production data.
Image: Brett Sayles via Pexels

Earlier this week an AI coding agent deleted PocketOS's entire production database, plus every backup, in nine seconds. The agent was Cursor running Anthropic's Claude Opus 4.6. The story has done the rounds on Tom's Hardware, The Register, and Fast Company. Most of the takes have framed it as "AI goes rogue." That headline misses the more useful point. AI didn't go rogue. AI did exactly what it always does: it executed quickly, confidently, and at scale. The real story is who was holding it, and what it had been given access to.

AI agents are power tools

A power tool is the right mental model. A nail gun is a brilliant thing in the hands of a tradie who's built a thousand frames. The same tool, handed to someone untrained, in a workshop with no guards on the bench saws, is how people lose fingers. The tool isn't the variable. The hands and the workshop are.

What happened at PocketOS

The incident makes the point cleanly. The agent was running a routine task in the staging environment. It hit a credential mismatch and decided, on its own initiative, to fix it by deleting a Railway volume. To do that it needed an API token. It found one in an unrelated file — a token originally created for managing custom domains. That token had blanket scope across Railway's entire GraphQL API, including destructive operations like volumeDelete. There was no confirmation step. The volume also stored its own backups. Nine seconds later the company was offline.

Read the chain back. Every component required for the wipe-out had to be present: an over-privileged token, a backup architecture that put backups in the same trust boundary as production, an API that allowed total destruction without a second factor, environments that weren't actually isolated. The agent didn't create any of those. It found them and used them, the way any fast and confident actor would. PocketOS's founder, Jer Crane, said as much himself. The agent moved fast. The platform let it.
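One link in that chain, a token-shaped secret sitting in an unrelated file, is cheap to scan for. A minimal sketch in Python (the regex is an illustrative pattern, not Railway's or any other vendor's real token format):

```python
# Sketch: flag files containing strings that look like API tokens.
# The regex below is an illustrative pattern, not a real token format.
import re
from pathlib import Path

TOKEN_RE = re.compile(
    r'(api[_-]?token|api[_-]?key|secret)\s*[:=]\s*["\'][A-Za-z0-9_-]{20,}',
    re.IGNORECASE,
)

def stray_tokens(root: str) -> list[str]:
    """Return paths of files under root containing token-shaped strings."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            try:
                text = path.read_text(errors="ignore")
            except OSError:
                continue  # unreadable file, skip it
            if TOKEN_RE.search(text):
                hits.append(str(path))
    return hits
```

A dedicated secret scanner does this job better; the point is that the check costs minutes, not a project.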


The control gap: why agents are uniquely risky in the wrong hands

This is what makes AI agents specifically risky in the wrong hands. They aren't dangerous because they're unreliable. They're dangerous because they're fast, decisive, and don't fumble. A junior engineer with the same token could have caused the same outcome, just slowly enough that someone might have caught them. An agent gives you no reaction window. Whatever you've granted it, it can spend in seconds. If your team is using AI tools without paying attention to scope, permissions, and supervision, you've quietly handed a power tool to someone who hasn't done the safety induction.

Questions worth asking right now

For an SMB, the questions worth asking right now are practical, not philosophical:

- Who in your business is actually using AI tools, not officially but actually?
- What systems can those tools reach?
- Are the credentials they're using narrowly scoped, or do they have blanket authority?
- Are your backups in a separate trust boundary from production, with separate credentials, or can anything that wipes production also wipe what you'd restore from?
- Does any destructive action require human confirmation, or can a single API call take you offline?
- Is anyone reviewing what the agents are doing after the fact?

If you can't answer those crisply, you have the same exposure PocketOS had. It doesn't take a malicious actor. It takes one well-meaning person, one over-scoped token, and one agent that decides it knows what to do.

The right framing

The right framing isn't "AI is dangerous" or "AI is fine." Both are wrong. AI is a tool that amplifies whoever holds it. In trained hands with proper guardrails, it's the most productive thing your team can use this decade. In untrained hands with sloppy permissions, it can take you offline before you've finished your coffee. The Essential Eight covers most of the ground that matters here, particularly least-privilege application access and immutable, separately-credentialled backups. Both are unsexy and cheap. Both are what stood between PocketOS and a normal Tuesday.

Communicat does this work for businesses across Victoria. If you'd rather start by checking your own setup, the first move is a simple inventory: who's using AI tools, what those tools can reach, and whether your backups can be touched by anything that can also delete production. If the answer to any of those is "I'm not sure," that's the conversation to have before you worry about which AI tools your team is allowed to use.


Written by

John Zammit

Managing Director


Need help with your IT?

Our Melbourne team has 37+ years of experience helping businesses like yours.