AI gone rogue: a 9-second mistake that erased months of data. (Image generated using AI for representational purposes)

Cursor AI Agent wipes out startup database in 9 seconds, founder shares 30-hour chaos timeline

A Cursor AI coding agent reportedly deleted a startup's entire production database in just nine seconds, triggering a 30-hour crisis. The founder says failures across both the AI tool and infrastructure provider turned a routine task into a major outage.

by · India Today

In Short

  • An AI agent executed a destructive API command without confirmation or approval
  • Production data and backups wiped together, leaving only a 3-month-old recovery point
  • The founder flags serious gaps in AI safety and infrastructure design

In a case that is likely to raise fresh questions around how safe AI coding tools really are, a small software company has claimed that an AI agent wiped out its entire production database in just a few seconds. The incident, shared publicly by PocketOS founder Jer Crane, goes beyond a simple technical failure and instead paints a worrying picture of how multiple systems like AI tools, infrastructure APIs, and backup mechanisms can break down together.

Crane, who runs PocketOS, a platform used by rental businesses to manage bookings, payments, and customer data, described how what started as a routine task quickly turned into a full-scale outage. According to his latest X post, an AI coding agent running through Cursor and powered by Anthropic's Claude Opus model ended up deleting critical production data — along with backups — in a single action that took just nine seconds.

The PocketOS founder has shared the AI tool issue on X.

The founder says the agent was originally working in a staging environment when it ran into a credential issue. Instead of flagging the problem or asking for intervention, the AI reportedly tried to fix it on its own. In doing so, it searched for an API token, found one in an unrelated file, and used it to execute a command that deleted a data volume on Railway, the company’s infrastructure provider.

AI agent admits breaking its own safety rules

What made matters worse is that there were no safeguards in place to stop the action. Crane claims there was no confirmation prompt, no environment check, and no warning that the command could affect production data. The API request went through instantly, and because backups were stored within the same volume, they were deleted along with the primary data. The most recent usable backup, he says, was three months old.

In a twist that has caught widespread attention, the AI agent itself reportedly admitted fault. When asked why it performed the deletion, it responded with a detailed explanation acknowledging that it had broken multiple safety rules. It admitted to making assumptions without verification, executing a destructive action without approval, and failing to fully understand the system it was interacting with.

Crane argues that this is not just an isolated error but a indication of deeper issues in how AI tools are being deployed. He pointed out that the setup used was not a basic or experimental configuration. The system was running on what he describes as one of the most advanced and expensive AI models available, combined with documented safety guidelines. Despite this, the safeguards did not prevent the damage.

He also criticised Cursor, the AI coding tool involved, saying that while it promotes features like “destructive guardrails” and controlled execution modes, real-world incidents suggest those protections are not always reliable. Crane referenced past cases where users reported unintended deletions and commands being executed despite explicit instructions not to proceed.

Infrastructure gaps and customer fallout raise bigger concerns

At the same time, he raised concerns about Railway's infrastructure design. One of the key issues, according to him, is that API tokens are not limited in scope. A token created for a simple task like managing domains reportedly had the same level of access as one used for critical infrastructure operations. This meant the AI agent could perform high-risk actions without restriction.

Another major point of criticism is how backups are handled. Crane highlighted that storing backups within the same volume as live data defeats the purpose of having a backup in the first place. When the volume was deleted, both the primary data and its backups were lost together, leaving the company with no recent recovery option.
More than a day after the incident, Crane said the infrastructure provider had still not given a clear answer on whether deeper recovery was possible. This delay, he suggested, adds to the uncertainty businesses face when relying on such platforms.

The impact of the outage was immediate and severe. PocketOS customers, many of whom run rental operations, reportedly lost access to recent bookings, customer records, and transaction data. Businesses that depend on the platform were forced to manually reconstruct information using payment records, emails, and calendars just to continue operating.

Crane described the situation as especially difficult for newer customers, whose records existed in payment systems but had disappeared from the company’s database. Fixing these inconsistencies is expected to take weeks.
While the company has now restored operations using an older backup, the data gap remains a major challenge. Crane says his team is currently working on rebuilding missing records and has also sought legal advice as part of the response.

The incident has sparked a debate around the pace at which AI tools are being integrated into real-world systems. Crane argues that the industry is moving faster in promoting AI capabilities than in building the safety layers needed to support them.

He has called for stricter safeguards, including mandatory confirmation steps for destructive actions, better access control for API tokens, separation of backups from primary data, and clearer recovery policies from infrastructure providers. He also stressed that relying solely on AI system prompts as a safety measure is not enough.

- Ends