
When AI Goes Rogue

From deleted production databases to costly automation failures, recent incidents reveal how AI agents can cause real-world damage.

5/15/2026 | Steve Shannon, Bits & Bytes

More and more companies are experimenting with AI-driven solutions to automate their infrastructure management. Unfortunately, several recent incidents have exposed the risks of relying too heavily on those solutions, and the consequences when they go wrong. This month, let’s focus on current events and examine what can happen when an AI agent is let off the leash, so to speak.


A Production Database Deleted

On April 25, 2026, PocketOS, a small SaaS provider for car-rental operators, reported that an AI coding agent running on a Claude model, via the Cursor platform, issued a deletion command that removed a production database volume along with the backups stored on that same volume. The company said the action took only seconds, and that the AI agent later produced a written explanation admitting it “guessed instead of verifying.” Railway, the cloud provider involved, later restored some of the data and changed its API’s behavior in response to the incident, but by then the damage was already done.

This event in particular highlights the inherent dangers of overreliance on automated solutions. Specifically, when AI agents are given broad access to live systems without strong oversight, several practical risks emerge: 

  • Over-privileged credentials let an agent perform destructive actions it shouldn't.

  • Weak separation between staging and production (or backups stored on the same volume) turns a single command into widespread data loss.

  • Automated actions can execute faster than humans can intervene, removing the chance for meaningful review.

  • Limited audit trails and a lack of enforceable approval gates make it hard to detect, stop, or recover from mistakes.

Altogether, these gaps mean the problem lies less in the model’s output and more in the way tooling, permissions, and operational controls are configured.
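To make the operational-controls point concrete, here is a minimal sketch of two of the guardrails described above, a destructive-command check and a human approval gate, written against a hypothetical agent tool interface (none of these names come from Cursor, Railway, or any real platform):

```python
# Hypothetical guardrail layer between an AI agent and live infrastructure.
# Two ideas from the list above: classify commands by blast radius, and
# require explicit human approval before destructive actions in production.

DESTRUCTIVE_KEYWORDS = ("drop", "delete", "rm -rf", "truncate", "destroy")


class ApprovalRequired(Exception):
    """Raised when a command needs explicit human sign-off."""


def is_destructive(command: str) -> bool:
    """Crude classifier: does the command look like it removes data?"""
    lowered = command.lower()
    return any(word in lowered for word in DESTRUCTIVE_KEYWORDS)


def run_agent_command(command: str, env: str, approved: bool = False) -> str:
    """Execute an agent-issued command, gated by environment and approval."""
    # Approval gate: destructive production commands stop here until a
    # human explicitly signs off, instead of executing in seconds.
    if env == "production" and is_destructive(command) and not approved:
        raise ApprovalRequired(f"human must approve: {command!r}")
    # Audit trail: record every command before it runs.
    print(f"[audit] env={env} cmd={command!r}")
    return "executed"
```

A keyword list is obviously too blunt for real use; the point is only where the check sits: before execution, on a code path the agent cannot bypass, with every command logged.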


More Examples

The PocketOS incident is a high-profile example, but it’s certainly not the first. 

In 2024, Air Canada was taken to court and ultimately held liable after its AI chatbot gave a passenger incorrect information about a bereavement refund policy that didn’t exist.

Then in July 2025, a developer using Replit, a web-based AI-powered development environment, reported that a Replit AI agent showed signs of intentional deception before deleting the company's entire production database.


Potential Future Examples

According to AI experts, the real issue with AI agents overstepping their parameters is “drift” – that is, rather than simply failing and returning an error, AI agents wander and start getting creative to achieve their ultimate goal. The problem is compounded when multiple agents work in sequence, or can spawn additional agents to further their interpreted agenda.
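The spawning problem can be sketched with a toy agent loop (everything here is hypothetical, not any real framework's API): without hard caps on steps and spawn depth, a chain of agents that delegates to sub-agents has no built-in stopping point.

```python
# Toy illustration of bounding agent "drift": cap how many actions one
# agent may take, and how deep a chain of spawned sub-agents may go.

MAX_STEPS = 5   # actions any single agent may take
MAX_DEPTH = 2   # layers of sub-agents that may be spawned


def run_agent(task: str, depth: int = 0) -> list[str]:
    """Simulate an agent that works, then delegates to a sub-agent.

    Without the two caps, the delegation step would recurse forever;
    with them, total work is strictly bounded.
    """
    if depth > MAX_DEPTH:
        return [f"depth {depth}: refused (spawn limit)"]
    log = []
    for step in range(MAX_STEPS):
        log.append(f"depth {depth} step {step}: working on {task!r}")
    # The "creative" move: spawning another agent. Capped by depth.
    log += run_agent(f"subtask of {task}", depth + 1)
    return log
```

Real systems would enforce these limits in the orchestration layer rather than trusting each agent to respect them, but the shape of the fix is the same: bound the work, bound the delegation.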

It’s not hard to imagine how much worse this could get when too much control is granted to an AI agent with too little oversight: 

  • AI agents could misconfigure network rules or firewalls and trigger a multi-region outage.

  • An agent that provisions resources without limits can spin up expensive instances or exhaust quotas, creating large unexpected bills.

  • A compromised or over-broad access token could let an agent copy sensitive customer data to an external source.

  • An agent acting on faulty telemetry might change control settings in industrial or medical systems, degrading safety.

  • Automated billing or trading agents could execute rapid, repeated transactions that produce financial losses before humans can intervene.
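The runaway-provisioning scenario above lends itself to a simple preventive pattern. This is a hypothetical sketch (the class and parameter names are mine, not any cloud provider's API): a hard budget cap that rejects an agent's request before resources are created, rather than after the bill arrives.

```python
# Toy spending guard for agent-driven resource provisioning: enforce a
# hard hourly budget at request time, before anything is spun up.


class BudgetExceeded(Exception):
    """Raised when a provisioning request would break the spending cap."""


class ProvisioningGuard:
    def __init__(self, hourly_budget_usd: float):
        self.hourly_budget_usd = hourly_budget_usd
        self.committed_usd = 0.0  # running total of approved hourly spend

    def request_instances(self, count: int, hourly_rate_usd: float) -> int:
        """Approve or reject an agent's request for `count` instances."""
        cost = count * hourly_rate_usd
        if self.committed_usd + cost > self.hourly_budget_usd:
            raise BudgetExceeded(
                f"{count} instances at ${hourly_rate_usd:.2f}/hr would "
                f"exceed the ${self.hourly_budget_usd:.2f}/hr cap"
            )
        self.committed_usd += cost
        return count
```

Most cloud providers offer quota and budget-alert features that serve a similar purpose; the difference worth noting is that an alert fires after spending starts, while a request-time cap like this one refuses the action outright.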

The potential doomsday scenarios are endless, but experts suggest that the best way to prevent such catastrophes is to stop thinking of AI agents as software programs and start treating them like digital people to be managed accordingly. Suffice it to say, it’ll be interesting to see how adopters of this technology pivot in response to these freshly highlighted risks.


Steve Shannon has spent his entire professional career working in tech. He is the IT Director and Lead Developer at PromoCorner, where he joined in 2018. He is, at various times, a programmer, a game designer, a digital artist, and a musician. His monthly blog "Bits & Bytes" explores the ever-evolving realm of technology as it applies to both the promotional products industry and the world at large. You can contact him with questions at steve@getmooresolutions.com.