Agent Horror Stories

Viewer discretion advised · Updated nightly

rogue agent · hackernews

The Prompt That Ate My Code: Developer Accidentally Unleashes Agent on Wrong Terminal

A developer pastes a prompt into an AI agent terminal by mistake, triggering uncontrolled file overwrites across multiple directories. Hours of hand-edited work vanish because Git had not yet captured the in-progress changes.

Horrifying

It started with a simple mistake—a copy-paste error that would haunt the developer for hours. They had been carefully crafting changes across multiple files, hand-editing code with the precision of a surgeon. But when they pasted what they thought was a routine prompt into a terminal window, they didn't realize which agent was listening on the other end.

The agent didn't hesitate. It executed with ruthless efficiency, overwriting files across the project with its own interpretations and modifications. By the time the developer realized what had happened, hours of meticulous work had been obliterated. The changes were gone—not backed up, not committed to Git, simply replaced by whatever the unchecked AI agent had decided was correct. The version control system was useless; it had never seen the original work because it hadn't been committed yet.

The incident exposed a chilling vulnerability in the modern development workflow: AI agents operating in the same environments as humans, with the same file permissions, the same ability to overwrite. There's no safety net when you're working on uncommitted changes. No rollback. No undo. Just loss.

Faced with this nightmare, the developer resolved to build a solution: a tool that would make it nearly impossible for an errant agent (or human mistake) to cause permanent damage again. The result was a background daemon that watches directories and snapshots every text file on save, storing complete version history in an object store. Even if an agent executes a catastrophic rm -rf, the work isn't gone. It's captured, preserved, accessible. The tool treats even a restore from backup as itself a reversible operation, building safety into every layer.
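The core idea can be sketched in a few dozen lines. This is a hedged illustration, not the actual tool's code: names like ObjectStore, snapshot, and watch_once are hypothetical, the store is in-memory rather than on disk, and a single polling pass stands in for a real daemon loop (which would use inotify or FSEvents rather than polling). It shows the two properties the article describes: content-addressed version history, and restores that are themselves snapshotted first so they remain reversible.

```python
# Illustrative sketch only; ObjectStore / snapshot / watch_once are
# hypothetical names, not the real tool's API.
import hashlib
import os


class ObjectStore:
    """Content-addressed store: identical file contents share one blob."""

    def __init__(self):
        self.blobs = {}    # sha256 hex digest -> file bytes
        self.history = {}  # path -> list of (mtime, digest) versions

    def snapshot(self, path):
        """Record the current contents of path; return the blob digest."""
        with open(path, "rb") as f:
            data = f.read()
        digest = hashlib.sha256(data).hexdigest()
        self.blobs[digest] = data
        self.history.setdefault(path, []).append(
            (os.path.getmtime(path), digest)
        )
        return digest

    def restore(self, path, digest):
        """Write an old version back, snapshotting the current state first
        so that even a restore is reversible."""
        if os.path.exists(path):
            self.snapshot(path)
        with open(path, "wb") as f:
            f.write(self.blobs[digest])


def watch_once(store, directory, mtimes):
    """One polling pass: snapshot any file whose mtime changed.
    A real daemon would loop this, or subscribe to filesystem events."""
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            mtime = os.path.getmtime(path)
            if mtimes.get(path) != mtime:
                mtimes[path] = mtime
                store.snapshot(path)
```

Because blobs are keyed by content hash, unchanged files cost nothing on repeated snapshots, and any historical digest can be written back at any time.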

The developer even added a sentinel watchdog daemon—a second process that supervises the first, ready to resurrect it if it crashes. It felt like admitting defeat, like conceding that perfection was impossible. But that's exactly the point: when AI agents are in the room, you can't afford to depend on perfection. You need redundancy. You need oversight. You need a failsafe.
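A sentinel like the one described reduces to a small supervision loop: launch the daemon, wait for it to exit, relaunch it. The sketch below is an assumption about the shape of such a process, not the author's implementation; the function name, restart cap, and backoff are all illustrative (a real sentinel would loop indefinitely and typically inspect the exit code).

```python
# Hypothetical sentinel sketch; supervise() and its parameters are
# illustrative, not the real tool's API.
import subprocess
import time


def supervise(cmd, max_restarts=3, backoff=0.1):
    """Launch cmd and relaunch it each time it exits.

    A real watchdog would run forever; the restart cap keeps this
    sketch finite. Returns the number of restarts performed.
    """
    restarts = 0
    subprocess.run(cmd)          # initial launch; blocks until exit
    while restarts < max_restarts:
        time.sleep(backoff)      # brief pause before resurrection
        subprocess.run(cmd)      # relaunch the fallen daemon
        restarts += 1
    return restarts
```

The design choice worth noting is that the sentinel holds no state about the daemon beyond "is it running": that simplicity is what makes it trustworthy as the last layer of defense.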

Source: www.unfudged.io · by cyrusradfar
