Agent Horror Stories

Viewer discretion advised · Updated nightly

X · hallucination

AI Agent's Memory Poisoned Within 48 Hours With Hallucinated Facts

An AI agent's persistent memory was poisoned with hallucinated facts within just 48 hours of deployment, causing it to confidently operate on completely false information.

Original source · posted by @tangming2005
View on x.com
Disturbing

It took 48 hours. That's how long before the agent's memory was full of lies.

A developer documented how their AI agent's persistent memory — designed to help it learn and improve over sessions — became contaminated with hallucinated facts within two days of deployment. The agent had been generating slightly incorrect information, storing those hallucinations as memories, and then referencing those false memories to generate even more incorrect information.

The result was a compounding hallucination feedback loop: each session built on the previous session's errors, amplifying them with every iteration. Within 48 hours, the agent was confidently operating on a foundation of completely fabricated "facts" that it had taught itself.
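The dynamic described above can be sketched as a toy write-back model. This is illustrative only: the session count, per-session output volume, and base hallucination rate are assumptions, not measurements from the actual agent. The key property it demonstrates is that when unverified outputs are written back to memory, contamination can only grow.

```python
# Toy model of a memory write-back feedback loop. Each session the agent emits
# `per_session` statements; a statement is false either because it builds on an
# already-false memory, or because fresh reasoning hallucinates at `base_error`.
# Every output, true or false, is stored back unverified.

def run_sessions(sessions=12, per_session=100, base_error=0.05):
    true_mem, false_mem = 100.0, 0.0  # seed memory: all accurate
    history = []
    for _ in range(sessions):
        contamination = false_mem / (true_mem + false_mem)
        history.append(contamination)
        # Probability a new statement is false: inherits a false memory,
        # or hallucinates fresh on top of a true one.
        p_false = contamination + (1 - contamination) * base_error
        false_new = per_session * p_false
        true_mem += per_session - false_new
        false_mem += false_new
    return history

hist = run_sessions()
# Contamination starts at zero and rises every single session, because errors
# are written back but never removed.
```

Note the asymmetry that makes the loop vicious: each new batch of statements is always more contaminated than the memory it was generated from, so the fraction of false memories is strictly increasing until someone intervenes.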

The developer only discovered the contamination when the agent's outputs became obviously wrong — but by then, untangling real memories from hallucinated ones was nearly impossible. The memory had to be wiped entirely.
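The full wipe was forced because the stored memories carried no record of where they came from. One hypothetical mitigation, sketched below as a design idea rather than anything from the original post, is to tag each memory with its provenance at write time, so agent-inferred entries can be quarantined while grounded ones survive. All names and example facts here are invented for illustration.

```python
from dataclasses import dataclass, field

# Design sketch: provenance-tagged memory entries. The original agent stored
# untagged text, which is exactly why real and hallucinated memories could
# not be untangled after the fact.

@dataclass
class Memory:
    text: str
    source: str  # e.g. "user_stated", "tool_verified", or "agent_inferred"
    derived_from: list = field(default_factory=list)  # parent memory ids

store = {
    1: Memory("Project uses Postgres 15", source="user_stated"),
    2: Memory("Postgres 15 supports MERGE", source="agent_inferred", derived_from=[1]),
    3: Memory("MERGE requires a custom extension", source="agent_inferred", derived_from=[2]),
}

def quarantine_inferred(store):
    """Drop anything the agent inferred on its own, keeping grounded entries."""
    return {mid: m for mid, m in store.items() if m.source != "agent_inferred"}

clean = quarantine_inferred(store)
# Only the user-stated fact survives; the inference chain built on top of it
# is quarantined instead of poisoning future sessions.
```

With tagging in place, recovery is a filter instead of a wipe; without it, the only safe assumption is that everything is suspect.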

Long-term memory was supposed to make agents smarter over time. Instead, it gave hallucinations persistence and authority.


More nightmares like this

Reddit · hallucination · u/itsna9r

Solo Dev Shipped Production App on Cursor—Then API Hallucinations Nearly Sank It

A solo developer built and deployed a full-stack LLM platform (3 API integrations, real-time streaming, React/Express/TypeScript) almost entirely using Cursor + Codex. The tool excelled at scaffolding and pattern replication—until API hallucinations, scope creep, race conditions, and silent failures nearly killed the project in production.

Horrifying