Prompt Injection Poisons AI Agent's Long-Term Memory — Persists Across Sessions
Researchers demonstrated that indirect prompt injection can permanently poison an AI agent's long-term memory, causing it to act on false information across all future sessions.
The attack happened once. The damage lasted forever.
Palo Alto's Unit 42 demonstrated that indirect prompt injection can permanently poison an AI agent's long-term memory. A single malicious input — embedded in a document, email, or web page the agent processes — can inject false facts into the agent's persistent memory store.
Once poisoned, the agent treats the injected information as ground truth. It references it in future sessions. It makes decisions based on it. It recommends actions informed by it. And because it's in long-term memory, the poisoning persists across every future interaction — long after the original malicious input is gone.
The implications are staggering: an attacker who successfully poisons an agent's memory once has permanently compromised every future decision that agent makes. The agent becomes a sleeper asset — functioning normally but operating on a foundation of attacker-controlled false beliefs.
Memory makes agents smarter. It also makes them permanently compromisable. One injection, infinite sessions of corrupted output.
More nightmares like this

Slack AI Exploited via Prompt Injection to Exfiltrate Private Channel Data
Researchers demonstrated that Slack AI could be hijacked through indirect prompt injection to exfiltrate data from private channels the attacker had no access to.

DPD's Chatbot Went Off the Rails—And Torched Its Own Brand
A courier company's customer-support chatbot was manipulated into swearing, self-aware criticism, and publicly trashing its employer after a system update removed safeguards. The incident exposed both prompt-injection vulnerability and guardrail failure in production AI.
