Email Security

Hidden Instructions in Emails Are Now Compromising AI Tools

April 30, 20264 min readProSIGHT Security

Google and Forcepoint researchers have confirmed that indirect prompt injection attacks are now happening in the wild — hidden commands embedded in emails and web pages that manipulate AI agents into stealing data. Here is how it works and how to protect your business.

From Theory to Reality

On May 4, TechRepublic reported that researchers at Google and Forcepoint have confirmed indirect prompt injection attacks are no longer theoretical — they are being actively executed against production AI systems in the wild. This is a significant escalation because it represents a genuinely new attack class that most security tools were not designed to detect or prevent.

Indirect prompt injection works by hiding malicious instructions inside content that an AI agent is designed to process. An attacker sends a seemingly normal email that contains invisible text instructing the recipient's AI assistant to forward sensitive documents to an external address. Or they embed commands in a web page that cause an AI browsing agent to extract and transmit stored credentials. The attack is invisible to the human user because the malicious instructions are hidden in the content the AI reads, not the content the human sees.

How These Attacks Work in Practice

The most concerning real-world scenario involves AI-powered email assistants — tools increasingly used by small businesses to summarize inboxes, draft responses, and manage scheduling. An attacker sends an email that appears legitimate to both the spam filter and the human recipient. But hidden within the email — in HTML comments, zero-width characters, or text rendered in white on a white background — are instructions targeting the AI agent.

Those instructions might tell the AI to search the user's email history for password reset links, forward specific messages to an attacker-controlled address, or extract contact lists and send them externally. Because the AI agent operates with the user's permissions, these actions appear authorized and legitimate to security monitoring tools. The researchers confirmed that these attacks can exfiltrate data, steal credentials, and establish persistent access — all without triggering traditional security alerts.

Why Small Businesses Are Vulnerable

Small businesses are adopting AI productivity tools at a rapid pace — often faster than they are adopting the security practices needed to use them safely. An employee connects an AI assistant to their Microsoft 365 or Google Workspace account to help manage email overload. That assistant now has access to everything in their inbox, calendar, and contacts. If an attacker can compromise that assistant through a prompt injection, they have effectively compromised the employee's entire digital workspace.

The attack surface is expanding faster than most small businesses realize. Every AI agent connected to business data — email assistants, meeting summarizers, document analyzers, customer service chatbots — represents a potential entry point for prompt injection attacks. And unlike traditional malware, these attacks leave no malicious file to detect and no suspicious process to flag.

Practical Defenses Against Prompt Injection

Start by limiting what your AI tools can access. If an email assistant does not need access to your entire inbox to function, restrict its permissions to only what is necessary. Apply the principle of least privilege to AI agents just as you would to human users. If an AI tool does not have a clear business purpose for accessing sensitive data, do not grant it that access.

Second, prefer AI tools that have built-in prompt injection defenses. Major providers like Microsoft and Google are actively developing protections against these attacks, and enterprise-grade AI platforms include input sanitization and output filtering that consumer tools often lack. Review the security documentation for any AI tool before connecting it to your business data. Third, train employees to recognize that AI tools introduce new risks. The email that looks normal to you may contain content targeting your AI assistant. Awareness is the first line of defense against a threat that traditional security tools cannot yet reliably detect.