
Safety researchers at AIM Safety have revealed a severe zero-click vulnerability dubbed “EchoLeak.” The flaw targets the AI-powered Microsoft 365 Copilot, permitting cybercriminals to exfiltrate non-public information from a person’s organizational atmosphere by merely sending a rigorously created e-mail.
In a report printed this week, AIM Safety said that is the primary recognized “zero-click” AI exploit affecting a significant software like Microsoft 365 Copilot, which means customers don’t must take any motion for the assault to achieve success.
“The chains enable attackers to mechanically exfiltrate delicate and proprietary data from M365 Copilot context, with out the person’s consciousness, or counting on any particular sufferer habits,” AIM Safety defined.
That is made attainable by what researchers name a “LLM Scope Violation.” In less complicated phrases, the flaw tips Copilot’s underlying AI, which is predicated on OpenAI’s GPT fashions, into pulling in non-public person information after studying malicious directions hidden in a regular-looking e-mail.
How the assault works
The researchers laid out an in depth, multi-part assault chain that bypasses Microsoft’s present protections.
- XPIA bypass: Microsoft makes use of filters generally known as XPIA classifiers to establish malicious prompts. Nonetheless, by writing the e-mail in plain, non-technical language that sounds prefer it’s meant for a human, not an AI, the attacker circumvents these protections.
- Hyperlink redaction bypass: Sometimes, hyperlinks to exterior web sites are eliminated; nonetheless, AIM Safety found Markdown hyperlink tips that circumvent redaction. These hyperlinks ship again confidential information within the URL.
- Picture trick: Copilot could be tricked into producing picture hyperlinks that set off automated browser requests, sending information to the attacker with out person clicks.
- CSP Bypass through Microsoft Companies: Though Microsoft has safety guidelines in place to dam exterior pictures, attackers have discovered methods to route information by Microsoft Groups and SharePoint, that are allowed domains.
The researchers additionally found how attackers can increase their possibilities of success utilizing a way referred to as “RAG spraying.” As a substitute of sending one e-mail, attackers both:
- Ship many brief emails with barely completely different wordings, or
- Ship one very lengthy, specifically crafted e-mail that will get break up into smaller chunks by the AI system.
This tips the AI into retrieving the malicious message extra usually throughout regular use.
What’s in danger?
By design, Microsoft 365 Copilot has entry to a variety of enterprise information, together with emails, OneDrive recordsdata, Groups chats, inside SharePoint paperwork, and different related information.
Though Copilot is constructed to observe strict permission fashions, EchoLeak circumvents these by manipulating how Copilot interprets and responds to person prompts, basically inflicting the AI to show data it shouldn’t.
“An ‘underprivileged e-mail’… shouldn’t be in a position to relate to privileged information… particularly when the comprehension of the e-mail is mediated by an LLM,” the researchers harassed.
Microsoft confirms CVE-2025-32711 and mitigates it
Microsoft has confirmed the difficulty, assigning it CVE-2025-32711, rated “Crucial” with a CVSS rating of 9.3 out of 10. Microsoft Safety Response Heart formally described it as, “AI command injection in M365 Copilot permits an unauthorized attacker to reveal data over a community.”
The corporate mentioned no buyer motion is required, because the vulnerability has already been totally mitigated on its finish. Microsoft additionally thanked Intention Labs for its accountable disclosure.
Learn TechRepublic’s information protection about this week’s Patch Tuesday, by which Microsoft patched 68 safety flaws, together with one for focused espionage.