news

Reprompt: one click steals all your Microsoft Copilot data

Varonis discovered an attack chaining three techniques to exfiltrate personal data from Copilot with a single click on a legitimate Microsoft URL. The attack persisted even after closing the chat. Microsoft patched the vulnerability on January 13, 2026.

James MitchellJames Mitchell-February 3, 2026-10 min read
Share:
Terminal screen with security code and data lines

Photo by Lewis Kang'ethe Ngugi on Unsplash

Key takeaways

The numbers speak for themselves: a single click on a legitimate Microsoft link was enough for an attacker to steal your location, recent files, travel plans, and complete Copilot conversations. The attack is called Reprompt, it chains three prompt injection techniques, and it worked even after closing the chat. 33 million users were at risk.

What is Reprompt and why it matters

After weeks of analysis, the conclusion is alarming: Reprompt is one of the most elegant and dangerous attacks ever discovered against an AI assistant.

Discovered by Dolev Taler at Varonis Threat Labs, Reprompt allowed an attacker to steal personal data from Microsoft Copilot with a single click. No installations required. No plugins or special connectors. Just a click on a link that, to make matters worse, points to a legitimate Microsoft domain.

The attack chains three techniques in sequence:

  1. Parameter-to-Prompt Injection (P2P): The attacker embeds malicious instructions in the q parameter of a Copilot URL. When the victim clicks, Copilot automatically executes the prompt within the user's authenticated session.
  2. Double-Request Bypass: Copilot's guardrails only apply to the first request. The attacker instructs Copilot to repeat every function call twice. The first call filters sensitive data. The second reveals it unfiltered.
  3. Chain-Request Exfiltration: Once the chain starts, the attacker's server sends follow-up instructions based on each response. Each step extracts more data than the last.

The result: location, recent files, travel plans, conversation history, financial data, and medical notes. All exfiltrated silently. And the most concerning part: the attack persisted even after closing the chat window.

How it worked step by step

To understand Reprompt's severity, it's worth breaking down each phase.

Phase 1: The trap URL

Copilot accepts prompts directly from URLs via the q parameter. If you visit copilot.microsoft.com/?q=Hello, Copilot automatically executes "Hello." It's a legitimate feature designed for integration with other Microsoft services.

The attacker exploits this functionality to inject malicious instructions into the URL. From the outside, the link looks completely legitimate β€” it points to copilot.microsoft.com. Neither email filters nor antivirus software flag it because it's a genuine Microsoft domain.

The user receives the link via email, Teams, or any messaging channel. One click and the session is compromised.

Phase 2: Bypassing protections

Microsoft implemented guardrails to prevent Copilot from leaking sensitive information. But according to Dolev Taler, "Microsoft improperly designed the guardrails" and "didn't conduct the threat modeling to understand how someone can exploit that lapse for exfiltrating data."

The technique is surprisingly simple: the malicious prompt instructs Copilot to make every function call twice and compare results. On the first call, guardrails strip sensitive information. On the second, Copilot returns the data unfiltered.

A protection that only works once is not a protection.

Phase 3: Chain exfiltration

This is where Reprompt distinguishes itself from conventional prompt injection attacks. Rather than attempting to extract everything at once, the attack operates as a progressive conversation:

  • Step 1: Initial instruction (user's time)
  • Step 2: Geographic location
  • Step 3: Personal information
  • Step 4: Recent files
  • Step 5: Summary of previous conversations

Each response from the previous step feeds the next instruction. And since all commands are delivered from the server after the initial prompt, you can't determine what data is being exfiltrated just by inspecting the starting URL.

Client-side monitoring tools cannot detect what information is being stolen. The flow looks like a normal Copilot interaction.

Discovery timeline

Date Event
August 2025 Varonis discovers the vulnerability
August 31, 2025 Responsible disclosure to Microsoft
August-January Microsoft works on the fix (4.5 months)
January 13, 2026 Microsoft deploys the patch (Patch Tuesday)
January 14-15, 2026 Varonis publishes technical details

Microsoft did not assign a public CVE to the incident. It was treated as a server-side fix. No exploitation was detected in real environments before the patch.

33 million users at risk

The numbers speak for themselves regarding potential scope:

Metric Figure
Copilot Personal (affected) 33 million active users
Total Copilot downloads 36 million
Microsoft 365 Copilot (NOT affected) 15 million paid licenses
Total Microsoft 365 subscribers 440 million

There's an important nuance: Reprompt only affected Copilot Personal, the version integrated into Windows and Edge for consumers. Microsoft 365 Copilot, the enterprise version, has additional security controls (Purview, DLP, admin controls) that made it immune to this specific vector.

But 33 million users of the personal version represent a massive attack surface. And the only requirement to compromise someone was sending them a link.

What data could be stolen

Reprompt could extract any data Copilot had access to in the user's session:

  • Recent files: Documents accessed recently across Microsoft services
  • Location: Geolocation data
  • Travel plans: Flight, hotel, and reservation information
  • Conversation history: All previous interactions with Copilot
  • Financial data: Financial plans and notes stored
  • Medical information: Health notes shared with Copilot

Dolev Taler summed it up: "AI assistants have become trusted companions where we share sensitive information, seek guidance, and rely on them without hesitation. But trust can be easily exploited, and an AI assistant can turn into a data exfiltration weapon with a single click."

Microsoft's response

Microsoft issued the following statement:

"We appreciate Varonis Threat Labs for responsibly reporting this issue. We have rolled out protections that address the scenario described and are implementing additional measures to strengthen safeguards against similar techniques as part of our defense-in-depth approach."

In practice:

  • Patch deployed on January 13, 2026 (Patch Tuesday)
  • No public CVE assigned
  • No formal severity rating published
  • Fix treated as a server-side service update

The lack of a public CVE is notable. It means organizations relying on vulnerability databases to prioritize patches may have overlooked this fix entirely.

Reprompt is not an isolated case

Prompt injection is the #1 vulnerability according to the OWASP Top 10 for LLM Applications 2025, present in over 73% of AI implementations evaluated during security audits.

Other similar vulnerabilities discovered recently:

Vulnerability Platform Type
EchoLeak (CVE-2025-32711) Microsoft 365 Copilot Zero-click via email
GeminiJack Google Gemini Enterprise Zero-click via Google Docs
ZombieAgent ChatGPT Zero-click via third-party apps
ForcedLeak Salesforce Agentforce Indirect prompt injection (CVSS 9.4)

The fundamental problem, as David Shipley from Beauceron Security noted, is that LLMs "can't distinguish between content and instructions, and will blindly do what they're told." As long as AI assistants have access to personal and enterprise data, prompt injection will remain a systemic risk.

OpenAI has admitted that prompt injection attacks "will probably never be completely eliminated."

How to protect yourself right now

If you're an IT administrator

  1. Apply the January 2026 security updates β€” This specific patch fixes Reprompt
  2. Treat all external URLs and inputs as untrusted β€” Including deep links and pre-filled prompts
  3. Protect against prompt chaining β€” Ensure protections persist across repeated requests
  4. Implement least privilege β€” Limit the data AI assistants can access
  5. Enable continuous auditing β€” Monitor for anomalous data access patterns
  6. Migrate to Microsoft 365 Copilot for work environments (includes Purview, DLP, and admin controls)

If you're a regular user

  1. Install Windows security updates immediately
  2. Don't click suspicious links related to AI assistants, even if they look legitimate
  3. Don't share sensitive personal information in AI chats
  4. Verify unsolicited links with trusted sources before clicking
  5. Disable Copilot if you don't use it β€” In Windows: Tools > Privacy > Disable Windows Copilot

Detection tools

  • Microsoft Purview (enterprise only): Auditing and DLP for M365 Copilot
  • Noma Security ARM: Maps the blast radius of autonomous AI agents
  • Malwarebytes: Offers option to disable Windows Copilot

What this means for the future of AI assistants

Reprompt exposes a fundamental contradiction in the current AI assistant model: the more useful an assistant is, the more data it needs. The more data it has, the more dangerous it becomes if compromised.

A single link, a single click, access to everything the user has shared with their AI assistant. Travel plans, work documents, medical notes, complete conversation history.

Varonis has already warned that Reprompt is "the first in a series of AI vulnerabilities" they're actively working to remediate with other AI assistant providers. If the pattern holds, 2026 will be the year the industry seriously confronts the question: how do we give sensitive data access to systems that can't distinguish a legitimate instruction from a malicious one?

Dor Yardeni, Director of Security Research at Varonis, was clear: don't open links from unknown sources related to AI assistants. Even if they appear to point to a legitimate domain.

It's the oldest security advice in the world. And with AI, it remains the most relevant.

Frequently asked questions

Is Reprompt still active?

No. Microsoft patched the vulnerability on January 13, 2026. If your Windows is up to date, you're protected against this specific vector.

Does it affect Microsoft 365 Copilot (enterprise)?

No. Reprompt only affected Copilot Personal (consumer). The enterprise version has additional controls that prevented this attack.

Were any real user data actually stolen?

No exploitation was detected in real environments before the patch. Varonis reported the vulnerability responsibly and Microsoft fixed it before public disclosure.

Why wasn't a CVE assigned?

Microsoft treated Reprompt as a server-side service fix, not a product vulnerability. This means there's no public identifier in standard vulnerability databases.

Did antivirus software detect the malicious link?

No. The URL pointed to copilot.microsoft.com, a legitimate Microsoft domain. Email filters and antivirus software didn't flag it as suspicious.

Was this helpful?

Sources & References

The sources used to write this article

  1. 1

    Reprompt: The Single-Click Microsoft Copilot Attack

    Varonis Blogβ€’Jan 14, 2026
  2. 2

    Researchers Reveal Reprompt Attack Allowing Single-Click Data Exfiltration

    The Hacker Newsβ€’Jan 15, 2026
  3. 3

    Reprompt attack hijacked Microsoft Copilot sessions for data theft

    BleepingComputerβ€’Jan 14, 2026
  4. 4

    New Reprompt Attack Silently Siphons Microsoft Copilot Data

    SecurityWeekβ€’Jan 15, 2026
  5. 5

    One click is all it takes: How Reprompt turned Microsoft Copilot into data exfiltration tools

    CSO Onlineβ€’Jan 15, 2026
  6. 6

    Reprompt attack lets attackers steal data from Microsoft Copilot

    Malwarebytesβ€’Jan 15, 2026
  7. 7

    Microsoft Copilot vulnerability allowed attackers to quietly steal your personal data

    Windows Centralβ€’Jan 14, 2026
  8. 8

    New One-Click Microsoft Copilot Vulnerability Grants Attackers Undetected Access

    CyberSecurity Newsβ€’Jan 14, 2026
  9. 9

    LLM01:2025 Prompt Injection

    OWASPβ€’Jan 1, 2025
  10. 10

    Microsoft Copilot Revenue and Usage Statistics (2026)

    Business of Appsβ€’Jan 30, 2026

All sources were verified at the time of article publication.

James Mitchell
Written by

James Mitchell

Productivity technology analyst. Turning complex workflows into clear strategies.

#Microsoft Copilot#cybersecurity#prompt injection#Varonis#Reprompt#AI security#vulnerability#personal data

Related Articles