If you’ve ever checked your Microsoft 365 quarantine (or had a user ask, “Why did my email get blocked?”), you’ve likely seen a reason that seems especially alarming: High confidence phish. Unlike general spam or “suspected phishing,” this label means Microsoft’s filtering systems believe the message is not just questionable, but highly likely to be a real phishing attempt. In most organizations, this triggers stricter handling (often quarantine by default) to reduce the chance that a single click turns into credential theft, financial loss, or a broader security incident.
In Microsoft 365 (Exchange Online Protection and Microsoft Defender for Office 365), messages are evaluated continuously for malicious intent. A “high-confidence phish” classification is Microsoft’s way of saying: based on the signals and models available, this message strongly matches known phishing patterns or behaviors.
That distinction matters. Many unwanted emails are merely annoying: marketing blasts, low-quality bulk mail, or generic spam. Phishing is different because it’s designed to manipulate a person into taking an action that benefits an attacker: entering credentials, approving an MFA prompt, opening a booby-trapped attachment, or paying a fraudulent invoice. When Microsoft tags a message as high-confidence phishing, the service typically applies more restrictive actions (commonly quarantine) to keep it away from the user until it’s reviewed.
Microsoft doesn’t rely on a single magic number. Instead, Microsoft 365 uses multiple layers of detection to judge whether a message is safe, suspicious, or dangerous. Some of those layers are “content-based” (what’s inside the email), and others are “context-based” (who sent it, how it was sent, and what Microsoft has seen before across the ecosystem).
Here are the categories of signals that commonly drive phishing confidence decisions:
1) Sender authenticity and domain trust signals
Microsoft evaluates whether the message appears legitimately sent by the claimed sender. This includes checks and alignment signals such as SPF, DKIM, and DMARC, plus broader sender reputation and sending infrastructure history. If an email claims to be from a well-known brand but fails authentication checks, or comes from infrastructure associated with abuse, that’s a strong indicator of risk.
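To make the authentication piece concrete, here is a minimal Python sketch that reads the SPF, DKIM, and DMARC verdicts Microsoft 365 stamps into the standard Authentication-Results header of a received message. It only illustrates the signals involved; it is not how Exchange Online Protection evaluates them internally, and the file name is a placeholder.

```python
# Minimal sketch: reading SPF/DKIM/DMARC verdicts from the standard
# Authentication-Results header of a saved message (.eml file). This only
# illustrates the signals involved; it is not how Exchange Online Protection
# evaluates them internally.
import re
from email import policy
from email.parser import BytesParser

def auth_results(path: str) -> dict:
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    header = msg.get("Authentication-Results", "")
    verdicts = {}
    # Capture entries such as "spf=pass", "dkim=fail", "dmarc=none"
    for mech, result in re.findall(r"\b(spf|dkim|dmarc)=(\w+)", header, re.I):
        verdicts[mech.lower()] = result.lower()
    return verdicts

results = auth_results("suspicious_message.eml")  # hypothetical file name
print(results)  # e.g. {'spf': 'fail', 'dkim': 'none', 'dmarc': 'fail'}
if results.get("dmarc") != "pass":
    print("Claimed sender domain did not authenticate - treat with caution.")
```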
2) Impersonation and social engineering indicators
A large portion of phishing succeeds because it “looks right.” Microsoft’s protections include anti-phishing features that focus on impersonation: messages that pretend to be an executive, a vendor, HR, payroll, or a common internal service. These detections often look at display names, domain similarity, reply-to mismatches, and patterns consistent with business email compromise (BEC).
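As a rough illustration of the kinds of impersonation heuristics described above (not Microsoft’s actual models), the sketch below flags lookalike domains, protected display names paired with external domains, and Reply-To mismatches. The protected names, domains, and similarity threshold are assumptions made for the example.

```python
# Illustrative sketch of impersonation-style checks (lookalike domains,
# protected display names with external domains, Reply-To mismatches).
# Microsoft's actual detections are far more sophisticated; the protected
# names, domains, and 0.8 threshold here are assumptions for the example.
from difflib import SequenceMatcher
from email.utils import parseaddr

PROTECTED_DOMAINS = {"contoso.com"}   # hypothetical domain you want to protect
PROTECTED_NAMES = {"jane doe"}        # hypothetical executive display name

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def impersonation_flags(from_header: str, reply_to_header: str = "") -> list[str]:
    display, address = parseaddr(from_header)
    domain = address.split("@")[-1].lower()
    flags = []
    # 1) Lookalike domain: close to a protected domain but not identical
    for good in PROTECTED_DOMAINS:
        if domain != good and similarity(domain, good) > 0.8:
            flags.append(f"domain '{domain}' resembles '{good}'")
    # 2) Protected display name paired with an external sending domain
    if display.lower() in PROTECTED_NAMES and domain not in PROTECTED_DOMAINS:
        flags.append(f"display name '{display}' sent from external domain '{domain}'")
    # 3) Reply-To points somewhere other than the From address
    _, reply_addr = parseaddr(reply_to_header)
    if reply_addr and reply_addr.lower() != address.lower():
        flags.append(f"Reply-To '{reply_addr}' differs from From '{address}'")
    return flags

print(impersonation_flags("Jane Doe <jane.doe@contos0.com>",
                          "payments@freemail.example"))
```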
3) Link and attachment intelligence
Phishing emails frequently contain a link to a credential-harvesting site or a malicious attachment. Microsoft inspects URLs, looks for known-bad destinations, detects suspicious redirects, evaluates domain age and reputation, and checks attachments using a combination of signature-based detections and behavioral analysis. Even when an email “reads” convincingly, a risky URL pattern can push the verdict toward phishing.
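The following simplified sketch shows a few URL-shape heuristics of the sort described above, such as raw IP hosts, deep subdomain chains, and brand names embedded in unrelated domains. The real service also uses reputation data, redirect following, domain age, and link detonation, none of which is reproduced here; the patterns and the brand list are illustrative assumptions.

```python
# Simplified sketch of URL-shape risk signals. The real service also checks
# reputation, redirects, domain age, and detonates links in a sandbox, none
# of which is reproduced here; patterns and the brand list are illustrative.
import re
from urllib.parse import urlparse

URL_RE = re.compile(r"https?://[^\s\"'>)]+", re.I)

def url_risk_signals(body: str) -> dict:
    findings = {}
    for url in URL_RE.findall(body):
        host = urlparse(url).hostname or ""
        reasons = []
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            reasons.append("raw IP address instead of a domain name")
        if "@" in urlparse(url).netloc:
            reasons.append("userinfo '@' trick that hides the real destination")
        if host.count(".") >= 4:
            reasons.append("unusually deep subdomain chain")
        for brand in ("microsoft.com", "office.com"):  # assumed brand list
            if brand in host and not host.endswith(brand):
                reasons.append(f"'{brand}' embedded in an unrelated domain")
        if reasons:
            findings[url] = reasons
    return findings

sample = "Verify your mailbox: http://microsoft.com.account-verify.example.ru/login"
print(url_risk_signals(sample))
```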
4) Message patterns and behavioral telemetry at scale
Microsoft benefits from global telemetry: what users report, what security teams remediate, what previous waves looked like, and how similar messages behaved across tenants. If a message is part of a known campaign (or resembles one), it may be classified more confidently as phish.
All of these signals are combined to determine the appropriate classification and action. On the spam side, Microsoft uses the concept of a spam confidence level (SCL), where higher values indicate a greater likelihood of spam and help determine what happens to the message. For phishing, Microsoft applies similar verdicts and actions, especially for high-confidence cases that warrant stronger default handling.
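For messages that do reach a mailbox or quarantine, the verdict is visible in the X-Forefront-Antispam-Report header, which carries the SCL and a category code (for example, high-confidence phishing). The sketch below parses that header from a saved .eml file; the category-code mapping reflects Microsoft’s documented values as I understand them and should be confirmed against current documentation, and the file name is a placeholder.

```python
# Sketch: reading the spam confidence level (SCL) and category (CAT) that
# Microsoft 365 stamps into the X-Forefront-Antispam-Report header of a
# saved message (.eml). Category codes follow Microsoft's documented values
# as I understand them (e.g. HPHSH/HPHISH = high confidence phishing);
# confirm against current documentation before relying on them.
from email import policy
from email.parser import BytesParser

CATEGORY_LABELS = {
    "HPHSH": "high confidence phishing",
    "HPHISH": "high confidence phishing",
    "PHSH": "phishing",
    "HSPM": "high confidence spam",
    "SPM": "spam",
    "BULK": "bulk",
}

def antispam_verdict(path: str) -> dict:
    with open(path, "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)
    report = msg.get("X-Forefront-Antispam-Report", "")
    fields = {}
    for part in report.split(";"):
        if ":" in part:
            key, value = part.split(":", 1)
            fields[key.strip()] = value.strip()
    cat = fields.get("CAT", "")
    return {
        "scl": fields.get("SCL"),
        "category": CATEGORY_LABELS.get(cat, cat or "unknown"),
    }

print(antispam_verdict("quarantined_message.eml"))  # hypothetical file name
```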
In many environments, high-confidence phishing is quarantined automatically, and the quarantine experience (including whether end users are notified or allowed to request release) is governed by quarantine policies. Microsoft’s documentation calls out that high-confidence phish quarantining is a common default action, and quarantine policies control how those messages are handled for recipients and admins.
This is why a user might not see the email in their inbox at all: Microsoft intercepted it earlier in processing. It also explains why these messages sometimes feel “harder to override” than normal spam. In practice, high-confidence phish is treated as a higher-risk category for good reason: allowing easy bypasses would undermine the protection.
Even strong filters aren’t perfect. Occasionally, legitimate messages get flagged, especially if the sender’s environment is misconfigured, a third-party system suddenly changes behavior, or the message includes patterns that resemble phishing campaigns. The goal isn’t to panic; it’s to use a consistent, safe decision process.
Here’s a simple approach users can follow (with minimal jargon): don’t click links, open attachments, reply, or release the message on your own; verify the request through a channel you already trust (a known phone number, the vendor’s portal, or a quick message to the colleague); and report the message to IT or through your organization’s report option rather than deleting it.
A good rule of thumb: if the email creates urgency (“act now”), threatens consequences, asks for credentials or MFA approval, requests payment/gift cards, or pushes you to open a file, treat it as hostile until proven otherwise.
The most effective user guidance is short and repeatable. Consider reinforcing three ideas in awareness messaging: urgency is a red flag, so slow down; verify requests through a known channel rather than the contact details in the message; and report suspicious email instead of deleting or ignoring it.
If your organization uses quarantine notifications and release requests, make sure users understand the intent: quarantine is not a personal punishment; it’s a safety gate. Microsoft’s quarantine policy options can be tuned so that end users get the right amount of control without increasing risk.
High-confidence phish alerts exist because phishing remains one of the fastest ways for attackers to gain access, often without needing advanced malware. Microsoft 365 uses layered detection (sender authenticity, impersonation clues, URL/attachment intelligence, and large-scale telemetry) to decide when a message is risky enough to block. When that verdict is “high confidence,” the safest response is simple: don’t engage, verify independently, and report. Done consistently, those habits turn a scary-looking banner into what it’s meant to be: an early warning that prevents a bad day from becoming a real incident.