
Hunting for Punycode Domain Phishing

Punycode domains have long been used by threat actors in phishing campaigns. Attackers register Punycode domains that appear visually identical to legitimate ones but lead to potentially malicious websites. Detecting these spoofed domains can be a challenge for several reasons. Primarily, they can be very difficult to identify visually. Additionally, they can be registered and maintained legally: a domain that uses Punycode is not malicious by default. Further investigation and automation are needed to determine whether a given Punycode domain is a threat or not.

To find this kind of activity, a security team can use Stamus Security Platform’s Enriched Hunting Interface to deploy one of over 100 guided hunting filters to simplify the discovery, investigation, and classification of punycode domains on their network. 

Stamus Security Platform (SSP) automatically detects and identifies threats on the network, and presents security teams with incident timelines and extensive context for each threat. Many organizations take advantage of advanced SSP features and take an even more proactive approach to their defenses. When this is the case, they might task a security analyst with hunting for specific threat types, anomalous activity, or suspicious behaviors. To do this, they can use the Stamus Enriched Hunting Interface. 

This interface provides security practitioners with over 100 ready-to-use guided threat hunting filters, including various filters for policy violations, that they can use to investigate, classify, escalate, and automate vast amounts of event data, alerts, and contextual metadata. For a more detailed look at the Enriched Hunting Interface, read the blog article titled, “Introduction to Guided Threat Hunting”.

What are Punycode domains? 

Punycode is an ASCII-compatible encoding used for internationalized internet hostnames. It allows a hostname containing Unicode characters – including letters from non-Latin alphabets such as Cyrillic – to be represented using only the ASCII letters, digits, and hyphens permitted in DNS. The result can be a domain that appears nearly identical to a well-known one but leads to a malicious spoofed site used for phishing. For example: 

In this example, the only distinguishable difference is the small dot beneath the “a”, which could easily be missed or otherwise mistaken for a speck of dust on a user’s screen. The Punycode representation of that domain would be xn--nvigators-key-if2g [.] com. Notice the “xn--” prefix in the domain. This ASCII-compatible encoding (ACE) prefix is standard across all Punycode labels and marks a label as Punycode-encoded, so that ordinary domain labels containing hyphens are not mistakenly decoded. 
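Python’s standard library can illustrate the transformation. The snippet below is just a sketch using the built-in “punycode” and “idna” codecs (it is not part of SSP), with the classic RFC 3492 example label “bücher”:

```python
# The raw Punycode codec encodes a Unicode label into ASCII.
label = "bücher"
print(label.encode("punycode"))          # b'bcher-kva'

# The IDNA codec applies Punycode per label and adds the "xn--" ACE prefix.
print("bücher.example".encode("idna"))   # b'xn--bcher-kva.example'

# Decoding reverses the transformation, recovering the Unicode hostname.
print(b"xn--bcher-kva.example".decode("idna"))  # bücher.example
```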

To see some examples of punycodes that have been spotted in action, check out this article on a financial cybercrime group called the Disneyland Team.

Because it is very easy for users to miss these subtle changes in domain names, it is important for a security team to keep track of their presence on the network in order to prevent phishing attempts. This can be done using guided threat hunting. 

Identifying punycode domains with Stamus Security Platform

Stamus Security Platform (SSP) does most of the work for you. With Declarations of Compromise™, it definitively identifies serious and imminent threats. However, no system can automatically detect everything. That’s why SSP logs every possible indicator of compromise – otherwise known as “alerts” – in addition to sightings of previously unseen communications and corresponding protocol and flow logs. These alerts, including the corresponding enrichment and metadata, can be used to create a trail of evidence in an incident investigation. Additionally – as seen in this series – they can also be used to perform a guided hunt for specific threat types or other unwanted activity. 

So let’s take a look at the current alerts on our system: 

In the past 48 hours, we have had about 900K alert events which have triggered millions of results – including protocol, flow, and file transaction logs as well as Host Insights for over 14,500 network endpoints and hosts. 

The hunt for punycode domains using Stamus Security Platform

To begin this hunt, we first have to select the relevant filter from the drop down list. Since there are over 100 guided hunting filters, we need to narrow the list down to find the filter we want.

To do this, we can search for the keyword “phishing” and then select the needed filter. In this example, the filter we want is titled “MITRE: Technique - Phishing”. 

Then we want to zoom in specifically on Punycode domains – for that we simply do a wildcard search of “*xn--*” in the hunting field called “hostname_info.domain”. This enrichment field in the Stamus hunting interface lets users match on any TLS SNI, DNS query, or HTTP hostname value containing those characters. 
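The same “xn--” check can be expressed outside the product as well. The helper below is a minimal sketch (not SSP code) that flags any hostname containing a Punycode-encoded label:

```python
def contains_punycode_label(hostname: str) -> bool:
    """Return True if any DNS label in the hostname starts with the
    ASCII-compatible encoding (ACE) prefix "xn--"."""
    return any(label.startswith("xn--") for label in hostname.lower().split("."))

print(contains_punycode_label("xn--nvigators-key-if2g.com"))  # True
print(contains_punycode_label("navigators-key.com"))          # False
```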

Our hunting filter now looks like the below screenshot:

Selecting this filter narrows our results from 900 thousand alert events and their corresponding protocol, flow, and file transaction logs down to only 4 in the selected timeline. This gives us an excellent starting point to work from. 

As part of one of many enrichment processes, the Stamus Security Platform automatically breaks down any HTTP/DNS/TLS domain within those network protocol records into its subforms – Domain, TLD, Host, and Domain without TLD. 
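Conceptually, this breakdown looks like the following sketch. Note this is an illustration only – a real implementation (including SSP’s) must account for multi-label public suffixes such as .co.uk, which this naive version does not:

```python
def domain_subforms(hostname: str) -> dict:
    """Naively split a hostname into the subforms described above.
    Assumes a single-label TLD; production code would consult the
    Public Suffix List."""
    labels = hostname.lower().split(".")
    domain = ".".join(labels[-2:]) if len(labels) >= 2 else hostname.lower()
    return {
        "host": hostname.lower(),
        "domain": domain,
        "tld": labels[-1],
        "domain_without_tld": domain.rsplit(".", 1)[0],
    }

print(domain_subforms("mail.xn--nvigators-key-if2g.com"))
```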

There are several potential next steps we could take in this investigation, but I would like to take a closer look at the endpoints involved and see whether any of the users seen there are still logged in. To get there, we need a list of all the hosts involved so we know the full scope of what we are dealing with. 

Knowing which clients and hosts are offenders and seeing additional information about the offense is important to get the full picture of this hunt. Specifically, we need to see which services are running on the offender’s host. 

To do this, we can use Host Insights - a very powerful feature included with the Stamus Security Platform. Host Insights tracks over 60 security-related network transactions and communication attributes of a host. This provides a single place to view many aspects of the network activity relative to a given host, such as network services, users, or TLS fingerprinting forensic evidence. 

We can click the “Hosts” tab on the left hand side panel and be transferred from the actual events logs to the Host Insights screen.

This filters our 900 thousand alert events and their corresponding protocol, flow, and file transaction logs down to only one event taking place on two internal hosts and one external host. It seems two of these are the targeted users and one is the actual offender. From here, investigating these hosts to get a better look at their activity is relatively simple.

It seems Bill and Dan were the users that were logged in during the time this occurred.

I can also review the offending services on the public/remote servers using the different punycode domains for TLS encryption. 

Now we need to identify what part of that offending infrastructure we first saw on the network and where it came from. In other words, we need to locate the offending services. To do that, we can use the “Sightings” feature in Stamus Security Platform. This feature gives us the ability to pinpoint the first time a piece of metadata (such as domain, TLS certificate, HTTP host/user agent/server, JA3, JA3S, file checksum, filename, etc.) has been seen in the enterprise. 
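The idea behind first-seen tracking can be sketched in a few lines. The class below is a toy illustration of the concept only (it does not reflect SSP’s actual implementation): it records the first timestamp at which each piece of metadata appears and reports whether an observation is new.

```python
from datetime import datetime, timezone

class FirstSeenTracker:
    """Record the first time each (kind, value) metadata pair is observed."""

    def __init__(self) -> None:
        self._first_seen: dict[tuple[str, str], datetime] = {}

    def observe(self, kind: str, value: str, when: datetime) -> bool:
        """Store the timestamp on first sighting; return True if new."""
        key = (kind, value)
        if key not in self._first_seen:
            self._first_seen[key] = when
            return True
        return False

tracker = FirstSeenTracker()
now = datetime.now(timezone.utc)
print(tracker.observe("tls.sni", "xn--nvigators-key-if2g.com", now))  # True
print(tracker.observe("tls.sni", "xn--nvigators-key-if2g.com", now))  # False
```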

All we have to do is keep the same filter on, but switch off all “Alerts” and leave “Sightings” switched on. 

Just by flipping the switch, we can see that it was Dan who experienced the activity first.

This event alone is suspicious enough as it is coming from regular clients. This leaves a few more questions. Where else have we seen this in the network and what else in the network has used it or attempted to use it in the past?

Evidence for Incident Response

With just a few clicks, we are able to view two important sets of evidence: 

  • The associated network protocol transactions and flow logs
  • Host Insights - a single screen for reviewing 60+ network activity attributes collected for every host

The generated events are already enriched by SSP to include important metadata like DNS records, TLS protocol data containing certificate names, JA3/JA3S fingerprints, connection flow sizes, HTTP user agent, HTTP host, request body, status codes, file transaction info, and more. 
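These enriched events resemble Suricata’s EVE JSON format (SSP’s detection engine is built on Suricata). The snippet below parses a trimmed, hypothetical EVE-style record – the field names follow the open-source EVE schema, but the values are invented – and pulls out the metadata an analyst would pivot on:

```python
import json

event = json.loads("""
{
  "event_type": "alert",
  "src_ip": "10.0.0.15",
  "dest_ip": "203.0.113.7",
  "tls": {"sni": "xn--nvigators-key-if2g.com",
          "ja3": {"hash": "e7d705a3286e19ea42f587b344ee6865"}},
  "flow": {"bytes_toserver": 1534, "bytes_toclient": 9871}
}
""")

# Extract the pivot fields, tolerating records where they are absent.
sni = event.get("tls", {}).get("sni")
ja3 = event.get("tls", {}).get("ja3", {}).get("hash")
flow_bytes = event.get("flow", {}).get("bytes_toclient")
print(sni, ja3, flow_bytes)
```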

Expanding the event details in the Alerts tab gives us those details along with the related network protocol and flow transaction logs. Based on the additional TLS, DNS, protocol, and flow information that can be present – including DNS request/reply, TLS SNI, fingerprint, issuer, JA3/JA3S, and flow length and duration – it is clear that these communications and transactions did in fact originate from the endpoint.

With this information, we have located both the users and stations involved. We also have an IoC and details on where the file has been seen in the network. 

Security analysts can use any piece of metadata to create simple or complex filters with wildcards, negation, or inclusion. You can even combine multiple fields for fast drill-down. All domains, TLS SNI values, IP addresses, HTTP hosts, and more can easily be checked with an external threat intelligence provider such as VirusTotal.  

Armed with the above information and evidence, a threat hunter has enough information and IoCs to generate an Incident Response ticket. 

However, there are still two tasks left to complete: 

  1. We do not want to repeat this exact process again in the future, so we need to set up classification and auto-escalation for future occurrences. 
  2. If anything like this has happened before, we want it to be found and escalated with all the associated evidence – all based on historical data.



In order to streamline the event review/triage process in the future, an experienced analyst can choose to tag/classify the events associated with this filter. By doing so, SSP will tag future events that match the filter criteria as “relevant” or “informational,” depending upon the analyst’s selection. These tags can be used to automate event review/triage and make it easier for a less-experienced analyst to identify events that are relevant for manual review.

To do so, the analyst selects the Tag option from the Policy Action menu on the right hand side menu. This action will cause SSP to insert a tag into each event record as shown below:   

This allows the analyst to easily filter out or search for them in any SIEM (Chronicle, Splunk, Elasticsearch, etc) or data lake using that tag.
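For example, once exported as newline-delimited JSON, tagged events can be filtered with a few lines of Python. Note the “tag” field name here is hypothetical – check how your SIEM actually ingests the classification before relying on it:

```python
import json

# Hypothetical NDJSON export of classified events.
raw = """\
{"event_type": "alert", "tag": "relevant", "src_ip": "10.0.0.15"}
{"event_type": "alert", "tag": "informational", "src_ip": "10.0.0.22"}
{"event_type": "alert", "tag": "relevant", "src_ip": "10.0.0.31"}
"""

events = [json.loads(line) for line in raw.splitlines()]
relevant = [e for e in events if e.get("tag") == "relevant"]
print(len(relevant))  # 2
```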

It also allows for easy filtering out of those events in the Stamus Enriched Hunting GUI by switching to “relevant” only classified events. 

Escalation and Automation of this Hunt

To set up an automation which causes SSP to escalate past and future occurrences, we can create a Declaration of Compromise (DoC) event from the Policy Actions drop down menu on the right hand side panel in the Stamus Enriched Hunting Interface. 

The next step is to add some explanation about the type of threat. This also gives us a chance to provide informational context and helps convey knowledge to colleagues. 

Select the options to generate events from historical data and to send webhook notifications. In the example below, we make sure to include organizational context information – in this case, remote VPN clients:

Just like that, the hunt and all related activities are complete. Any past or future events generated by this automation will be auto-classified and escalated to the desired response process – via SOAR playbook, chat notification, or incident response ticket. 

Our DoC escalation gives us exactly that: a timeline of the hosts involved and their offenders, including past occurrences.

From now on, any such threat occurrences will be auto-escalated, incident response tickets will automatically be opened, and SOC channel notifications will be sent.


The post-hunt activities completed in this example are just the tip of the iceberg when it comes to the automation and escalation capabilities of Stamus Security Platform (SSP). To learn more about these features and how to implement them, read our article titled “After the Hunt”.

To learn more about Network Detection and Response (NDR) from Stamus Networks and see the enriched hunting interface for yourself, click the button below and schedule a live demo.

Peter Manev

Peter Manev is the co-founder and chief strategy officer (CSO) at Stamus Networks. He is a member of the executive team at Open Network Security Foundation (OISF). Peter has over 15 years of experience in the IT industry, including enterprise-level IT security practice. He is a passionate user, developer, and explorer of innovative open-source security software, and he is responsible for training as well as quality assurance and testing on the development team of Suricata – the open-source threat detection engine. Peter is a regular speaker and educator on open-source security, threat hunting, and network security at conferences and live-fire cyber exercises, such as Crossed Swords, DeepSec, Troopers, DefCon, RSA, Suricon, SharkFest, and others. Peter resides in Gothenburg, Sweden.

Schedule a Demo of Stamus Security Platform

