Recon Series : Domain Enumeration (Part 1)

This is part 1 of a 3-part series explaining reconnaissance (recon) for bug bounty hunting. In part 1, I explain different methods to find subdomains using active and passive techniques.
Devang Solanki
October 18th 2023.
recon reconnaissance bugbounty infosec

Recon Series Part 1: Domain Enumeration


What is Recon?

Recon in the context of a bug bounty is the process of gathering information about a target. It is the most important step in the bug bounty hunting process and can help to identify vulnerabilities that may not be apparent.

There are a variety of different recon techniques that bug bounty hunters use, including but not limited to:

  • Subdomain enumeration: Finding all of the subdomains of a target domain.
  • URL enumeration: Finding all of the accessible URLs on a target website.
  • Port scanning: Scanning all of the ports on a target server to see which ones are open.

There are two ways a bug bounty hunter can perform recon: actively and passively.

Passive recon in the context of bug bounty involves gathering information about the target system or network without directly interacting with it. This can be done by monitoring or querying publicly available information, such as social media, search engines, and public records. Passive recon is less likely to be detected than active recon, but it may not provide accurate or up-to-date information.

Examples of passive recon in bug bounty:

  • Searching for the target organization on social media and other websites to gather information about its employees and technologies
  • Searching for the target organization in search engines like Shodan to gather information about its products and services
  • Reviewing public records to gather information about the target organization such as WHOIS Records, ASN Register, Wayback Machine, etc
  • Dorking is the use of advanced search operators to find specific types of information. For example, you can use Google Dorking to find sensitive files that have been accidentally exposed online or you can use GitHub Dorking to find repositories that contain sensitive information
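For illustration, a few dork queries of the kind meant above (the domain and keywords are placeholders, not tied to any real program):

```
# Google: exposed config, log, or database dump files
site:example.com ext:env | ext:log | ext:sql
# Google: admin or login panels
site:example.com inurl:admin
# GitHub code search: hard-coded secrets mentioning the target
"example.com" "api_key"
```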

Active recon involves directly interacting with the target system or network. This can be done by sending requests to the target system, running scans, or using other tools to gather information. Active recon can provide more detailed information about the target system, but it also carries a higher risk of detection by a firewall.

Examples of active recon in bug bounty:

  • Port scanning
  • URL enumeration through crawling or brute forcing
  • Scanning for vulnerabilities using automated tools

Which type of recon is best for bug bounty?

Honestly... it depends. To this date, I have rarely engaged in active recon because it's very time-consuming and resource-intensive. I know many successful hunters who only perform passive recon, and I've also seen successful hunters who do both. So, it's a very subjective matter. If you have a good CPU or a cloud VPS, you could incorporate both active and passive techniques into your recon methodology or automation.

How to do Recon?

In this part, we will only be focusing on subdomain enumeration. Below is the flowchart of what we are going to do in this blog.

If you find this intimidating, don't worry - I will explain each step with some example commands.

Domain Enumeration

Let's first start with subdomain enumeration on a target domain. For this, we will first use a passive subdomain enumeration tool. Plenty of tools are available for this, such as Subfinder, Amass, Assetfinder, etc., but I prefer Amass and Subfinder since they give better results than any other tool I have used. Both have their pros and cons: Amass, with or without API keys, provides more results than Subfinder, but Subfinder is much faster and lighter on resources.

To find subdomains using Subfinder or Amass, use the below commands:

subfinder -d example.com -all -o subdomain.txt

amass enum -passive -d example.com
amass db -names -d example.com -o out.txt

The first amass command does not write its output to a text file; for that, we use the second command.

It's important to fill in as many API keys as you can in the config files of both tools.
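For Subfinder, the keys go in its YAML provider config; below is a sketch of the format (the path is Subfinder's default config location, and the key values are placeholders). Amass keeps its keys in its own separate config file.

```yaml
# ~/.config/subfinder/provider-config.yaml -- values below are placeholders
shodan:
  - YOUR_SHODAN_API_KEY
securitytrails:
  - YOUR_SECURITYTRAILS_API_KEY
virustotal:
  - YOUR_VIRUSTOTAL_API_KEY
```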

After running the above, most bug hunters would directly run httpx to filter out live domains, which is a mistake: by default, httpx only probes ports 80 and 443, so it will miss any domain serving a website on a non-standard port such as 8000.

To filter out live domains, we should ideally first resolve all domains using a tool such as dnsx, massdns, or puredns. All of these are fast and support custom lists of resolvers. Choose whichever you like, preferably one that is well maintained.

Take the subdomain.txt file we got from the tools above and pass it to whichever resolver tool you choose.

puredns resolve subdomain.txt -r resolvers.txt --write live_subdomain.txt

massdns -r resolvers.txt -o S -w live_subdomain.txt subdomain.txt

dnsx -silent -r resolvers.txt -l subdomain.txt -o live_subdomain.txt

You might be wondering what the resolvers.txt file is in every command. These tools query DNS servers to check whether each domain resolves. By passing resolvers.txt, we tell them which DNS servers to use. You can run these tools without resolvers.txt, since each has a default list of DNS servers to query. You can also supply DNS servers of your choice or use a public list such as Trickest Trusted Resolvers.
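If you want to start from a tiny hand-picked list rather than a public one, a minimal sketch (the IPs here are the well-known public resolvers from Cloudflare, Google, and Quad9; swap in any trusted list you prefer):

```shell
# Build a minimal resolvers.txt from well-known public DNS servers:
# Cloudflare (1.1.1.1), Google (8.8.8.8), Quad9 (9.9.9.9)
printf '%s\n' 1.1.1.1 8.8.8.8 9.9.9.9 > resolvers.txt
```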

At this point, we have live domains from our passive sources. Next, we will brute-force subdomains. This does not have to happen in sequence - you can run the passive and active tasks in parallel. Several tools can help with brute-forcing, such as shuffledns, puredns, etc. To brute-force subdomains you will need a wordlist; you can search GitHub and find one yourself, or use the wordlist by Trickest or the wordlist by Jhaddix.

shuffledns -d example.com -w wordlist.txt -r resolvers.txt -o brute_subdomain.txt

puredns bruteforce ~/path/to/wordlist.txt example.com -r resolvers.txt

Both tools resolve domains while brute-forcing, so you won't have to resolve the results again.

Now we will generate permutations of the domains found from passive sources. The brute force above used a generic wordlist that may not contain words meaningful in the context of the target, and it only appended words to the root domain. With permutations, we build new candidate subdomains from combinations of the subdomains already gathered from passive sources, optionally mixed with your own wordlist.

Permutation combines words from passive sources in new ways. For example, if you found "admin" and "portal" during reconnaissance, permutation could try "adminportal" or "portaladmin". This helps discover subdomains that simple brute forcing may miss. Tools that can help with permutation include regulator, altdns, gotator, etc.
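To make the idea concrete, here is a toy sketch of what permutation tools do under the hood ("admin", "portal", and the domain are illustrative; real tools also add prefixes, separators, numbers, and more depth):

```shell
# Combine words harvested from passive recon into new candidate subdomains.
# "admin" and "portal" stand in for words pulled from real passive results.
words="admin portal"
for a in $words; do
  for b in $words; do
    # skip identical pairs; emit concatenated candidates like adminportal
    if [ "$a" != "$b" ]; then
      echo "${a}${b}.example.com"
    fi
  done
done > candidates.txt
```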

python3 main.py -t example.com -f subdomains.txt -o perm_subdomain.txt

altdns -i subdomains.txt -o perm_subdomain.txt -w wordlist.txt

gotator -sub subdomains.txt -perm wordlist.txt -depth 2 -numbers 5 > perm_subdomain.txt

After generating candidates with the above commands, you will again have to pass them to the resolvers to get live domains.

While running active reconnaissance, you will understand why I said it is very time-consuming and resource-heavy. This active process can take days to complete, and this is the reason why I hate it.

Finally, combine all the live domains we got from passive sources, brute forcing, and permutations. The game is not over yet! If you wish, or have a lot of time, you can recursively repeat the process to find third-level domains for the gathered subdomains.
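Combining boils down to a sort -u over the result files; a sketch with stand-in contents (in practice the three input files are the resolver outputs from the passive, brute-force, and permutation steps):

```shell
# Stand-in result files -- in practice these come from the resolver runs
printf 'a.example.com\nb.example.com\n' > live_passive.txt
printf 'b.example.com\nc.example.com\n' > live_brute.txt
printf 'c.example.com\n'                > live_perm.txt

# Merge and deduplicate everything into one final list of live subdomains
cat live_passive.txt live_brute.txt live_perm.txt | sort -u > all_live.txt
```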

Service Enumeration on Live Subdomains

Most bug hunters would run httpx here to get live subdomains, but as explained previously, httpx by default only probes ports 80 and 443.

Here we will perform port scanning on all the live subdomains we just found, using naabu for this example. Naabu supports excluding CDNs while scanning for ports, which saves a lot of time: CDNs mostly serve on ports 80 and 443, so port scanning them would just waste resources. I suggest scanning only the top 1000 ports, but you can scan all ports if you wish.

naabu -list live_subdomain.txt -tp 1000 -exclude-cdn -o open_ports.txt

naabu -list live_subdomain.txt -p - -exclude-cdn -o open_ports.txt

We will pass this result to httpx to find which open ports are running web services. I would suggest you also run httpx again on live_subdomain.txt, since naabu excludes all CDN hosts and there may be some false positives too.

httpx -list open_ports.txt -silent -o http1.txt

httpx -list live_subdomain.txt -silent -o http2.txt

cat http1.txt http2.txt | sort -u | tee -a http.txt

For ports that are not running a web service, you can scan them further with nmap to figure out what they are running. We will not focus on those ports for now, since we are concentrating on web recon.

In the next part, we will see what to do with all the URLs we got from httpx and how to analyze them to find vulnerabilities. This is an important and exciting stage, as we dig into these URLs to uncover potential bugs and bounties! Stay tuned for the next part, where I will walk through the process in simple terms with clear examples.
