How do you recon a website?

This is how I would recon a website: I would gather its subdomains and directories. Gathering subdomains is not a problem, since there are plenty of tools and search engines for enumerating them. The problem arises when I want to gather the directories of a website.

I have been thinking about using automated tools and user-directed tools to gather the directories of the website. But automated tools simply brute-force the directories, which sends thousands of HTTP requests, and I think that is easily traceable and risky. Without automated tools, though, I wouldn't be able to gather enough directories.
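For context, this is roughly what such a brute-forcing tool does under the hood (a sketch only; the target URL and wordlist path are placeholders), which is why the request volume gets large so quickly:

```python
# Rough sketch of what a directory brute-forcer does internally.
# "https://target.example" and "wordlist.txt" are placeholder values.
import requests

with open("wordlist.txt") as f:
    words = [line.strip() for line in f if line.strip()]

# A common wordlist easily has 10k+ entries, so this loop alone
# generates thousands of requests against the target.
for word in words:
    url = f"https://target.example/{word}"
    r = requests.get(url, allow_redirects=False, timeout=5)
    if r.status_code != 404:  # anything but 404 is worth a second look
        print(r.status_code, url)
```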

I would like to hear how you usually carry out recon of a website, to get some insights :D

Since harvesting a website's directories is mostly guessing, it is almost impossible to make it completely quiet. A few things that come to mind:

- Directory brute-force with rate limiting: use a brute-force tool, but throttle it to reduce traffic and log noise. Tools like Gobuster and Dirsearch let you control the request rate (see the sketch below).
- Route the requests through a proxy so the traffic looks more like it comes from different locations.
- Google Dorks: specific search queries can surface directories and files on a website. This doesn't involve making direct requests to the server, so it is relatively discreet.
- Crawling: use a tool such as Burp Suite's Spider or wget to recursively crawl the site. This can be done more selectively and quietly than brute-force enumeration of directories (second sketch below).
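To illustrate the rate-limiting and proxy ideas together (this is not how Gobuster or Dirsearch implement it internally, just the principle), here is a minimal sketch with a fixed delay between requests and an optional proxy. The target, wordlist, delay, and proxy address are all placeholder assumptions:

```python
# Sketch of slow, rate-limited directory enumeration through a proxy.
# TARGET, the wordlist, DELAY, and PROXIES are placeholder values for a
# host you are authorized to test.
import time
import requests

TARGET = "https://target.example"
DELAY = 2.0  # seconds between requests: low and slow
PROXIES = {"https": "socks5://127.0.0.1:9050"}  # placeholder local SOCKS proxy

with open("wordlist.txt") as f:
    words = [line.strip() for line in f if line.strip()]

session = requests.Session()
session.proxies.update(PROXIES)

for word in words:
    url = f"{TARGET}/{word}"
    try:
        r = session.get(url, allow_redirects=False, timeout=10)
    except requests.RequestException:
        continue  # skip network hiccups, keep the pace steady
    if r.status_code != 404:
        print(r.status_code, url)
    time.sleep(DELAY)  # fixed delay; random jitter could be added on top
```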

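And for the crawling approach, a minimal sketch that follows only same-host links found in page HTML, so the request count is proportional to the site's actual link structure rather than a wordlist. The start URL is a placeholder, and I'm assuming the requests and beautifulsoup4 packages here rather than Burp or wget:

```python
# Minimal same-host crawler: collects paths by following links instead of guessing them.
from urllib.parse import urljoin, urlparse
import requests
from bs4 import BeautifulSoup

START = "https://target.example/"  # placeholder start page
host = urlparse(START).netloc

seen, queue = set(), [START]
while queue:
    url = queue.pop()
    if url in seen:
        continue
    seen.add(url)
    try:
        r = requests.get(url, timeout=10)
    except requests.RequestException:
        continue
    if "text/html" not in r.headers.get("Content-Type", ""):
        continue  # only parse HTML pages for further links
    for a in BeautifulSoup(r.text, "html.parser").find_all("a", href=True):
        link = urljoin(url, a["href"]).split("#")[0]
        if urlparse(link).netloc == host and link not in seen:
            queue.append(link)

# Print the unique paths discovered by crawling.
for path in sorted({urlparse(u).path for u in seen}):
    print(path)
```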