Real Penetration Tests: Equalizers and Dirty Tricks

Firstly, due to my insane schedule, I do not get to interact with all of you as I would like.

Let me take this opportunity to state that many of you are doing incredible work with the content of your posts; I do read just about everything released here on the Big 0; I am proud to be here among such talented people.

My goal for this year is to really try and be a more active member of the wider InfoSec/Hacker community; this is a small step in that direction…

1) Background

This essay will examine an external, black-box penetration testing engagement I ran as a solo tester against the regional headquarters of a Fortune 500 company (the target network itself could be called a medium-sized enterprise of around 300-500 hosts, not counting servers and web appliances).

Ultimately, I gained ingress through exploiting a gap in the perimeter of the client’s network; this gap was created by a small IT/Web Dev/Web Marketing company the regional office had contracted.

Some of my career has been spent working under management that did not believe or trust in what I do. They needed me because important clients and/or governing/certification bodies demanded they hire someone to perform penetration tests.

Call me a sucker, but I really care about what I do and I believe in trying to inspire change with my work.

In order to inspire that change in the face of general indifference, I quickly discovered that only full exploitation of targets could combat management’s apathy, draw needed attention to vulnerabilities and justify my employment to the powers that be.

Anything less would be minimized or excused; unless I gained a shell or accessed the target (at a minimum), I would need to debate multiple parties expressing magical thinking concerning the effectiveness of firewalls and AV/AM.

With little understanding of penetration testing, management often influenced or decided on the parameters, objectives and/or scope of the engagements I undertook.

This meant I was often short on resources, even though I need little to ply my trade; at Schneider Electric I had a Harvard degree’s worth of security tool licenses (Nexpose, Metasploit Pro, Nessus, etc.) left over by my predecessors that I never touched.

I hate the very idea of using heavily automated tools for penetration testing. My belief is that a tool like Nessus should be used in audits and vulnerability assessments by IT, whereas my job is to find and exploit those holes IT is not likely to ever find themselves.

The resource I was routinely denied was one penetration testers often have little of anyway: engagement duration, the amount of company hours allotted to a penetration test.

During the term of my employment when the penetration test detailed herein was undertaken, the work (or billed) hours I was allotted for each engagement were short.

The objectives established in the scope of engagements during this period rarely changed: gain ingress/a foothold within the LAN, exploit a host or hosts within scope, escalate privileges and establish persistence within the network(s).

I was always behind the eight ball duration-wise, so I could not afford much time for backtracking, jumping down every rabbit hole or cherry-picking just the right tool.

I learned to treat each engagement as if it were a living thing with its own pulse and rhythm. I was along for the ride and used every trick I could to remain atop the avalanche.

Though I hid it so as not to antagonize management, I loved every second of those engagements.

“Equalizer” (I also think of them as/call them “Dirty Tricks”) is the name I use for software that is an improvised offensive or defensive staple of my engagement arsenal, even though it is not a security tool.

As for the name “Equalizers” or “Dirty Tricks”, I had thought about the Joker’s lapel flower, a mundane gag made dangerous (spraying acid, poison gas, etc.) by the Joker’s use of strategy, tactics, intelligence and instincts (insanity?).

After all, the Joker’s lapel flower isn’t really a long-range weapon; it likely wouldn’t be very accurate, it would be slow to deploy and it would need a direct hit to specific areas of a target’s physiology to be effective.

The Joker must use guile to counteract the limitations above: perhaps faking an injury to bring an enemy/victim within range, or retreating into a closet to restrict an enemy/victim’s evasive action.

The ideas, strategies and tactics behind each Equalizer/Dirty Trick are more important to me than the tools themselves (and will be explained in the final section).

Weaponizing mundane software with strategy/tactics without changing any code speaks to the “thinking around corners” mindset that allows me to react, flow and improvise to the circumstances of an engagement…

My penetration testing philosophy is that I am engaged in a contest of recognizing/utilizing resources and advantages.

The better I become at using every resource at my disposal, the better I will become at harvesting advantages from engagement environments.

These posts will include images from a real-world penetration test I’ve undertaken where these tools or techniques played a major part: maybe the tool saved me substantial time, spared me frustration or yielded results that made an impact on the meetings, presentations and reporting afterward.

2) The Engagement: Key Tactics, Tools and Methodology

1. DownThemAll! - a browser extension for Firefox released under the GNU General Public License. This extension can function as an advanced download manager. It also allows a user to download or list all of the links, images or embedded objects contained in a website/web page, with both HTTP and FTP protocols supported.
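To give a rough sense of what the extension is doing under the hood, here is a minimal Python sketch of that same link/media enumeration. The URL is a placeholder, and this assumes the requests and beautifulsoup4 packages; it is an illustration of the idea, not the extension’s actual code:

```python
# Minimal sketch of DownThemAll!-style enumeration: pull every link,
# image and embedded object from a single page. Placeholder target URL;
# requires the requests and beautifulsoup4 packages.
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

BASE = "https://www.example.com/"  # stand-in for the target site

resp = requests.get(BASE, timeout=10)
soup = BeautifulSoup(resp.text, "html.parser")

found = set()
for tag, attr in (("a", "href"), ("img", "src"), ("script", "src"),
                  ("link", "href"), ("embed", "src"), ("object", "data")):
    for node in soup.find_all(tag):
        target = node.get(attr)
        if target:
            found.add(urljoin(BASE, target))  # resolve relative paths

# Sorting the results makes shared directory structures jump out,
# which is exactly what drew my eye to the /cms entries later on.
for url in sorted(found):
    print(url)
```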

2. HackBar - In my humble opinion, HackBar is a must-have tool for manually testing web applications. The most basic functionality of this extension allows you to toggle on/off a space below your browser where you can construct and execute (or copy and paste) variations of a URL.

Image directly above: Blue == HackBar toggle, Purple == HackBar dropdown, Red == DownThemAll! options/management interface(s)
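The spirit of what HackBar does with URLs can be sketched in a few lines of Python; the paths and parameters below are purely illustrative, not those from the engagement:

```python
# Sketch of HackBar-style URL construction: take a base URL and build
# variations of it (appended paths, swapped query strings) before
# sending anything. All paths/parameters here are illustrative.
from urllib.parse import urlencode, urlparse, urlunparse

base = "https://www.example.com/index.htm"

def with_path(url: str, new_path: str) -> str:
    """Return the URL with its path replaced and query cleared."""
    parts = urlparse(url)
    return urlunparse(parts._replace(path=new_path, query=""))

def with_query(url: str, **params) -> str:
    """Return the URL with a new query string."""
    parts = urlparse(url)
    return urlunparse(parts._replace(query=urlencode(params)))

print(with_path(base, "/cms"))         # https://www.example.com/cms
print(with_query(base, page="users"))  # .../index.htm?page=users
```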

So to start, here is the website of the target organization; I needed ingress, and ingress is often gained through a web application or website on the perimeter.

Image directly below: Prior DNS enumeration with tools like Fierce, Dig and NSlookup cross-referenced with online sources (to be covered in another post) showed the site below as being hosted by an IP within the range of the target domain (and within the scope of the engagement).
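The cross-referencing step itself is simple enough to sketch. Here is a hedged Python example (hypothetical domain and scope range, using the dnspython package) of resolving candidate names and keeping the in-scope hits:

```python
# Sketch of the resolution/cross-reference step: resolve candidate
# hostnames and keep those that land inside the in-scope range.
# Hypothetical domain and scope; requires the dnspython package.
import ipaddress

import dns.resolver

SCOPE = ipaddress.ip_network("198.51.100.0/24")  # placeholder scope
candidates = ["www.example.com", "mail.example.com", "dev.example.com"]

for name in candidates:
    try:
        answers = dns.resolver.resolve(name, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        continue  # name does not resolve; move on
    for rdata in answers:
        ip = ipaddress.ip_address(rdata.address)
        if ip in SCOPE:
            print(f"{name} -> {ip}  (in scope)")
```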

I utilize DownThemAll! to begin my enumeration of a target’s web presence (my other methods will be covered in a future post) in search of a foothold.

I will do this while other methods of enumeration I put in play (Dirbuster, Nikto, OneTwoPunch for certain IP/Masscan for full CIDR, Sn1per if time is way too short) work themselves out.

Depending on the composition of the hosts/appliances on a target LAN, of those aforementioned/similar tools, I may only run an ultra mellow, tightly focused port scan.

This is the norm when I engage targets in the industrial/energy sectors, as some network appliances responsible for critical infrastructure (PLCs, SCADA, etc.) can be disrupted or made to crash under the force of even a -T2 SYN (-sS) Nmap scan.
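The same go-slow, stay-narrow idea can be expressed without Nmap at all; below is a minimal sketch of a low-and-slow TCP connect check against a short port list (placeholder host, Python standard library only):

```python
# Minimal low-and-slow port check: full TCP connects (no raw packets),
# a tight port list, and a generous pause between probes so fragile
# appliances are not hammered. Placeholder host; adjust to scope.
import socket
import time

HOST = "198.51.100.10"           # placeholder in-scope host
PORTS = [21, 22, 80, 443, 8080]  # tightly focused list, not 1-65535
DELAY = 5.0                      # seconds between probes; tune upward
                                 # for fragile ICS/SCADA environments

for port in PORTS:
    try:
        with socket.create_connection((HOST, port), timeout=3):
            print(f"{HOST}:{port} open")
    except OSError:
        pass  # closed or filtered; move on quietly
    time.sleep(DELAY)
```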

In a Red Team-type engagement, I tend to only run the tools I mentioned above against external facets of a target network. This way the scans are easier to obfuscate, in keeping with the methodologies of real-world attacks: the activity gains cover from the cacophony that is traffic on today’s internet, plus further obfuscation via methods such as proxychains, VPN chains, Tor (TCP-based, or TCP/UDP-based after advanced prep), I2P and multi-hop SSH.
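As one concrete illustration of that layering, pushing tool traffic through a local Tor SOCKS proxy from Python takes a single line of configuration. This sketch assumes a Tor client listening on 127.0.0.1:9050 and the requests[socks] extra installed; it illustrates the idea rather than my full setup:

```python
# Sketch: push HTTP enumeration traffic through a local Tor SOCKS proxy.
# Assumes a Tor client on 127.0.0.1:9050 and `pip install requests[socks]`.
# The socks5h scheme makes DNS resolve through the proxy as well,
# so lookups do not leak outside the tunnel.
import requests

proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

resp = requests.get("https://www.example.com/", proxies=proxies, timeout=30)
print(resp.status_code, len(resp.content))
```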

To my mind, other than daily training/self-improvement for performance, enumeration is the most important part of an engagement.

Time permitting, I will manually investigate/probe findings disclosed by a tool during enumeration. Regardless of how a pain point/point of interest finds my attention, I review it through manual methods, comparing/contrasting/cross-referencing my findings against the initial results.

Image directly above: DownThemAll! is run against the target site, producing a dropdown listing 200+ files/media on the site within seconds. By examining the results and the directory structures of where the results reside, I get an instant, comprehensive look at possible points of interest.

Searching through the list of files/links/embedded media present on the site, I am most interested in the directories/directory structure where these resources reside (remember, DownThemAll! can detect resources reachable by HTTP and FTP)…

Image directly above: Lower on the list, CMS directories are found thanks to DownThemAll! detecting the .htm files related to the pages…

When a CMS directory is discovered (and you can navigate it rather than getting served a 403 Forbidden), it is worth enumerating for instances of data leakage, at minimum.

Image directly above: If a CMS directory (or any directory, really) contains these or similar/related words, my attention is grabbed.

  1. “sharing.htm” - Hints at controls/site/front-end functionality capable of affecting/manipulating media to interact with back-end resources.

Forms/menu based I/O often enables vulnerabilities/exploitable configurations where prefab websites are concerned (PHP and JS tend to be a force multiplier of this).

“sharing.htm” hints at interactions between different resources/levels of the development stack with a means of content storage/data transfer originating on the target host.

  2. “users.htm” and “myaccount.htm” - Hint at a high probability that credentials are available somewhere in the back end, with connectivity/interaction with form/menu-based I/O in the front end.

Image directly below: Each of these .htm examples hints at the possibility that this site has forms/menus I can attempt to abuse outside of an Administrator Panel. If everything on the site is fully updated and configured to best practice, gaining access to a web-based administrative console (examples: WordPress Admin Panel, cPanel) is likely a time/resource sink.

The .htm examples above hint that a search for error logs on the site may also be worthwhile; if the CMS directories (and especially sensitive portions of them) are not hidden/unreachable to visitors, then a configuration issue making error logs searchable/viewable (while displaying sensitive data such as failed logins) is not out of the question.
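To make that kind of check concrete, here is a hedged sketch: request a handful of candidate paths and note which return 200 versus 403 (the candidate list and base URL are illustrative, not those from this engagement):

```python
# Sketch: probe a few candidate directories/files and record whether
# each is reachable (200), forbidden (403) or something else entirely.
# The candidate list is illustrative; requires the requests package.
import requests

BASE = "https://www.example.com"  # stand-in for the target site
CANDIDATES = ["/cms/", "/cms/users.htm", "/cms/sharing.htm",
              "/logs/", "/error_log", "/cms/error.log"]

for path in CANDIDATES:
    resp = requests.get(BASE + path, timeout=10, allow_redirects=False)
    if resp.status_code == 200:
        print(f"[+] {path} reachable -- worth enumerating")
    elif resp.status_code == 403:
        print(f"[-] {path} exists but is forbidden")
    # anything else (404, 301, ...) is noted and moved past
```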

Image directly above: I toggle HackBar on, choose the “Load URL” option (which populates the blank dropdown field with the URL of that browser tab), add “/cms” to the end of the URL, then choose the “Execute” option.

The CMS belonged to an IT/Web Dev company hired by the client to care for their web presence. Images/captions detailing what I found after landing in/enumerating the CMS directory/directories of the targeted site mostly speak for themselves:

Images directly above and below: It appears the IT/Web Development company servicing this website for the regional branch of a Fortune 500 company leaves a user logged into the CMS at all times.

Image below and those that follow it show data leakage perfect for a myriad of password attacks and social engineering attacks:

Image directly above with close-up of the image directly below: I watched up-to-the-minute updates to the status of company employees, tasks and events in real time by accessing the “Notifications” tab.

Images directly above with close-up of the image directly below: Reading portions of e-mails from the e-mail tab; these were a few minutes old, complete with full names.

Image directly below: Employee names/positions, a rating of time spent on the site, with graphs charting developer tasks underway.

Image directly above: Employee names, company e-mail addresses, company cell phone numbers, the status of their current project for this client and “Notes”.

The “Notes” section is a sort of short bulletin board where employees save messages concerning the status of the task/project they are responsible for providing to the client.

I collected/saved (copy and pasted) all communications I found throughout the client site/client network, both employee to employee and employee to client.

Almost all of the employees that maintained a presence on the CMS of the client’s site also used multiple public-facing social media accounts; while attacking these was out of scope, collecting data from them was not, as long as the accounts were set as visible to the public.

All of the employees had signed releases for the client months prior. These releases allowed the client and entities contracted by the client the right to collect this data for the purposes of security testing, audits and criminal investigations.

The employees were notified that by setting personal or business social media accounts to private, those accounts became off limits for any manner of data collection relating to security testing.

All of the employees with public social media accounts posted textual content regarding their personal and work lives on a regular basis.

I collected/saved (copy and pasted) the varying lengths of textual communications on relevant social media accounts/client networks whenever they interacted with a fellow employee or the client/persons employed by the client.

This allowed me to study/collect nuances of their written communications through cross referencing them:

These were labeled as:

A) employee to employee (at work) positive, employee to employee (at work) negative, employee to employee (at work) neutral

B) employee to employee (leisure) positive, employee to employee (leisure) negative, employee to employee (leisure) neutral

C) employee to client (work) positive, employee to client (work) negative, employee to client (work) neutral

D) employee to client (leisure) positive, employee to client (leisure) negative, employee to client (leisure) neutral

This system allowed me to catalogue the responses/nuances of each individual employee when interacting with the others at work, when they were not working (leisure), and when interacting with the client while working or off the clock, whether doing any of the above in a good, stressed or neutral mood.

I could label content as work or leisure based on the time stamps on social media/site textual content (e-mail, Notes, posts), and sometimes on that content itself.

This system allowed me to organize/amass my collection for rapid deployment of banter as a means of impersonation toward exploitation (when combined with information from the data leaks I had found).
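For illustration, here is a sketch of what such a catalogue might look like in code; the records and names are invented, and the labels mirror the A-D scheme above:

```python
# Sketch of the catalogue: each captured message is tagged with who is
# talking to whom, the context, and the tone, so matching banter can be
# pulled quickly when drafting an impersonation. Records are invented.
from dataclasses import dataclass

@dataclass
class CapturedMessage:
    author: str
    audience: str   # "employee" or "client"
    context: str    # "work" or "leisure"
    tone: str       # "positive", "negative" or "neutral"
    text: str

corpus = [
    CapturedMessage("alice", "employee", "work", "negative",
                    "This was supposed to be pushed last week..."),
    CapturedMessage("alice", "client", "work", "neutral",
                    "The template revisions are attached."),
]

def matching(audience: str, context: str, tone: str):
    """Pull every sample matching a target situation."""
    return [m for m in corpus
            if (m.audience, m.context, m.tone) == (audience, context, tone)]

# Example: how does this author sound when annoyed with a co-worker?
for m in matching("employee", "work", "negative"):
    print(m.author, "->", m.text)
```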

Often, social engineering rides entirely on causing a target to react in such a way that they trigger the method of exploitation you have deployed against them.

By developing a rough approximation of multiple employees’ reactions under calm and stress, I raised the probability of successfully deploying a social engineering attack.

For example: sending a spearphishing e-mail that capitalizes on a mistake an employee had made the previous week, one that a co-worker had reacted to with annoyance.

By repeating much of that message in the e-mail (using the annoyance of one employee against the guilt of another) while adding “I need you to fix this; I am catching hell for your mistake…”, I may be able to numb the target’s better judgement for the moments it takes to deploy a payload (while the negative emotions involved have a higher probability of limiting back-chatter regarding my message).

The initial scope of the engagement had social engineering attacks off the table. However, the contract governing the engagement stated that the client could add phishing/spearphishing to the scope at any time (which would automatically extend the billed hours/duration of the engagement) as an added, bonus objective.

Image directly above: Team schedule for the month with team To-do list populated with tasks they were obligated to provide for the client.

Image directly below: Captured templates for the official digital/paper letterhead that had been created by the contractors for the client; some versions of the templates had been prepared for automated e-mail deployment by the client.

Image directly below: The official invoice template belonging to the client. Created by the contractor, the template was utilized by the client anytime they billed for payment, whether by digital or physical media.


Notice HackBar; during an engagement, it is important that I have as much control over my traffic as possible. Editing a URL without HackBar could lead to attention I don’t want: what if, while I was editing the URL in my address bar, it was executed by error, tripping a key term detected by Fortinet, which in turn e-mails a member of IT?

Image directly below: Open tickets with employee names and a description of the issue(s) listed with the state of each ticket.

Image directly above and below: The Account Details page had the account data of the user logged into the CMS populating the form data: Full Name, Email Address, Username, Password, Role, Phone Number and Website.

Important considerations regarding the CMS:

A) When I connected to the contractor’s CMS while enumerating our mutual client’s website, I instantly gained Administrative access, whereas I had no privileged access at all before.

While this privilege looked to be pruned on the CMS (I could access almost everything in a read fashion, but could not change much in a write fashion), this was still a dangerous game the developers were playing.

B) This user account seemed to be permanently logged into the CMS; per the “My Account Details” field, this account was stated to have an Administrator role/level of privilege.

C) This default Administrator account seemed to be the default level of privilege a user gained when accessing the CMS (at least until a user entered their unique employee credentials).

D) I counted over 20 unique usernames on the CMS. It is likely that the CMS was left in this condition so that the contractor’s employees (and possibly the client) could keep track of the progress of projects undertaken by other employees, even if they were having some temporary credential issue.

E) The contractor’s CMS was not expressly within scope, so I couldn’t attack it directly.

However, I could enumerate the contractor’s web presence; after doing so (which will be covered in another post), I discovered that the contractor in question was fond of a Sitepoint-like, multi-site/domain CMS solution.

I think it is likely this CMS is reachable from, and reaches into, multiple networks/domains in the client network…

Having opened DownThemAll!, I noticed two things (outlined in red):

  1. The name of this CMS page is “contractor WebApp”; my gut said there was a high likelihood that the CMS was connected to multiple domain/network segments belonging to the client.
  2. Using DownThemAll! once more, the dropdown reveals multiple CMS entries, then a “www.client.com” entry, which I hadn’t seen yet when running the extension on the CMS.

So I navigate to it…

Image directly above - The first thing that stands out is that the SSL/TLS certificate does not look like it is issued by a Certificate Authority; the first site did not have this issue…

Image directly below: I deploy DownThemAll!, and many results radically different from the first search on the original home page are found, the first being a link that equates to https://0.0.0.0/~clientsite.com/ (IP redacted).

Directly below: Using HackBar, we head there…

Directly below: We use HackBar to visit http://0.0.0.0 (IP redacted)…

Looks like we found our way out of the web directory responsible for the site; there is no longer any HTTP/HTTPS URL in the address bar, only the IP.

Let’s try something back on the root directory. We go back to http://0.0.0.0/~clientsite.com and use DownThemAll!…

We have two more entries, for https://clientsite.com/ and http://clientsite.com/, except they have the icons for a Windows binary/executable, which we haven’t seen yet.

Image directly below - Also, we have an entry for https://clientsite.com/internal/corp2.htm; we head there…

Image directly below - We end up in an employee’s internal corporate account…from 2006 (at least a decade old at that point), as evidenced by the latest entry in the company calendar…

Image directly above - DownThemAll! shows some interesting results; notice that HTTPS is back in play, plus a bunch of unknown documents, some labeled Outlook 2000, XP or Server 2003…

Image directly below - After checking one of the links, I find this…

3) Closing Statements

So what happened?

We will follow the second half of this penetration test at another time. What happened is that I had utilized the contractor’s CMS to jump onto an internal computer that had been used by the client’s former helpdesk/IT staff; at that moment, the CMS was in no way secured against trespass.

The contractor stated that they had counted on multiple security technologies within the network and a WAF on the host in question, all of which “failed” to secure the CMS during the engagement.

The other “www.client.com” host I ended up on was on the network used internally within the headquarters by the client’s employees.

Thus, I utilized the CMS as a pivot to gain ingress onto a host on the inner perimeter of the LAN/Intranet.

The webpage without the expected SSL/TLS certificate had been a host where a mockup of the site had been constructed for development.

I began outside the network and ended this essay on a host running a mess of outdated software inside corporate headquarters…

The example images I’ve shown were caught by DownThemAll! and searched/expanded via HackBar during a real penetration test.

The images I’ve shown were by no means all of those pertaining to DownThemAll!, the CMS or the website during this specific engagement.

However, I’ve kept you too long already, so let’s wrap up with two final points.

First: If you happened by the LinkedIn of some of the employees belonging to the IT/web developers in question, you would have seen the words “security expertise” thrown around quite a bit.

This is just one reason why penetration testing is important: we are the acid test that shows a company the true worth of what it is paying for.

A security incident can establish this truth as well, though this revelation will almost always be far more painful and expensive.

The internet is now a brave place; untested things left in this space will be tested by someone, which may end up deciding ownership preemptively.

Second: the reason I believe DownThemAll! is worth this level of exhibition is that far more often than not, the extension provides me with many significant options/functions with very little time sink.

During many engagements, time is the main nemesis of the pentester.

In the engagement documented by this article, I was on the website with an open browser window just prior to deploying DownThemAll!. Perhaps three to five seconds after hitting the toolbar icon, the extension populated results; I noticed the entries for the CMS directory perhaps ten to fifteen seconds later.

In less than thirty seconds, DownThemAll! provided multiple paths leading to viable methods for threatening the target network: initially, password attacks, configuration issues with the site (such as the CMS) and social engineering attacks (not in scope, though my pentest report analyzed the risks).

With little investment, DownThemAll! provides versatile, multi-applicable, strategic intelligence with easily identifiable value.

Multi-applicable strategic intelligence - At minimum, discovery of CMS directories led to myriad resources capable of weaponization for reasons of exploitation and/or resources capable of repurposing for tactics that supported those purposes.

This included, but was not limited to: employee names with job titles, company e-mail/invoice/inquiry templates, phone numbers (multiple per employee in many cases), account login credentials (username/password, password length, e-mails/phone numbers attached to accounts), target intelligence (employee/team schedules, employee/team To-do lists, messages, ticketing system, etc.) and suspect site configurations.

Easily identifiable value - The value of the data provided by DownThemAll! is easy to identify, requiring little time or energy to analyze or apply effectively.

The data displayed by the extension is easy to process as findings are conveyed in a simple manner: a vertical column in descending order with few controls, colored white and gray.

I have found that this presentation helps me with processing the data when comparing, contrasting or applying critical thinking skills (example: prioritizing which of two web servers to enumerate first based on the directories, files and links present in each server’s DownThemAll! table).

Furthermore, I believe the presentation aids me in utilizing the data in an improvisational, spontaneous and/or creative manner.

Versatile - The tool can be applied with ease and haste. Unless changes are being made through the controls or management interface (which I very rarely find a need to adjust), the longest prep for deployment of the extension involves opening a web browser to the website/web page in question (and executing the add-on by way of the toolbar icon).

Thus, it should neither interfere with nor impede other tools/strategies in play; I have yet to experience any interactions between the extension and another tool (including other browser extensions).

The intended functionality of the extension allows for downloading all manner and number of files, links and embedded content (all three can be downloaded in any combination of ways: separately, compiled as a text-based list, downloaded from a list, etc.) off of a website, web page, browser tab or set of browser tabs.

The functions in the paragraph above can aid in capturing data for later reporting, for finding otherwise hidden links/files/embedded content on a site and many other tasks during/after an engagement.

Another bonus with DownThemAll! is actually its earliest key function: it can serve as a more conventional download manager. This includes multi-part download (quickened downloads achieved by receiving the data in pieces and assembling them when the download ends) and Metalink (downloading data/checksums for one file from multiple URLs at once), while being able to stop, restart and pause downloads.
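Underneath, the multi-part trick is plain HTTP: ask the server for byte ranges and stitch the pieces back together. Here is a minimal sequential sketch of the mechanics (hypothetical URL; a real download manager fetches the parts in parallel and verifies them):

```python
# Sketch of multi-part download: split the file into byte ranges via the
# HTTP Range header and reassemble in order. DownThemAll! fetches parts
# concurrently; this sequential version just shows the mechanics.
# Hypothetical URL; requires the requests package and Range (HTTP 206)
# support on the server.
import requests

URL = "https://www.example.com/big-file.iso"  # placeholder
PARTS = 4

size = int(requests.head(URL, timeout=10).headers["Content-Length"])
chunk = size // PARTS

with open("big-file.iso", "wb") as out:
    for i in range(PARTS):
        start = i * chunk
        end = size - 1 if i == PARTS - 1 else start + chunk - 1
        resp = requests.get(
            URL, timeout=30,
            headers={"Range": f"bytes={start}-{end}"},
        )
        out.write(resp.content)  # parts arrive in order here
```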

Sometimes you need to download something during an engagement, but the connection speed is terrible, whether by browser or terminal.

As a download manager, DownThemAll! does seem to speed things up more than I remember similar browser extensions or software doing. It does this while allowing you to focus on the engagement rather than that download that keeps failing.

Links:

DownThemAll!: https://addons.mozilla.org/en-US/firefox/addon/downthemall/

Note: Firefox over version 51 cannot use DownThemAll! (or multiple other add-ons), so plan accordingly for security issues related to outdated browsers.

HackBar: https://addons.mozilla.org/en-US/firefox/addon/hackbar/


Hi @maderas,
It’s very nice to hear from you after a while! Very good post providing some very interesting insights into your work.

Which is the vulnerability you encounter most in your pentests? Misconfigured Web-Apps?

P.S.: If you want to win a BinaryNinja license, you might want to add the “freestylefebruary” tag to your post. I’m sure you are one of the top candidates for winning it!

Best regards,
SmartOne

Hello SmartOne and thank you.

To be honest, I only really began working on penetrating web applications because that is becoming the best way to gain ingress.

I am more of a “grasp a bunch of things and get what I need” guy; I’m like a half-blind lab rat that always finds the cheese…

There are people here with individual skills way better than mine.

I am just obsessive and work really, really hard.

I would say the web vuln I find most isn’t a conventional vuln: it is companies losing their grasp on their IP: no or too few security employees (often an ill-equipped CISO who is there/promoted there out of need or for the pay), bad documentation of company IP, etc.

Misconfigured applications, definitely; that IT member who jury-rigs something so the load balancers work, or who didn’t have time to update plugins on the day or week a vuln drops…

As for what I am finding most…I would say a bunch of shit that comes with companies having a lot of IP. Many do not have any or many dedicated security employees, so IT won’t catch up with the glut of vulns being found right away.

Look at the recent-ish Apache Struts vulns; knowing what I know from testing some pretty huge companies, I bet blackhats gnawed on those forever. Keep up with the vulns being found, and you will find the same.

Companies are getting cagier; they may have IT run a vuln scan every once in a great while. That will catch the biggies: serious SQL injection (though blind cases may stick around); I haven’t seen RFI/LFI in a while…

If I had to answer with a gun to my head, I’d say sneakier stuff, like path/directory traversal. If the company utilizes its own in-house software for a web appliance, local/remote command execution or injection.

This sounds terrifying, but it is true: I’ve seen companies shrug off XSS or CSRF, saying something like “it is not pertinent to our usecase”, because I couldn’t directly execute it for a shell or ingress.

I think this is why understanding concepts of computing/networking trumps knowing tools 10,000x: you will see or feel something looks wrong, and you will figure it out from there.

And I don’t deserve the BinaryNinja license; maybe someday when I have more time to be a better community member.


Awesome read! Really nice… thank you for posting.

Techno_Forg -

Thank you. You are welcome.

I don’t get to give back here as much as I’d like, so when I can, I am glad to read members here are finding it of value.

-maderas
