SANS


Install & Configure Filebeat on Raspberry Pi ARM64 to Parse DShield Sensor Logs, (Sun, Jul 23rd)
Follow the step-by-step instructions provided [1] to install the DShield Sensor using Raspberry Pi Imager with Raspberry Pi OS Lite (64-bit). The following are the scripts used to parse the data published in this diary [4]. All the scripts that are part of this diary are listed, along with my other Elasticsearch projects, at [10].
Some of the recent changes implemented in the DShield Sensor no longer save the web data into the SQLite database. The following steps restore saving the weblogs into the SQLite database; they are also listed in the sqlite.sh script used to parse and dump the weblogs:
$ sudo cp -r ~/dshield/srv/www /srv
Add the following webpy service to the DShield sensor (using vi or nano) and save the file:
$ sudo vi /lib/systemd/system/webpy.service
[Unit]
Description=DShield Web Honeypot
After=systemd-networkd-wait-online.service
Wants=systemd-networkd-wait-online.service
[Service]
Type=idle
WorkingDirectory=/srv/www/bin
User=cowrie
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=webpy
ExecStart=/usr/bin/python3 /srv/www/bin/web.py
[Install]
WantedBy=multi-user.target
Save the webpy.service file and complete the setup with the following commands:
$ sudo ln -s /lib/systemd/system/webpy.service /etc/systemd/system/multi-user.target.wants/webpy.service
$ sudo chown -R cowrie:root /srv/www/DB
$ sudo systemctl enable webpy.service
$ sudo systemctl start webpy.service
$ sudo systemctl status webpy.service
Setup DShield Sensor Filebeat
After completing the installation of the SQLite database, add the following ARM64 Filebeat package to the Pi to send the logs to Elasticsearch.
Install the ARM64 Filebeat package [3] using the following commands:
$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
$ sudo apt-get install apt-transport-https
$ echo "deb https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list
$ echo "deb https://artifacts.elastic.co/packages/oss-8.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-8.x.list
$ sudo apt-get update && sudo apt-get install filebeat
Download the updated filebeat.yml file that will forward the logs to Elasticsearch:
$ sudo curl https://handlers.sans.edu/gbruneau/elk//DShield/filebeat.yml -o /etc/filebeat/filebeat.yml
Edit the filebeat.yml file and change the IP address to that of your Logstash parser (192.168.25.23 in this example):
$ sudo vi /etc/filebeat/filebeat.yml
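The section to look for is the Logstash output; with the sample address above it would look roughly like this (a sketch only; 5044 is just the conventional Beats port, so keep whatever port the downloaded file already uses):
output.logstash:
  hosts: ["192.168.25.23:5044"]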
Start Filebeat
$ sudo systemctl enable filebeat
$ sudo systemctl start filebeat
$ sudo systemctl status filebeat
Setup Logstash Collection & Parsing
Install Logstash and configure it with the following four scripts in /etc/logstash/conf.d:
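If Logstash itself is not yet installed on the collection server, it can be installed from the same Elastic apt repository added earlier, for example:
$ sudo apt-get update && sudo apt-get install logstash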
$ sudo curl https://handlers.sans.edu/gbruneau/elk/DShield/logstash-200-filter-cowrie.conf -o /etc/logstash/conf.d/logstash-200-filter-cowrie.conf
$ sudo curl https://handlers.sans.edu/gbruneau/elk/DShield/logstash-202-filter-cowrie-sqlite.conf -o /etc/logstash/conf.d/logstash-202-filter-cowrie-sqlite.conf
$ sudo curl https://handlers.sans.edu/gbruneau/elk/DShield/logstash-300-filter-iptables.conf -o /etc/logstash/conf.d/logstash-300-filter-iptables.conf
$ sudo curl https://handlers.sans.edu/gbruneau/elk/DShield/logstash-900-output-elastic.conf -o /etc/logstash/conf.d/logstash-900-output-elastic.conf
These four files are used to merge the logs into cowrie.* in the Elasticsearch server. Edit the three filter configuration files (logstash-200-filter-cowrie.conf, logstash-202-filter-cowrie-sqlite.conf and logstash-300-filter-iptables.conf) and change the DNS IPs to match your own network. Look for: nameserver => [ "192.168.25.2", "192.168.25.3" ]
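That setting lives inside Logstash's dns filter; the stanza will look roughly like this (the field being resolved is only illustrative and varies per file, so keep whatever the downloaded configuration already references):
filter {
  dns {
    reverse => [ "source.address" ]    # field name shown for illustration only
    action => "replace"
    nameserver => [ "192.168.25.2", "192.168.25.3" ]
  }
}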
Edit logstash-900-output-elastic.conf and change the domain (an IP also works) and the SSL certificate path (if you are using one) to point to your Elasticsearch server location (remote.ca in this example):
$ sudo vi /etc/logstash/conf.d/logstash-900-output-elastic.conf
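The block to change will look something along these lines (a sketch only; the exact option names in the downloaded file may differ, and the certificate line only applies if you use TLS):
output {
  elasticsearch {
    hosts => ["https://remote.ca:9200"]
    cacert => "/etc/logstash/elasticsearch-ca.pem"   # path to your cluster's CA certificate
  }
}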
Download and apply the following Elasticsearch mapping templates using the Console in Dev Tools to install them (copy/paste/apply):
https://handlers.sans.edu/gbruneau/elk//DShield/cowrie.txt
https://handlers.sans.edu/gbruneau/elk//DShield/cowrie-dshield.txt
https://handlers.sans.edu/gbruneau/elk//DShield/cowrie-sqlite.txt
If you don't have any replicas (single server), make sure you change the 1 to a 0 before applying the policy. Repeat this for all three policies.
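If the replica count is expressed as the index-level setting, a single-node cluster would end up with something like this inside each template's settings block (the surrounding JSON will vary per template):
"settings": {
  "number_of_replicas": 0
}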
Import each policy
After the policies and templates are imported, it is time to start Logstash. If successful, Logstash will create the indices under Index Management.
$ sudo systemctl enable logstash
$ sudo systemctl start logstash
$ sudo systemctl status logstash
The last part is to download and import dshield_sensor_8.71.ndjson [9] into Kibana under Stack Management → Saved Objects to make the dashboard available for viewing the data under Dashboard → [Logs DShield Sensor] Overview.
This step gets an hourly copy of the weblogs for Filebeat to add to Elasticsearch. While on the DShield sensor, in your home user directory, download and install the Bash script as follows:
$ mkdir scripts
$ cd scripts
$ mkdir sqlite
$ wget https://handlers.sans.edu/gbruneau/elk/DShield/sqlite.sh
$ chmod 755 sqlite.sh
# Add the line below to the crontab so the script runs hourly and saves the logs in ~/sqlite, then save the change
$ crontab -e
# Dump the cowrie web logs every hour
0 * * * * /home/guy/scripts/sqlite.sh > /dev/null 2>&1
The output will look like this and is parsed by the logstash parser:
It may take a little while before the weblogs are logged into the SQLite database. However, the following commands can be used to dump to a file the weblogs that have been captured since the service was started:
sqlite3 /srv/www/DB/webserver.sqlite '.mode insert' 'SELECT strftime("%d-%m-%Y %H:%M", "date", "unixepoch"), address,cmd, path, useragent from REQUESTS' | sed "s/..n',char(10)),//g" | awk '{gsub(/\\n/," ")}1' >> ~/request.sql
sqlite3 /srv/www/DB/webserver.sqlite '.mode insert' 'SELECT strftime("%d-%m-%Y %H:%M", "date", "unixepoch"), address, cmd, headers, path from POSTLOGS' | sed "s/..n',char(10)),//g" | awk '{gsub(/\\n/," ")}1' >> ~/postlogs.sql
[1] https://isc.sans.edu/tools/honeypot/
[2] https://www.elastic.co/downloads/beats/filebeat
[3] https://www.elastic.co/guide/en/beats/filebeat/8.8/setup-repositories.html#_apt
[4] https://isc.sans.edu/diary/DShield+Honeypot+Activity+for+May+2023/29932
[5] https://isc.sans.edu/diary/DShield+Sensor+JSON+Log+to+Elasticsearch/29458
[6] https://isc.sans.edu/diary/DShield+Sensor+JSON+Log+Analysis/29412
[7] https://isc.sans.edu/diary/DShield+Honeypot+Maintenance+and+Data+Retention/30024/
[8] https://github.com/jslagrew/cowrieprocessor/blob/main/submit_vtfiles.py
[9] https://handlers.sans.edu/gbruneau/elk/DShield/dshield_sensor_8.71.ndjson
[10] https://handlers.sans.edu/gbruneau/elastic.htm
-----------
Guy Bruneau IPSS Inc.
My Handler Page
Twitter: GuyBruneau
gbruneau at isc dot sans dot edu
YARA Error Codes, (Sat, Jul 22nd)
I recently had to help out a friend with a YARA error. I've never seen this before, but the YARA error was just a number, not an error description.
An error description would be, for example, something like "syntax error".
But here it was just a number: error number 4.
I looked at Windows Error codes: error 4 is ERROR_TOO_MANY_OPEN_FILES
That didn't make sense in this situation here.
So I started to look at YARA's source code on github, and found this file: error.h.
Error 4 is: ERROR_COULD_NOT_MAP_FILE
Thus, if you have a numeric YARA error, look it up in this file to know what it means: error.h
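If you'd rather not browse GitHub, a quick one-liner (assuming the header still lives at libyara/include/yara/error.h in the VirusTotal/yara repository) dumps the whole list:
curl -s https://raw.githubusercontent.com/VirusTotal/yara/master/libyara/include/yara/error.h | grep "#define ERROR_"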
Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com
ISC Stormcast For Friday, July 21st, 2023 https://isc.sans.edu/podcastdetail/8582, (Fri, Jul 21st)
Shodan's API For The (Recon) Win!, (Fri, Jul 21st)
Ever been on a call with a client, and had that "I need a full set of nmap results for that host in 5 seconds" moment? Like when you're trying to scope out the size of a project (maybe a pentest project) and if you *just* had the list of open ports you'd have an answer other than "I'll call you back", because nmap will take 10 minutes?
Well, Shodan has you covered, but even that takes a login. Shodan has you even better covered with their API! First, get your API key; you'll find it on your account page.
You'll find the API documentation here: https://developer.shodan.io/api
But for "recon on the fly", you'll just need a few API calls. You can do all of these using curl, so they're easy to script.
This one will give you info on a target host:
curl -s -k "https://api.shodan.io/shodan/host/<host_ip>?key=%shodan-api-key%"
Like most JSON-based APIs, if you want to read the returned data with your eyes (instead of code), running it through jq really helps (stay tuned, more on this on Monday).
Looking at 45.60.103.34 (one of the IPs behind isc.sans.edu), we get a BOATLOAD of information:
curl -s -k "https://api.shodan.io/shodan/host/45.60.103.34?key=%shodan-api-key%" | jq
{
"region_code": "ON",
"tags": [
"cdn"
],
"ip": 758933282,
"area_code": null,
"domains": [
"cio.org",
"ranges.io",
"cyberaces.org",
"sans.co",
"imperva.com",
"cyberfoundations.org",
"securingthehuman.org",
"sans.org",
"giac.net",
"sans.edu",
"giac.org",
"cybercenters.org"
],
"hostnames": [
"cio.org",
"ranges.io",
"cyberaces.org",
"sans.co",
.. and so on
If you run this through "wc -l", you'll find that there are 12353 lines in the results for this host. That's a lot to plow through! It includes open ports, services on them, certificates, CVEs that might be in play, everything you'd get in the Shodan web interface. I tend to run this command through "less", then search for what I need.
If you just want port information for an IP, you can use the same API and grep for it, something like:
curl -s -k "https://api.shodan.io/shodan/host/<host_ip>?key=%shodan-api-key%" | grep \"port\":
Running that against the same host gives us:
"port": 25,
"port": 53,
"port": 80,
"port": 81,
"port": 82,
"port": 83,
"port": 84,
"port": 88,
"port": 389,
"port": 443,
"port": 444,
"port": 465,
"port": 554,
"port": 587,
"port": 631,
"port": 636,
"port": 1024,
"port": 1177,
"port": 1234,
"port": 1337,
"port": 1400,
"port": 1433,
.... for a total of 130 open ports
No surprise there, we are a research site after all ...
How about for the entire DNS zone "sans.org"?
curl -s -k "https://api.shodan.io/dns/domain/%1?key=%shodan-api-key" | jq | less
As with all Shodan data, all the data you get back is historic - that's how you get it so quick. They scan the internet and when you check a host via the API or website, you are querying their database of saved values, not the host itself. This means that if it's a faster-moving host, your data might be off a bit here or there, it's from yesterday or maybe last week. If you look at any particular record, you'll find a timestamp on it so you can see how current it is.
So this type of information is good for pointing you in the right direction to narrow down a real port scan, or for getting ballpark values if you are scoping out a project and similar work. Or if you need to query "find me all of port X on the internet", this is a great way to get that job done in a few seconds.
Let's look for all the hosts that run SSH on port 22:
curl -s -k "https://api.shodan.io/shodan/host/search?key=%shodan-api-key%&query=ssh&port:22" | grep \"ip\": | wc -l
97
Yup, this API also returns larger datasets in chunks, mainly so that you can reasonably digest it. You can "page" through the data by adding a page number:
curl -s -k "https://api.shodan.io/shodan/host/search?key=%shodan-api-key%&query=ssh&port:22&page=2"
(and so on).
This isn't too practical by hand, since the cursor (the pointer that keeps track of where you are in the list) will time out after a short period of inactivity, but it works great in a while loop.
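As a sketch (assuming your key is exported as SHODAN_API_KEY and jq is installed; pages past the first typically consume query credits), that while loop could look like this:
page=1
while true; do
  results=$(curl -s -k "https://api.shodan.io/shodan/host/search?key=${SHODAN_API_KEY}&query=ssh&page=${page}")
  count=$(echo "$results" | jq '.matches | length' 2>/dev/null)
  # stop once a page comes back empty or the query stops returning valid JSON
  if [ -z "$count" ] || [ "$count" -eq 0 ]; then break; fi
  echo "$results" | jq -r '.matches[].ip_str'
  page=$((page + 1))
done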
More typically, you'll want to narrow down your searches to a reasonable result set. Or, if you want to search the entire internet and just get counts, there's an API just for that. Let's look for SSH on port 22:
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=ssh+port:22" | jq
{
"matches": [],
"total": 19033015
}
How about SSH that's on ports OTHER than 22?
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=ssh+-port:22" | jq
{
"matches": [],
"total": 22832793
}
Look for telnet on port 23 and on "not port 23", that'll keep you up at night!
Or, say you were digging into open RDP ports. If you look at our page for that port (https://isc.sans.edu/data/port.html?port=3389), you'll see a down-tick in recent scanning activity for that port. I wonder how many internet-facing hosts have RDP open on 3389?
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=port:3389" | jq
{
"matches": [],
"total": 3333166
}
Yikes!!
How about RDP with screenshots?
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=windows+port:3389+has_screenshot:true" | jq
{
"matches": [],
"total": 635901
}
Webcams with screenshots? Not as many as I thought, though the count of live webcams with unauthenticated or default-creds video is likely higher:
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=webcam+has_screenshot:true" | jq
{
"matches": [],
"total": 235
}
Looking for a specific vulnerability? Let's hunt for CVE-2022-43497, one of the FortiOS vulns that can be detected with a non-intrusive scan:
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=vuln:cve-2022-43497
{"matches": [], "total": 73208}
(Note: you can't query for vulns with a base Shodan subscription; you'll need the small business subscription or better for this.)
You can also combine any of the above with "default password" as a search term. And yes, you'll find stuff here too - just looking for a total of non-specific devices with default creds:
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=default+password" | jq
{
"matches": [],
"total": 30956
}
I'm sure the real count is higher; Shodan is only comparing against its own list of default creds.
You can add to your query using more filters (https://beta.shodan.io/search/filters) and narrow things down further by adding "facets" like country, city or ASN to your query (https://beta.shodan.io/search/facet).
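For example, to break that RDP count down by country, something like this should do the trick (the available facet names are listed on that second page):
curl -s -k "https://api.shodan.io/shodan/host/count?key=%shodan-api-key%&query=port:3389&facets=country:10" | jq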
Long story short, this is a way cool method to get a ton of information in just a few seconds, and these are some handy scripts to keep ready-to-run in your search path!
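For instance, a minimal wrapper (just a sketch, assuming the key is exported as SHODAN_API_KEY and jq is installed) might look like:
#!/bin/bash
# shodan-host.sh <ip> [jq filter] - quick Shodan host lookup
IP="$1"
FILTER="${2:-.}"   # default: print everything
curl -s -k "https://api.shodan.io/shodan/host/${IP}?key=${SHODAN_API_KEY}" | jq "$FILTER"
Called as ./shodan-host.sh 45.60.103.34 '.ports' it prints just the list of open ports.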
Example scripts, along with the other recon scripts I've posted recently can all be found in my github: https://github.com/robvandenbrink/recon_scripts
If you've got a cool shodan search that you've used, please share in our comment section!
===============
Rob VandenBrink
rob@coherentsecurity.com
Deobfuscation of Malware Delivered Through a .bat File, (Thu, Jul 20th)
I found a phishing email that delivered a RAR archive (password protected). Inside the archive, there was a simple .bat file (SHA256: 57ebd5a707eb69dd719d461e1fbd14f98a42c6c3dcb8505e4669c55762810e70) with the following name: SRI DISTRITAL - DPTO DE COBRO -SRI Informa-Deuda pendiente.bat. Its current VT score is only 1/59![1]
Let’s have a look at this file! After the classic “@echo off”, there is a very long line that looks like a payload; it starts with “::”, a comment marker in .bat files (a common alternative to the REM command):
The payload looks encrypted and takes up most of the file size. At the end of the script, we find some code that seems obfuscated, but we can immediately see a pattern (the human eye will always be more powerful than a computer):
The deobfuscated script reveals a piece of Powershell that uses the same technique:
Once beautified, we have this:
This confirms our idea! The script will read and decompress the payload from the original file (the line starting with “::”). The code reveals that the payload is split into two parts separated by a “:”.
$bxDfq=[System.Linq.Enumerable]::$xRIq([System.IO.File]::$cBAX([System.IO.Path]::$UAbP([System.Diagnostics.Process]::$TkRf().$rvdd.FileName, $null)), 1); $dfqcx=$bxDfq.Substring(2).$oVot(':'); $xlGba=aBhVu (PaVHU ([Convert]::$wYqi($dfqcx[0]))); $UiOEh=aBhVu (PaVHU ([Convert]::$wYqi($dfqcx[1])));
Let’s decode the two payloads with a simple Cyberchef recipe:
The two decrypted payloads are:
- Payload1: ce8994715e43e82ec8eec439418ceef0fff238c661f873b069de402360bb671d
- Payload2: af276f76e20bfcf9250335fe6bd895faf9c2b106a4edd23ea85594a7bd182635
Both are unknown on VT at this time.
The first payload launches a PowerShell script that implements persistence via multiple techniques, such as a scheduled task:
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" Register-ScheduledTask -TaskName 'OneDrive uXeplsWzSa' -Trigger (New-ScheduledTaskTrigger -AtLogon) -Action (New-ScheduledTaskAction -Execute 'C:\Users\Admin\AppData\Roaming\uXeplsWzSa.vbs') -Settings (New-ScheduledTaskSettingsSet -AllowStartIfOnBatteries -Hidden -ExecutionTimeLimit 0) -RunLevel Highest -Force
The uXeplsWzSa.vbs file contains:
CreateObject("Shell.Application").ShellExecute """C:\Users\REM\AppData\Roaming\uXeplsWzSa.cmd""", "", "", "open", 2
Unfortunately, the process crashes. The cmd file is the original .bat script. To increase the chances of making it run, I executed the script in a sandbox, and the final malware executed was 'wkx5nrg2.isx.exe' (SHA256: 42BA54142CD9E5DE5C6370F26DB8AEE6870FF8D0E4A86546E855CDF6828621AD). This one is also unknown on VT but belongs to the Remcos [2] malware family. Here is the extracted config:
{ "c2": [ "microsoftteams[.]con-ip[.]com:2450" ], "attr": { "mutex": "Rmcau1mstub-R03XGF", "copy_file": "remcos.exe", "hide_file": false, "copy_folder": "Remcos", "delete_file": false, "keylog_file": "logslmilo.dat", "keylog_flag": false, "audio_folder": "MicRecords", "install_flag": false, "keylog_crypt": false, "mouse_option": false, "connect_delay": "0", "keylog_folder": "logslilo", "startup_value": "\u0001", "screenshot_flag": false, "screenshot_path": "%AppData%", "screenshot_time": "10", "connect_interval": "1", "hide_keylog_file": true, "screenshot_crypt": false, "audio_record_time": "5", "screenshot_folder": "Screenshots", "take_screenshot_time": "5", "take_screenshot_option": false }, "rule": "Remcos", "botnet": "STUB1", "family": "remcos" }[1] https://www.virustotal.com/gui/file/57ebd5a707eb69dd719d461e1fbd14f98a42c6c3dcb8505e4669c55762810e70
[2] https://malpedia.caad.fkie.fraunhofer.de/details/win.remcos
Xavier Mertens (@xme)
Xameco
Senior ISC Handler - Freelance Cyber Security Consultant
PGP Key
ISC Stormcast For Thursday, July 20th, 2023 https://isc.sans.edu/podcastdetail/8580, (Thu, Jul 20th)
Citrix ADC Vulnerability CVE-2023-3519, 3466 and 3467 - Patch Now!, (Wed, Jul 19th)
Citrix released details on a new vulnerability on their ADC (Application Delivery Controller) yesterday (18 July 2023), CVE-2023-3519. This is an unauthenticated RCE (remote code execution), which means an attacker can run arbitrary code on your ADC without authentication.
This affects ADC hosts configured in any of the "gateway" roles (VPN virtual server, ICA Proxy, CVPN, RDP Proxy), which commonly face the internet, or as an authentication virtual server (AAA server), which is usually visible only from internal or management subnets.
This issue is especially urgent because malicious activity targeting this is already being seen in the wild, this definitely makes this a "patch now" situation (or as soon as you can schedule it). If your ADC faces the internet and you wait until the weekend, chances are someone else will own your ADC by then!
This fix also resolves a reflected XSS (cross-site scripting) issue, CVE-2023-3466, and a privilege escalation issue, CVE-2023-3467.
Full details can be found here: https://support.citrix.com/article/CTX561482/citrix-adc-and-citrix-gateway-security-bulletin-for-cve20233519-cve20233466-cve20233467
===============
Rob VandenBrink
rob@coherentsecurity.com
HAM Radio + Enigma Machine Challenge, (Wed, Jul 19th)
For those of you with a HAM radio (receiver) setup and an interest in crypto, the MRHS (Maritime Radio Historical Society) and the Cipher History Museum have an Enigma challenge this Saturday (July 22, 2023)
They'll be sending a coded message in 5 letter groups, which you can capture and then decode with the Enigma you have collecting dust on your shelf. Or if you don't own the actual gear, you can use an Enigma emulator on your phone or an online simulator - those of course will do the job very nicely as well.
If you don't have a full HAM radio setup, you can do receive-only very nicely with an RTL SDR (software defined radio) if you have the right geography + antenna combination, so the barrier to entry on this is very low, as long as you are close enough to "hear" the signal.
Full details are here: https://www.radiomarine.org/mrhs-events
===============
Rob VandenBrink
rob@coherentsecurity.com
ISC Stormcast For Wednesday, July 19th, 2023 https://isc.sans.edu/podcastdetail/8578, (Wed, Jul 19th)
Exploit Attempts for "Stagil navigation for Jira Menus & Themes" CVE-2023-26255 and CVE-2023-26256, (Tue, Jul 18th)
Today, I noticed the following URL on our "first seen URLs" page:
/plugins/servlet/snjFooterNavigationConfig?fileName=../../../../etc/passwd&fileMime=$textMime
We had one report for this URL on March 28th, but nothing since then. Yesterday, the request showed up again and reached our reporting threshold.
All of yesterday's requests appear to come from a single Chinese consumer broadband IP address: %%ip:124.127.17.209%%.
The vulnerability was disclosed in March as one of two vulnerabilities in "Stagil navigation for Jira – Menus & Themes" [1]. The tool is a plugin for Jira to customize the look and feel of Jira. It is distributed via the Atlassian Marketplace.
CVE-2023-26255 and CVE-2023-26256 were both made public at the same time and describe similar directory traversal vulnerabilities. These vulnerabilities allow attackers to retrieve arbitrary files from the server. As you can see in the exploit above, the attacker attempts to download the "/etc/passwd" file. Typically, "/etc/passwd" is not that interesting. But it is often used to verify a vulnerability. The attacker may later retrieve other files that are more interesting.
Jira is always a big target. It organizes software development and can be an entry point to a supply chain attack.
After seeing the attacks for one of the vulnerabilities, I went back to look for attempts to exploit the second directory traversal vulnerability, and indeed, it is also being exploited. Two days earlier, we saw a small increase in requests from %%112.118.71.111%%, an IP address associated with an ISP in Hong Kong.
The request used is similar in that it attempts to retrieve "/etc/passwd":
/plugins/servlet/snjCustomDesignConfig?fileName=../../../../etc/passwd&fileMime=$textMime
Looking further, I was able to find attempts to retrieve "dbconfig.xml" using the vulnerability:
/plugins/servlet/snjCustomDesignConfig?fileMime=$textMime&fileName=../dbconfig.xml
Jira uses dbconfig.xml to store database passwords [2].
As usual, be careful installing plugins for Jira. Plugins have been a significant source of vulnerabilities in the past. Jira should also not be exposed to the internet directly but needs to be protected by a VPN or other measures. It is too important and too juicy of a target to expose. Even Jira itself, before any plugins are installed, has had a number of vulnerabilities.
It is not clear if the two scans for either vulnerability are related. Having two larger scans for a vulnerability like this within a short time span is suspicious. The scans use different user agents, but this doesn't mean that the scans were launched by different groups/individuals. Neither IP address is associated with a known threat group, as far as I know.
Requests for URLs that contain "/plugins/servlet/snj"
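If you want to check your own web server logs for these probes, a quick grep along these lines (adjust the log path to your environment) does the job:
grep -i "/plugins/servlet/snj" /var/log/apache2/access.log*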
[1] https://github.com/1nters3ct/CVEs/blob/main/CVE-2023-26256.md
[2] https://confluence.atlassian.com/jirakb/startup-check-creating-and-editing-the-dbconfig-xml-file-881656915.html
---
Johannes B. Ullrich, Ph.D. , Dean of Research, SANS.edu
Twitter
ISC Stormcast For Tuesday, July 18th, 2023 https://isc.sans.edu/podcastdetail/8576, (Tue, Jul 18th)
ISC Stormcast For Monday, July 17th, 2023 https://isc.sans.edu/podcastdetail/8574, (Mon, Jul 17th)
Brute-Force ZIP Password Cracking with zipdump.py: FP Fix, (Sun, Jul 16th)
In diary entry "Brute-Force ZIP Password Cracking with zipdump.py" I wrote the following:
zipdump can also generate false positives. ZIP files that can be opened with a guessed password through the zipfile/pyzipper API may still throw an error when the full content is actually read:
This is something I will fix in an upcoming version.
I fixed this in version 0.0.27. Whenever a password is found, zipdump.py will decode the full content of the file to check for CRC32 errors.
Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com
Wireshark 4.0.7 Released, (Sat, Jul 15th)
Wireshark version 4.0.7 was released with 2 vulnerabilities and 22 bugs fixed.
Didier Stevens
Senior handler
Microsoft MVP
blog.DidierStevens.com
Infocon: green
ISC Stormcast For Friday, July 14th, 2023 https://isc.sans.edu/podcastdetail/8572, (Fri, Jul 14th)
ISC Stormcast For Thursday, July 13th, 2023 https://isc.sans.edu/podcastdetail/8570, (Thu, Jul 13th)
DShield Honeypot Maintenance and Data Retention, (Thu, Jul 13th)
Some honeypot changes can maintain more local data for analysis, ease the process of analysis and collect new data. This diary will outline some tasks I perform to get more out of my honeypots:
- add additional logging options to dshield.ini
- make copies of cowrie JSON logs
- make copies of web honeypot JSON logs
- process cowrie logs with cowrieprocessor [1]
- upload new files to virustotal
- capture PCAP data using tcpdump
- backup honeypot data
DShield Data Readily Available
The nice thing is that after setting up your DShield honeypot [2], data is available very quickly in your user portal. There is easy access to:
- firewall logs [3]
- web logs [4]
- ssh/telnet logs [5] [6]
Much of this data can also be downloaded in raw format, but it may not contain as much data as the other logs locally available on the honeypot. Sometimes having the local honeypot data can help when reviewing data over longer periods of time.
Figure 1: Example of honeypot firewall logs in user portal
Figure 2: Example of honeypot web logs in user portal
Figure 3: Example of honeypot ssh/telnet graphs in user portal
Figure 4: Example of honeypot raw ssh/telnet data in user portal
Add additional logging options to dshield.ini
Default logging location: /var/log/dshield.log (current day)
One small change in /etc/dshield.ini will generate a separate log file that will basically have unlimited data retention until the file is moved or cleared out. The file used by default for firewall log data in /var/log/dshield.log will only show data from the current day. Simply add the line:
localcopy=<path to new log file>
Figure 5: Example modification of /etc/dshield.ini file to specify an additional logging location
In my example, I wanted to save my local copy to /logs/localdshield.log.
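With the example path above, the added line in /etc/dshield.ini simply reads:
localcopy=/logs/localdshield.log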
Note: Previous versions of the DShield honeypot used to store web honeypot data in this file as well. This has been moved to a new JSON file.
Make copies of cowrie JSON logs
Default logging location: /srv/cowrie/var/log/cowrie/ (last 7 days, 1 file per day)
The cowrie files can be very helpful since they not only store usernames and passwords attempted, but also the commands attempted by bots or users that receive console access via cowrie. In addition, information about file uploads and downloads also resides within this data. Much more information is in the raw JSON files than is in the ISC user portal, if there is an interest in reviewing it.
The cowrie logs are rotated out and only the last 7 days are stored. Similar to the firewall data, I also want to store my archive of cowrie JSON files in my /logs directory by setting up an entry in my crontab [7].
# open up editor to modify crontab for a regularly scheduled task
# -e option is for "edit"
crontab -e
# m h dom mon dow command
# option will copy all *.json files to /logs/ directory daily at 12AM
0 0 * * * cp /srv/cowrie/var/log/cowrie/cowrie.json* /logs/
One benefit of the cowrie log files is that the file names are based on the date of activity. This means there's no need for creative file renaming or rotations to get around accidentally overwriting data.
Figure 6: Cowrie logs files stored on the DShield honeypot
Make copies of web honeypot JSON logs
Default logging location: /srv/db/webhoneypot.json (currently no retention limit)
There is another file in this location, isc-agent.sqlite, although the data within that SQLite database is cleared out very quickly once submitted. Similar to the cowrie logs, setting up a task in crontab can help retain some of this data.
# open up editor to modify crontab for a regularly scheduled task
# -e option is for "edit"
crontab -e
# m h dom mon dow command
# option will copy all data within the /srv/db/ folder to /logs/ directory daily at 12AM
0 0 * * * cp /srv/db/*.* /logs/
Some improvements could be made to this. Since all the data is in one large file, it could easily be overwritten if logging changes. Doing a file move and adding the current date to the file name would be easy enough; a new file would be created and each file would have the last 24 hours of data. There may be an issue if the file is being actively written to at the time, but that would likely be rare. Either way, improvements can be made, as sketched below.
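A rough sketch of that rotation (hypothetical, assuming the isc-agent recreates webhoneypot.json after the move; note that % must be escaped in crontab entries):
# rotate the web honeypot log daily instead of overwriting a single copy
0 0 * * * mv /srv/db/webhoneypot.json /logs/webhoneypot-$(date +\%Y\%m\%d).json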
Process cowrie logs with cowrieprocessor
Rather than manually reviewing the cowrie JSON files or using /srv/cowrie/bin/playlog to view tty files, this python script helps me do the following:
- summarize attacks
- enrich attacks with DShield API, Virustotal, URLhaus, SPUR.us
- outline commands attempted
- upload summary data to dropbox
Just like a lot of the other options, this process is automated using crontab and runs a few times a day. Options are available on GitHub [1].
Upload new files to virustotal
I set up regular uploads of file data to VirusTotal to help with data enrichment for cowrieprocessor. If a file isn't uploaded, it's hard to have any data about it. It also means that every now and again, a file is uploaded for the first time by one of my honeypots. It's also great to be able to share artifacts with the community.
An example python script is also available on GitHub [8].
Figure 7: Example upload of file to Virustotal [9]
Capture PCAP data using tcpdump
We've already seen quite a bit of data that can be used to analyze activity on a DShield honeypot. An additional artifact that's very easy to collect is network data directly on the honeypot. This can also be easily accomplished through crontab by scheduling tcpdump to run on boot. This is especially important since the honeypot will reboot daily and the tcpdump process will need to be restarted.
# crontab entry
# restart tcpdump daily at 12AM
0 00 * * * sudo /dumps/stop_tcpdump.sh; sudo /dumps/grab_tcpdump.sh
# start tcpdump 60 seconds after boot
@reboot sleep 60 && sudo /dumps/grab_tcpdump.sh

# /dumps/stop_tcpdump.sh file contents
#Stop tcpdump command
PID=$(/usr/bin/ps -ef | grep tcpdump | grep -v grep | grep -v ".sh" | awk '{print $2}')
/usr/bin/kill -9 $PID

# /dumps/grab_tcpdump.sh contents
TIMESTAMP=`date "+%Y-%m-%d %H:%M:%S"`
tcpdump -i eth0 -s 65535 port not 12222 and not host <ip used for remote access/transfers> -w "/dumps/$TIMESTAMP tcpdump.pcap"
A variety of things can be done with this network data and some of this I've explored in previous diaries [10].
Backup honeypot data
I often use backups of data for use on a separate analysis machine. These backups are also password protected since they will contain malware and may be quarantined by antivirus products. It's also good protection against accidentally opening one of those files on a machine that wasn't intended for malware analysis. The configuration data can also help with setting up a new honeypot. In addition, it is a great opportunity to save space on a honeypot. I personally like to back up my data before DShield honeypot updates or other changes.
# backup honeypot data, using "infected" for password protection
zip -r -e -P infected /backups/home.zip -r /home
zip -r -e -P infected /backups/logs.zip -r /logs
zip -r -e -P infected /backups/srv.zip -r /srv
zip -r -e -P infected /backups/dshield_logs.zip -r /var/log/dshield*
zip -r -e -P infected /backups/crontabs.zip -r /var/spool/cron/crontabs
zip -r -e -P infected /backups/dshield_etc.zip /etc/dshield*
zip -r -e -P infected /backups/dumps.zip -r /dumps

# clear out PCAP files older than 14 days to save room on honeypot
# 14 days is about 6 GB on a honeypot, about 425 MB/day
find /dumps/*.pcap -mtime +14 -exec rm {} \;
The process of setting this up is relatively quick, but can be improved. Some future improvements I'd like to make:
- rotate web honeypot logs to keep one day of logs per file (similar to cowrie logs)
- automate backups and send to dropbox or another location
- update virustotal upload script to use newer API and also submit additional data such as download/upload URL, download/upload IP, attacker IP (if different)
- update virustotal script to only submit when hash not seen before on virustotal
- forward all logs (cowrie, web honeypot, firewall) to a SIEM (ELK stack most likely)
- automate deployment of all of these changes to simplify new honeypot setup
Please reach out with your ideas!
[1] https://github.com/jslagrew/cowrieprocessor
[2] https://isc.sans.edu/honeypot.html
[3] https://isc.sans.edu/myreports
[4] https://isc.sans.edu/myweblogs
[5] https://isc.sans.edu/mysshreports/
[6] https://isc.sans.edu/myrawsshreports.html?viewdate=2023-07-11
[7] https://man7.org/linux/man-pages/man5/crontab.5.html
[8] https://github.com/jslagrew/cowrieprocessor/blob/main/submit_vtfiles.py
[9] https://www.virustotal.com/gui/file/44591e0939a0f4894f66a3fb5d4e28fe02c934295d65fe8b280bf8a96d6a9ef5/community
[10] https://isc.sans.edu/diary/Network+Data+Collector+Placement+Makes+a+Difference/29664
--
Jesse La Grew
Handler
Loader activity for Formbook "QM18", (Wed, Jul 12th)
Introduction
In recent weeks, I've run across loaders related to GuLoader or ModiLoader/DBatLoader. I wrote about one in my previous diary last month. That loader for Remcos RAT was identified by @Gi7w0rm as GuLoader. Today I ran across another loader based on a tweet from @V3n0mStrike about recent Formbook activity.
Today's diary briefly reviews this activity based on an infection run on Tuesday 2023-07-11.
Shown above: Flow chart for this loader-based Formbook infection.
Email Distribution
After viewing the tweet from @V3n0mStrike, I searched through VirusTotal and found at least two emails with the associated .docx file attachment.
Shown above: First of two emails with the associated attachment.
Shown above: Second of two emails with the associated attachment.
Indicators of Compromise
The following are indicators of compromise (IOCs) after using the .docx attachment to kick off an infection run.
SHA256 hash: 7f4fcb19ee3426d085eb36f0f27d8fd3d0242d0aa057daa9f4d8a7cd68576045
File size: 11,197 bytes
File name: SKSR01_100723.docx
File type: Microsoft Word 2007+
File description: Word document with exploit for CVE-2017-0199
SHA256 hash: d5f04bf7472599a893de61a21acb464ee11a9b7fbb2a20e348309857ee321691
File size: 27,527 bytes
URL for this file: hxxps://e[.]vg/LyLQRAip
Redirected to: hxxp://23.94.236[.]203/wq/wqzwqzwqzwqzwqzwqzwqzwqzwqz%23%23%23%23%23%23%23%23%23%23%23%23%23%23%23%23%23%23wqzwqzwqszwqa.doc
File type: ISO-8859 text, with very long lines (6432), with CR, LF line terminators (RTF)
File description: Retrieved by above .docx file, this is an RTF to exploit CVE-2017-011882
SHA256 hash: e09040ce96631cf7c1f7be6de48f961540e6fb8db97859c9fa7ae35f7fa9d930
File size: 3,850 bytes
File location: hxxp://23.94.236[.]203/wq/IE_NET.hta
Saved file location: C:\Users\[username]\AppData\Local\Temp\IE_NETS.hta
File type: HTML document text, ASCII text, with very long lines (3682), with CRLF line terminators
File description: Retrieved by above RTF, this is an HTA to retrieve and run an EXE
SHA256 hash: 576ef869c72f3afe6f4f5101f27aeb0d479cae8e5d348eea4e43e8af8252dfd0
File size: 218,112 bytes
File location: hxxp://23.94.236[.]203/235/win.exe
Saved file location: C:\Users\[username]\AppData\Local\Temp\IBM_Centos.exe
File type: PE32+ executable (GUI) x86-64 Mono/.Net assembly, for MS Windows
File description: Loader EXE retrieved and run by the above HTA
SHA256 hash: 90615cb1ec6ca6c93dfe44f414c0d00db4e200c5011304df2c652182b4b593e3
File size: 716,404 bytes
File location: hxxps://kyliansuperm92139124[.]shop/customer/959
File type: HTML document text, ASCII text, with very long lines (64470), with CRLF line terminators
File description: example of an HTML file retrieved by the above loader EXE
Shown above: Traffic from the infection filtered in Wireshark.
Domains used for Formbook HTTP GET requests only:
www.6882b[.]com
www.bluhenhalfte[.]xyz
www.iweb-sa[.]com
www.kalndarapp14[.]com
www.latabledelepicier[.]com
www.poultry-symposium[.]com
www.printmyride[.]store
www.smartinnoventions[.]com
www.tarolstroy[.]store
www.terrenoscampestres[.]com
www.test-kobewaterworks[.]com
www.uximini[.]com
www.vidintros[.]shop
www.woman-86[.]com
www.wyyxscc5856[.]com
www.yahialocation[.]com
Domains used for Formbook HTTP GET and POST requests:
www.730fk[.]xyz
www.aamset-paris[.]com
www.alanyatourism[.]xyz
www.ambadisuites[.]com
www.atlasmarketing[.]life
www.autolifebelt[.]com
www.collingswoodfd[.]com
www.fn29in[.]xyz
www.hhfootball[.]com
www.kdu21[.]com
www.lazarus[.]team
www.london168wallet[.]monster
www.personifycoach[.]com
www.r1381[.]xyz
www.theartboxslidell[.]com
www.windmarkdijital[.]xyz
www.zsys[.]tech
Notes: I ran the infection on a Windows 7 host with Office 2007. The HTA file generated a wget request for the loader EXE, but that did not work, so I retrieved the loader using PowerShell's Invoke-WebRequest function. I saw no artifacts for persistence, and the infection stopped after I logged out. I also found no files temporarily saved to disk for data exfiltration like I've seen in previous Formbook infections.
Final Words
The two emails, associated malware, and a packet capture (pcap) of the infection traffic are available here.
For more examples of recent Formbook activity, see my 30 days of Formbook posts completed earlier this month.
--
Brad Duncan
brad [at] malware-traffic-analysis.net