IR Tales: The Quest for the Holy SIEM: Splunk + Sysmon + Osquery + Zeek

This blog post is the season finale in a series demonstrating how to install and set up common SIEM platforms. The ultimate goal of each blog post is to empower the reader to choose their own adventure by selecting the best SIEM for their goals and requirements. Each blog post in the series provides Docker-compose v2, Docker-compose for Swarm, Ansible, Vagrant, and manual instructions so the reader can set up each platform with the deployment method of their choosing. In addition to setting up Splunk, I will cover fundamental Splunk concepts such as the Common Information Model (CIM). Lastly, I will provide step-by-step instructions to install Sysmon + Splunk Universal Forwarder on Windows, Osquery + Splunk Universal Forwarder on Ubuntu, and Zeek + Filebeat to ship logs to Splunk.

Goals

  • Learn the fundamentals of Splunk
  • Set up Splunk with Docker
  • Set up Splunk with Ansible
  • Set up Splunk with Vagrant
  • Set up Splunk with manual instructions
  • Test Splunk with Python script
  • Install/Setup of Sysmon on Windows with the Splunk Universal Forwarder
  • Install/Setup of Osquery on Ubuntu with the Splunk Universal Forwarder
  • Install/Setup of Zeek on Ubuntu with Filebeat
  • Install Splunk Add-Ons for Splunk CIM compliance

Update log

Background

What is Splunk?

Splunk is a platform for ingesting, indexing, and searching machine-generated data such as logs from endpoints, network sensors, and applications. Data is explored with the Splunk Search Processing Language (SPL), which also powers dashboards, alerts, and reports, and this search-centric design is why Splunk is commonly deployed as a SIEM.

What is Osquery?

Osquery exposes an operating system as a high-performance relational database. This allows you to write SQL-based queries to explore operating system data. With Osquery, SQL tables represent abstract concepts such as running processes, loaded kernel modules, open network connections, browser plugins, hardware events, or file hashes.
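For example, a quick query in the interactive osqueryi shell (the processes table and its columns are part of Osquery’s standard schema):

```
-- List the five processes using the most resident memory
SELECT pid, name, resident_size
FROM processes
ORDER BY resident_size DESC
LIMIT 5;
```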

What is Sysmon?

System Monitor (Sysmon) is a Windows system service and device driver that, once installed on a system, remains resident across system reboots to monitor and log system activity to the Windows event log. It provides detailed information about process creations, network connections, and changes to file creation time. By collecting the events it generates using Windows Event Collection or SIEM agents and subsequently analyzing them, you can identify malicious or anomalous activity and understand how intruders and malware operate on your network.

What is Zeek?

Zeek is a passive, open-source network traffic analyzer. It is primarily a security monitor that inspects all traffic on a link in depth for signs of suspicious activity. More generally, however, Zeek (formerly known as Bro) supports a wide range of traffic analysis tasks even outside of the security domain, including performance measurements and helping with troubleshooting.

What is a Splunk universal forwarder?

The Splunk Universal Forwarder provides reliable, secure data collection from remote sources and forwards that data into Splunk software for indexing and consolidation. Universal forwarders can scale to tens of thousands of remote systems, collecting terabytes of data.

Splunk universal forwarder vs Beats logging clients

One of the biggest differences between the Splunk UF and Beats is that the Splunk server has the ability to control the Splunk UF. For example, with Splunk you could make a change on the server to instruct all Splunk UFs to start collecting SSH logs, and this instruction is pushed down to all the endpoints. With Beats, however, you would have to use a configuration management tool to push this update to all Beats agents and restart each agent.

The Splunk UF can also be configured to run a script on a machine. For example, you could have a PowerShell script that periodically checks whether Sysmon is installed and logging to Splunk; if not, the script can install Sysmon and configure it to start shipping Sysmon data to Splunk. This is very convenient because it means on-boarding new assets can be as easy as installing the Splunk UF. A rough sketch of such a scripted input is shown below.
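On the UF side, a scripted input is just a stanza in inputs.conf. The stanza below is a hypothetical sketch: check-sysmon.bat and the sysmon:healthcheck sourcetype are illustrative names, not part of this blog post’s setup.

```
# Hypothetical scripted input stanza in the UF's inputs.conf.
# check-sysmon.bat is an illustrative script shipped in an app's bin directory.
[script://.\bin\check-sysmon.bat]
interval = 3600                  # run once an hour
sourcetype = sysmon:healthcheck  # illustrative sourcetype
disabled = 0
```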

Lastly, Splunk has a large and diverse ecosystem of applications that are ready to install. For example, let’s say you are switching from Elastic to Splunk and your endpoints are running Sysmon. Just install the Splunk Add-on for Microsoft Sysmon and it will instruct the Splunk UFs how to collect Sysmon data. There is no need to configure the Splunk UFs if an app for the data source already exists. For more information on the differences between the two, see the following blog posts: Let’s Chat About Splunk and ELK, Elasticsearch Best Practice Architecture, and Splunk vs. Elastic Stack (ELK): Making Sense of Machine Log Files.

What and why Logstash?

Logstash is an application built in the modern era: it supports high-availability setups, treats configuration files like code, works very well with JSON, and provides TLS without the need for client certificates. There are multiple methods for high availability, which can be reviewed here. Configuration as code is very important because it provides simple programming concepts such as if statements, regex capabilities, and the ability to extract and transform data. Lastly, if you are converting your infrastructure from an Elastic Stack or Graylog cluster to Splunk, your endpoints were most likely already using Beats to ship logs. Also, I personally use Filebeat and Winlogbeat for all my blog posts, so I want all future blog posts using Beats clients to be compatible with this setup.
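As a small taste of that configuration-as-code style, the filter below uses a conditional and a regex match; the field names are illustrative, not taken from this post’s configs.

```
filter {
  # If the event came from Filebeat and the source file path mentions zeek,
  # tag it with a sourcetype field for Splunk
  if [agent][type] == "filebeat" and [log][file][path] =~ /zeek/ {
    mutate { add_field => { "sourcetype" => "bro:conn:json" } }
  }
}
```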

Splunk fundamentals

Splunk buckets

Credit: Cloudian

Understanding how Splunk stores logs is an important concept. As you can see from the illustration above, Splunk has the concept of HOT, WARM, COLD, and FROZEN buckets. These colloquial temperature terms typically correspond to how fast you can access the data in each bucket. Below is an explanation of how the buckets operate, as stated by Cloudian.

Warm Storage

Both hot and warm buckets are kept in warm storage. This storage is the only place that new data is written. Warm storage is your fastest storage and requires at least 800 input/output operations per second (IOPS) to function properly. If possible, this storage should use non-volatile memory express (NVMe) drives and SSDs to ensure performance. NVMe is designed specifically for SSDs and can provide significantly higher performance than other interfaces.

Cold Storage

Cold data storage is your second actual storage tier and is the lowest searchable tier. Cold storage is useful when you need storage to be immediately available but don’t need high performance, for example, to meet PCI compliance requirements. Cold data storage is more cost-effective than warm since you can use lower-quality hardware. Keep in mind, if you are using one large storage pool for all your tiers, cold storage is cold in name only: buckets still roll to cold, but your performance doesn’t change.

Frozen Storage

Your lowest storage tier is frozen storage. It is primarily used for compliance or legal reasons which demand you store data for longer periods of time. By default, Splunk deletes data rather than rolling it to frozen storage. However, you can override this behavior by specifying a frozen storage location. Frozen storage is not searchable, so you can store it at a significant size reduction, typically 15% of the original. This is because Splunk deletes the metadata associated with it.
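To make the bucket lifecycle concrete, here is a hypothetical indexes.conf stanza. The attribute names are standard Splunk settings, but the paths and retention values are purely illustrative.

```
# Hypothetical index definition — paths and retention values are illustrative
[zeek]
homePath   = $SPLUNK_DB/zeek/db         # hot + warm buckets (warm storage)
coldPath   = $SPLUNK_DB/zeek/colddb     # cold buckets
thawedPath = $SPLUNK_DB/zeek/thaweddb   # restored (thawed) frozen buckets
frozenTimePeriodInSecs = 31536000       # roll buckets to frozen after ~1 year
coldToFrozenDir = /mnt/archive/zeek     # omit this line and Splunk deletes instead
```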

What and why Splunk Common Information Model (CIM)?

The Splunk CIM is arguably one of the best features of Splunk. Splunk states: “The Splunk Common Information Model (CIM) is a shared semantic model focused on extracting value from data. The CIM is implemented as an add-on that contains a collection of data models, documentation, and tools that support the consistent, normalized treatment of data for maximum efficiency at search time.”

At first the CIM didn’t make sense to me because I needed to see it in action. Personally, the best way to show that is with Zeek/Bro logs and the Splunk CIM data model for network traffic. The Splunk network traffic CIM model defines the key name for a source IP address as src_ip, but Zeek defines this key name as id.orig_h. When you install the Splunk Add-on for Zeek aka Bro, it includes the CIM model to map the key names in Zeek logs to the key names defined by the Splunk CIM data model for network traffic. So what does this mean? You can perform the following Splunk search: index="zeek" AND sourcetype="bro:conn:json" AND src_ip="x.x.x.x". Splunk takes in this query and knows the source IP address defined in the query exists in the Zeek id.orig_h field. Additionally, notice in the screenshot below that the actual log entry is the original event generated by Zeek.

One of the most tedious aspects of being an incident responder is remembering the key names for each data source. It’s even worse when 9 out of 10 sensors use source_ip but one sensor uses src. In that case, if you perform the search source_ip=<attacker IP address> you would miss events and possibly malicious activity. The Splunk CIM uses the key name src_ip for all network logs regardless of the originally generated format.
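To illustrate with a hypothetical source IP of 10.0.0.5, the two searches below return the same Zeek connection events; the CIM mapping translates src_ip to id.orig_h behind the scenes:

```
Raw Zeek field name:  index="zeek" sourcetype="bro:conn:json" id.orig_h="10.0.0.5"
CIM field name:       index="zeek" sourcetype="bro:conn:json" src_ip="10.0.0.5"
```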

Network diagram

Generate OpenSSL private key and public cert

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. vim conf/tls/tls.conf and set:
    1. Set the location information under [dn]
      1. C – Set Country
      2. ST – Set state
      3. L – Set city
      4. O – Enter organization name
      5. emailAddress – Enter a valid e-mail for your org
    2. Replace example.com in all fields with your domain
    3. For alt names list all the valid DNS records for this cert
    4. Save and exit
  4. openssl req -x509 -new -nodes -keyout conf/tls/tls.key -out conf/tls/tls.crt -config conf/tls/tls.conf
    1. Generate TLS private key and public certificate
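For reference, the config edited in step 3 above looks roughly like the sketch below. Every value is a placeholder; this is not the repo’s exact conf/tls/tls.conf.

```
# Minimal OpenSSL request config sketch — all values are placeholders
[req]
default_bits       = 2048
prompt             = no
distinguished_name = dn
x509_extensions    = v3_req

[dn]
C            = US
ST           = Michigan
L            = Detroit
O            = ExampleOrg
CN           = splunk.example.com
emailAddress = admin@example.com

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = splunk.example.com
DNS.2 = logstash.example.com
```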

Install/Setup Splunk with Docker-compose v2.x

WARNING

The Docker-compose v2.x setup is for development use ONLY. The setup contains hard-coded credentials in configs and environment variables. For a more secure Docker deployment, please skip to the next section to use Docker Swarm, which implements Docker secrets, or Ansible.

WARNING

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. vim .env and set:
    1. SPLUNK_VERSION – OPTIONAL – Set the version of Splunk you want to use
    2. SIEM_PASSOWRD – Set the password for Splunk
    3. NGINX_VERSION – OPTIONAL – Set the version of NGINX you want to use
    4. Save and exit
  4. docker-compose -f docker-compose-splunk.yml build
  5. docker-compose -f docker-compose-splunk.yml up

Install/Setup Splunk with Docker-compose v3.x (Swarm)

WARNING

The Docker-compose v3.x setup is for development use ONLY. The purpose of this setup is to demonstrate how to use the Splunk default.yml config. The Splunk admin password cannot be changed; it must stay the same. For a more secure deployment please skip to the next section to use Ansible.

WARNING

Create secrets

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. SPLUNK_ADMIN_PASSWORD=$(openssl rand -base64 32 | tr -cd '[:alnum:]')
    1. Generate Splunk admin password
  4. echo $SPLUNK_ADMIN_PASSWORD
    1. Print password to screen and save password in a safe location
  5. SPLUNK_HEC_TOKEN=$(openssl rand -base64 32 | tr -cd '[:alnum:]')
    1. Generate Splunk HEC token
  6. echo $SPLUNK_HEC_TOKEN
    1. Print Splunk HEC token to screen and save token in a safe location
  7. docker run -it -e SPLUNK_PASSWORD=${SPLUNK_ADMIN_PASSWORD} -e SPLUNK_HEC_TOKEN=${SPLUNK_HEC_TOKEN} -e SPLUNK_HEC_SSL=false splunk/splunk:8.2 create-defaults > conf/docker/splunk/default.yml
    1. Generate Splunk default.yml config
  8. echo $SPLUNK_HEC_TOKEN | docker secret create splunk-hec-token -
    1. Create Docker secret with the HEC token
  9. cat conf/docker/splunk/default.yml | docker secret create splunk-default-conf -
    1. Create Docker secret with the contents of default.yml

Docker start stack

  1. docker stack deploy -c docker-compose-swarm-splunk.yml splunk
  2. docker service logs -f splunk_splunk

Install/Setup Splunk with Ansible

Init playbook

  1. vim hosts.ini and add the IP of the Splunk server under [splunk]
  2. vim group_vars/all.yml and set:
    1. base_domain – Set the domain where the server resides
    2. timezone – OPTIONAL – The default timezone is UTC+0
    3. siem_username – Ignore this setting
    4. siem_password – Set the Splunk admin password
  3. vim group_vars/splunk.yml and set:
    1. hostname – Set the desired hostname for the server
    2. splunk_version – Set the desired version of Splunk to use
    3. splunk_dl_url – Set to the URL to download Splunk
    4. beats_port – OPTIONAL – Set the port to ingest logs using BEATs clients
    5. elastic_version – OPTIONAL – Set the desired version of Logstash to use with Splunk – best to leave as default
      1. elastic_repo_version – Change the repo version to install Logstash
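For reference, a hypothetical group_vars/splunk.yml using the variables above might look like the sketch below; the values are illustrative, not the repo’s defaults.

```
# Illustrative values only — check the repo's group_vars/splunk.yml for real defaults
hostname: splunk
splunk_version: 8.1.2
splunk_dl_url: https://www.splunk.com/bin/splunk/DownloadActivityServlet?...
beats_port: 5044
elastic_version: 7.12.0
elastic_repo_version: 7.x
```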

Run playbook

  1. ansible-playbook -i hosts.ini deploy_splunk.yml -u <username> -K
    1. Enter the sudo password for the remote user when prompted

Install/Setup Splunk with Vagrant

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. VAGRANT_VAGRANTFILE=Vagrantfile-splunk vagrant up

Manual install/Setup of Splunk

Init Linux instance

  1. sudo su
  2. timedatectl set-timezone Etc/UTC
    1. Set the system timezone to UTC +0
  3. apt update -y && apt upgrade -y && reboot

Install Splunk

  1. wget 'https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.1.2&product=splunk&filename=splunk-8.1.2-545206cc9f70-linux-2.6-amd64.deb&wget=true' -O /tmp/splunk-8.1.2-linux-2.6-amd64.deb
    1. Download Splunk
  2. dpkg -i /tmp/splunk-8.1.2-linux-2.6-amd64.deb
    1. Install Splunk
  3. /opt/splunk/bin/splunk enable boot-start --accept-license --answer-yes --no-prompt --seed-passwd <Splunk admin password>
    1. Enable Splunk to start on boot, accept license agreement, enter Splunk admin password
  4. sed -i 's/# server.socket_host = .*/server.socket_host = localhost/g' /opt/splunk/etc/system/default/web.conf
    1. Instruct Splunk webGUI to listen on localhost
  5. SPLUNK_ZEEK_HEC_TOKEN=$(openssl rand -base64 32 | tr -cd '[:alnum:]')
  6. echo ${SPLUNK_ZEEK_HEC_TOKEN}
    1. Generate a random HEC token, print it to the screen, and save it in a safe location
  7. mkdir /opt/splunk/etc/apps/splunk_httpinput/local
  8. chown splunk:splunk -R /opt/splunk/etc/apps/splunk_httpinput/local
    1. Create Splunk HEC local config directory
  9. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/splunk/splunk-hec.conf --output /opt/splunk/etc/apps/splunk_httpinput/local/inputs.conf
    1. Download config for HEC
  10. mkdir /opt/splunk/etc/apps/search/local
  11. chown splunk:splunk -R /opt/splunk/etc/apps/search/local
    1. Create the Splunk search app local config directory
  12. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/splunk/splunk-hec-zeek.conf --output /opt/splunk/etc/apps/search/local/inputs.conf
    1. Download the Zeek HEC config
  13. sed -i "s/{{ SPLUNK_ZEEK_HEC_TOKEN }}/${SPLUNK_ZEEK_HEC_TOKEN}/g" /opt/splunk/etc/apps/search/local/inputs.conf
    1. Set the HEC token in the config
  14. systemctl restart splunk
  15. systemctl enable splunk
  16. systemctl status splunk
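For reference, the Zeek HEC config downloaded above boils down to a HEC token stanza in inputs.conf. The sketch below is hypothetical; the repo’s actual file may differ.

```
# Hypothetical HEC token stanza — the repo's splunk-hec-zeek.conf may differ
[http://zeek-hec]
token = <value of SPLUNK_ZEEK_HEC_TOKEN>
index = zeek
indexes = zeek
sourcetype = bro:conn:json
disabled = 0
```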

Install/Setup NGINX

  1. apt install nginx -y
  2. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/nginx/nginx.conf --output /etc/nginx/nginx.conf
    1. Download main NGINX config
  3. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/nginx/splunk.conf --output /etc/nginx/conf.d/splunk.conf
    1. Download NGINX to reverse proxy Splunk
  4. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/tls/tls.conf --output /etc/ssl/splunk_tls.conf
    1. Download TLS config
    2. Go to the “Generate OpenSSL private key and public cert” section at the top for more details
  5. openssl req -x509 -new -nodes -keyout /etc/ssl/private/nginx.key -out /etc/ssl/certs/nginx.crt -config /etc/ssl/splunk_tls.conf
    1. Use OpenSSL config to generate public certificate and private key
  6. systemctl restart nginx
  7. systemctl enable nginx
  8. systemctl status nginx
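The downloaded splunk.conf implements a reverse proxy in front of Splunk Web, which the previous section bound to localhost. A minimal hand-rolled sketch (not the repo’s exact config) looks like this:

```
# Minimal NGINX reverse proxy sketch for Splunk Web — not the repo's exact splunk.conf
server {
    listen 443 ssl;
    server_name splunk.example.com;

    ssl_certificate     /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key;

    location / {
        proxy_pass http://127.0.0.1:8000;   # Splunk Web bound to localhost
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```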

Setup UFW

  1. ufw allow OpenSSH
    1. Allow SSH access
  2. ufw allow 'NGINX http'
    1. Allow HTTP
  3. ufw allow 'NGINX https'
    1. Allow HTTPS
  4. ufw allow 5044/tcp
    1. Allow Logstash
  5. ufw allow 8088/tcp
    1. Allow access to HEC
  6. ufw allow 8089/tcp
    1. Allow Splunk API access
  7. ufw allow 9997/tcp
    1. Allow Splunk agents to report logs to Splunk
  8. ufw enable

Install/Setup Logstash

  1. wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    1. Add Elastic GPG key
  2. echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
    1. Add Elastic repo
  3. apt-get update && sudo apt-get install logstash -y
    1. Install Logstash
  4. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/splunk/02-inputs-beat.conf --output /etc/logstash/conf.d/02-inputs-beat.conf
    1. Download Logstash Beats input config
  5. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/splunk/30-output-splunk-hec.conf --output /etc/logstash/conf.d/30-output-splunk-hec.conf
    1. Download Logstash Splunk HEC output config
  6. sed -i "s/{{ SPLUNK_ZEEK_HEC_TOKEN }}/${SPLUNK_ZEEK_HEC_TOKEN}/g" /etc/logstash/conf.d/30-output-splunk-hec.conf
    1. Set the HEC token in the config
  7. mkdir /etc/logstash/tls
  8. openssl req -x509 -new -nodes -keyout /etc/logstash/tls/logstash.key -out /etc/logstash/tls/logstash.crt -config /etc/ssl/splunk_tls.conf
    1. Generate self-signed public certificate and private key
  9. chown logstash:logstash /etc/logstash/tls/logstash.key /etc/logstash/tls/logstash.crt
  10. chmod 644 /etc/logstash/tls/logstash.crt
  11. chmod 600 /etc/logstash/tls/logstash.key 
    1. Set the correct permissions for TLS private key and public certificate
  12. systemctl restart logstash
  13. systemctl enable logstash
  14. tail -f /var/log/logstash/logstash-plain.log
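The two downloaded configs boil down to a Beats input and a Splunk HEC output. The sketch below is a minimal hand-rolled equivalent, not the repo’s exact files; adjust the URL scheme and port to match your HEC’s SSL settings.

```
input {
  beats {
    port => 5044
    ssl  => true
    ssl_certificate => "/etc/logstash/tls/logstash.crt"
    ssl_key         => "/etc/logstash/tls/logstash.key"
  }
}
output {
  http {
    url         => "https://127.0.0.1:8088/services/collector/event"
    http_method => "post"
    format      => "json"
    headers     => { "Authorization" => "Splunk <value of SPLUNK_ZEEK_HEC_TOKEN>" }
  }
}
```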

Login into Splunk WebGUI

Open a browser to https://<IP addr of Splunk>:<port> (Docker setup: port 8443; Ansible setup: port 443)

  1. Enter admin for username
  2. Enter <Splunk admin password> for password
  3. Select “Sign-in”

Create Splunk index

  1. Settings > Data > Indexes
  2. Select “New index” in the top right
    1. Enter zeek into name
    2. Select “save”
  3. Repeat to create the osquery and sysmon indexes

Install Splunk TA apps for sourcetypes

  1. Log into Splunk
  2. Select “+ Find More Apps”
  3. Search for “Bro”
  4. Install Splunk Add-on for Zeek aka Bro
  5. Repeat for the osquery add-on and the Splunk Add-on for Microsoft Sysmon

Setup HEC logging input

All of the setups above automatically create, or provide the configs to create, the HEC input. This section provides manual instructions for creating a HEC token. Note that on the Docker setups the user will need to manually select the index and sourcetype for incoming data.

  1. Log into Splunk
  2. Settings > Data > Data inputs > HTTP Event Collector
  3. Select “Global settings”
    1. Make sure “Enabled” is selected for All tokens
    2. UNcheck “Enable SSL”
    3. Select “Save”
  4. Select “New token” in the top right
    1. Select source
      1. Enter zeek-hec into name
      2. Select “Next”
    2. Input settings
      1. Select source type based on data source
      2. Select “Search & Reporting (Searching)” for app context
      3. Select an index
      4. Select “Review”
    3. Review
      1. Select “Submit”
  5. Settings > Data > Data inputs > HTTP Event Collector

Enable Splunk universal forwarding logging

  1. Settings > Data > Forwarding and receiving
  2. Select “+ Add new” for Configuring receiving under Receive data
    1. Enter 9997 for port
    2. Select “Save”

Ingest Sysmon logs with Splunk agent on Windows 10

Install/setup of Sysmon on Windows 10

  1. Log into the Windows VM
  2. Open PowerShell as Administrator
  3. cd $ENV:TMP
  4. $ProgressPreference = 'SilentlyContinue'
    1. Disable download status bar
  5. Invoke-WebRequest -Uri https://download.sysinternals.com/files/Sysmon.zip -OutFile Sysmon.zip
    1. Download Sysmon
  6. Expand-Archive .\Sysmon.zip -DestinationPath .
    1. Unzip Sysmon
  7. Invoke-WebRequest -Uri https://raw.githubusercontent.com/olafhartong/sysmon-modular/master/sysmonconfig.xml -OutFile sysmonconfig.xml
    1. Download Sysmon config
  8. .\Sysmon.exe -accepteula -i .\sysmonconfig.xml
    1. Install Sysmon driver and load Sysmon config
  9. Enter eventvwr into Powershell
  10. Expand Application and Services Logs > Microsoft > Windows > Sysmon

Install/setup of Splunk universal forwarder on Windows 10

  1. cd $ENV:TEMP
  2. Invoke-Webrequest -Uri 'https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=windows&version=8.1.2&product=universalforwarder&filename=splunkforwarder-8.1.2-545206cc9f70-x64-release.msi&wget=true' -OutFile splunkforwarder-8.1.2-x64-release.msi -MaximumRedirection 3
    1. Download the Splunk universal forwarder
  3. msiexec.exe /i splunkforwarder-8.1.2-x64-release.msi RECEIVING_INDEXER="<Splunk FQDN or IP addr>:9997" AGREETOLICENSE=Yes /quiet
    1. Quietly install Splunk universal forwarder
  4. Invoke-WebRequest -Uri https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/splunk_agent/sysmon_windows_input.conf -OutFile 'C:\Program Files\SplunkUniversalForwarder\etc\system\local\inputs.conf'
    1. Download Splunk universal forwarder logging input config
  5. & 'C:\Program Files\SplunkUniversalForwarder\bin\splunk.exe' restart
    1. Restart the Splunk universal forwarder
  6. Splunk search: index="sysmon" eventtype="ms-sysmon-network"
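The input config downloaded in step 4 amounts to a Windows event log stanza pointed at the Sysmon channel. The sketch below is hypothetical; the repo’s file may differ.

```
# Hypothetical sketch — the repo's sysmon_windows_input.conf may differ
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
index = sysmon
renderXml = true   # the Splunk Add-on for Microsoft Sysmon parses the XML events
disabled = 0
```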

Ingest Osquery logs with Splunk agent on Ubuntu 20.04

Install/setup of Osquery on Ubuntu 20.04

  1. Log into VM with SSH
  2. sudo su
  3. export OSQUERY_KEY=1484120AC4E9F8A1A577AEEE97A80C63C9D8B80B
  4. sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys $OSQUERY_KEY
    1. Add Osquery GPG key for repo
  5. sudo add-apt-repository 'deb [arch=amd64] https://pkg.osquery.io/deb deb main'
    1. Add Osquery repo
  6. sudo apt-get update -y && sudo apt-get install osquery -y
    1. Install Osquery
  7. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/osquery/linux-osquery.conf --output /etc/osquery/osquery.conf
    1. Download Osquery config
    2. Copy of Palantir config
  8. sed -i 's#/etc/osquery/packs/ossec-rootkit.conf#/usr/share/osquery/packs/ossec-rootkit.conf#g' /etc/osquery/osquery.conf
  9. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/osquery/linux-osquery.flags --output /etc/osquery/osquery.flags
    1. Download Osquery flags
  10. systemctl restart osqueryd
  11. systemctl enable osqueryd
  12. systemctl status osqueryd

Install/setup of Splunk universal forwarder on Ubuntu 20.04

  1. wget -O /tmp/splunkforwarder-8.1.2-linux-2.6-amd64.deb 'https://www.splunk.com/bin/splunk/DownloadActivityServlet?architecture=x86_64&platform=linux&version=8.1.2&product=universalforwarder&filename=splunkforwarder-8.1.2-545206cc9f70-linux-2.6-amd64.deb&wget=true'
    1. Download Splunk universal forwarder
  2. dpkg -i /tmp/splunkforwarder-8.1.2-linux-2.6-amd64.deb
  3. /opt/splunkforwarder/bin/splunk enable boot-start --accept-license --answer-yes --no-prompt
    1. Enable Splunk universal forwarder to start on boot and accept license agreement
  4. /opt/splunkforwarder/bin/splunk add forward-server <Splunk FQDN or IP addr>:9997
    1. Setup where to forward logs
  5. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/splunk_agent/osquery_linux_input.conf > /opt/splunkforwarder/etc/system/local/inputs.conf
    1. Download Splunk universal forwarder logging input config
  6. /opt/splunkforwarder/bin/splunk restart
    1. Restart Splunk universal forwarder
  7. Splunk search: index="osquery" sourcetype="osquery:results"
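The input config downloaded in step 5 amounts to a file monitor on Osquery’s results log. The sketch below is hypothetical; the repo’s file may differ.

```
# Hypothetical sketch — the repo's osquery_linux_input.conf may differ
[monitor:///var/log/osquery/osqueryd.results.log]
index = osquery
sourcetype = osquery:results
disabled = 0
```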

Ingest Zeek logs with Filebeat on Ubuntu 20.04

Install/setup of Zeek on Ubuntu 20.04

  1. echo 'deb http://download.opensuse.org/repositories/security:/zeek/xUbuntu_20.04/ /' | sudo tee /etc/apt/sources.list.d/security:zeek.list
    1. Add Zeek repo
  2. curl -fsSL https://download.opensuse.org/repositories/security:zeek/xUbuntu_20.04/Release.key | gpg --dearmor | sudo tee /etc/apt/trusted.gpg.d/security_zeek.gpg > /dev/null
    1. Add Zeek repo GPG key
  3. sudo apt update -y && sudo apt install zeek jq -y
    1. Install Zeek
  4. /opt/zeek/bin/zkg install corelight/json-streaming-logs
    1. Install Corelight Zeek plugin for JSON logging
  5. echo -e "\n# Load ZKG packages\n@load packages" >> /opt/zeek/share/zeek/site/local.zeek
    1. Add entry to load ZKG packages
  6. echo -e "\n# Disable TSV logging\nconst JSONStreaming::disable_default_logs = T;" >> /opt/zeek/share/zeek/site/local.zeek
    1. Disable TSV logging
  7. echo -e "\n# JSON logging - time before rotating a file\nconst JSONStreaming::rotation_interval = 60mins;" >> /opt/zeek/share/zeek/site/local.zeek
    1. Set file log rotation to once an hour
  8. sed -i "s/interface=.*/interface=$(ip route list | grep default | awk '{print $5}')/g" /opt/zeek/etc/node.cfg
    1. Configure Zeek to monitor the default interface
    2. If your host has multiple interfaces set it to that interface
  9. /opt/zeek/bin/zeekctl install
  10. /opt/zeek/bin/zeekctl start
    1. Start Zeek
  11. /opt/zeek/bin/zeekctl status
  12. head -n 1 /opt/zeek/logs/current/json_streaming_conn.log | jq

Manual install/setup of Filebeat on Ubuntu 20.04

  1. wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    1. Add Elastic GPG key
  2. sudo apt-get install apt-transport-https -y
  3. echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
    1. Add Elastic repo
  4. sudo apt-get update -y && sudo apt-get install filebeat -y
    1. Install Filebeat
  5. mkdir /etc/filebeat/inputs.d
  6. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/filebeat/linux-filebeat.yml --output /etc/filebeat/filebeat.yml
    1. Download Filebeat config
  7. curl https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/filebeat/zeek-input.yml --output /etc/filebeat/inputs.d/zeek-input.yml
    1. Download Zeek log input config
  8. sed -i "s/{{ logstash_ip_addr }}/<Logstash FQDN or IP addr>/g" /etc/filebeat/filebeat.yml
    1. Set the remote Logstash server
  9. sed -i "s/{{ logstash_port }}/<Logstash port>/g" /etc/filebeat/filebeat.yml
    1. Set the remote Logstash BEATs port
  10. systemctl restart filebeat
  11. systemctl enable filebeat
  12. Splunk search: index="zeek" sourcetype="bro:conn:json"
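The Zeek input config downloaded in step 7 amounts to a Filebeat log input that decodes the JSON-streaming logs. The sketch below is hypothetical; the repo’s file may differ.

```
# Hypothetical sketch — the repo's zeek-input.yml may differ
- type: log
  enabled: true
  paths:
    - /opt/zeek/logs/current/json_streaming_*.log
  json.keys_under_root: true
  json.add_error_key: true
```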

Jumping off point

Lessons learned

I am currently reading a book called “Cracking the Coding Interview” and it is a great book. One interesting part of the book is its matrix for describing projects you have worked on, which contains the following sections: challenges, mistakes/failures, enjoyed, leadership, conflicts, and what you would do differently. I am going to try to use this model at the end of my blog posts to summarize and reflect on the things I learned. I don’t blog to post things that I already know; I blog to learn new things and to share the knowledge from my security research.

New skills/knowledge

  • Install Zeek from pre-built package
  • Install and setup Splunk universal forwarder
  • Learned about the Splunk CIM
  • Learned how to install Splunk apps for common logging platforms for CIM compliance

Challenges

  • Using NON-Splunk tools adds an additional level of challenges to get things working

What I would have done better

  • I would have liked to explore setting up a Splunk cluster but decided it was best to keep it simple for this blog post

References
