IR Tales: The Quest for the Holy SIEM: Graylog + AuditD + Osquery

This blog post is the second in a series demonstrating how to install and set up common SIEM platforms. The ultimate goal of each post is to empower the reader to choose their own adventure by selecting the best SIEM for their goals and requirements. Each post in the series provides Docker-compose v2, Docker-compose for Swarm, Ansible, Vagrant, and manual instructions so the reader can set up each platform with the deployment method of their choosing. This post covers how to set up Graylog with Elasticsearch and MongoDB. In addition to setting up Graylog, I will provide instructions to install Osquery + Filebeat on Windows and AuditD + Auditbeat on Ubuntu to ship logs to Graylog.

Goals

  • Setup Graylog stack with Docker
  • Setup Graylog with Ansible
  • Setup Graylog stack with Vagrant
  • Setup Graylog with manual instructions
  • Test Graylog with Python script
  • Ingest AuditD logs into Graylog with Auditbeat
  • Ingest Osquery logs into Graylog with Filebeat

Update log

  • July 15th 2021 – Updated Docker and Ansible playbooks from Graylog v4.0 to v4.1
  • July 15th 2021 – Updated Docker and Ansible playbooks from Elasticsearch v7.10 to v7.13
  • August 30th 2021 – Added Vagrantfile for Graylog
  • October 24th 2021 – Updated Docker and Ansible playbooks from Graylog v4.1 to v4.2
  • December 30th 2021 – Updated Docker and Ansible playbooks from Graylog v4.2 to v4.2.4
    • This update mitigates the Log4j vulnerability (Log4Shell)

Background

What is Graylog?

Graylog is a leading centralized log management solution built to open standards for capturing, storing, and enabling real-time analysis of terabytes of machine data. Purpose-built for modern log analytics, Graylog removes complexity from data exploration, compliance audits, and threat hunting so you can quickly and easily find meaning in data and take action faster.

What is AuditD?

The Linux Auditing System is a native feature of the Linux kernel that collects certain types of system activity to facilitate incident investigation. The Linux Auditing subsystem is capable of monitoring three distinct items:

  • System calls: See which system calls were called, along with contextual information like the arguments passed to it, user information, and more.
  • File access: This is an alternative way to monitor file access activity, rather than directly monitoring the open system call and related calls.
  • Select, pre-configured auditable events within the kernel: Red Hat maintains a list of these types of events.

What is Auditbeat?

Collect your Linux audit framework data and monitor the integrity of your files. Auditbeat ships these events in real time to the rest of the Elastic Stack for further analysis.

What is Osquery?

Osquery exposes an operating system as a high-performance relational database. This allows you to write SQL-based queries to explore operating system data. With osquery, SQL tables represent abstract concepts such as running processes, loaded kernel modules, open network connections, browser plugins, hardware events or file hashes.
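For a flavor of what that looks like in practice, a query like the following (an illustrative example of mine, using osquery's standard processes and listening_ports tables) returns every process that has a listening socket:

```sql
-- Join listening sockets back to the owning process
SELECT p.name, p.path, lp.address, lp.port
FROM listening_ports AS lp
JOIN processes AS p ON lp.pid = p.pid
WHERE lp.port != 0;
```

Queries like this can be run interactively with osqueryi or scheduled in osquery.conf, which is how the scheduled results end up in the log file that Filebeat ships later in this post.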

What is Filebeat?

Filebeat is a lightweight shipper for forwarding and centralizing log data. Installed as an agent on your servers, Filebeat monitors the log files or locations that you specify, collects log events, and forwards them to Logstash for indexing.

Network diagram

Generate OpenSSL public certificate and private key

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. vim conf/tls/tls.conf and set:
    1. Set the location information under [dn]
      1. C – Set Country
      2. ST – Set state
      3. L – Set city
      4. O – Enter organization name
      5. emailAddress – Enter a valid e-mail for your org
    2. Replace example.com in all fields with your domain
    3. For alt names list all the valid DNS records for this cert
    4. Save and exit
  4. openssl req -x509 -new -nodes -keyout conf/tls/tls.key -out conf/tls/tls.crt -config conf/tls/tls.conf
    1. Generate TLS private key and public certificate
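As a sketch, a minimal tls.conf and the matching openssl invocation might look like the following. Every value here is a placeholder of mine (org, domain, alt names), not content from the repo; substitute your own before generating the cert:

```shell
mkdir -p conf/tls

# Minimal self-signed cert config -- all [dn] and [alt_names] values are placeholders
cat > conf/tls/tls.conf <<'EOF'
[req]
default_bits       = 2048
prompt             = no
default_md         = sha256
distinguished_name = dn
x509_extensions    = v3_req

[dn]
C            = US
ST           = Michigan
L            = Detroit
O            = ExampleOrg
CN           = graylog.example.com
emailAddress = security@example.com

[v3_req]
subjectAltName = @alt_names

[alt_names]
DNS.1 = graylog.example.com
DNS.2 = siem.example.com
EOF

# Generate the private key and self-signed public certificate
openssl req -x509 -new -nodes \
  -keyout conf/tls/tls.key -out conf/tls/tls.crt \
  -config conf/tls/tls.conf
```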

Install Graylog with Docker-compose v2.x

WARNING

The Docker-compose v2.x setup is for development use ONLY. The setup contains hard-coded credentials in configs and environment variables. For a more secure Docker deployment please skip to the next section to use Docker Swarm which implements Docker secrets.

WARNING

Spin up stack

    1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
    2. cd ChooseYourSIEMAdventure
    3. vim .env and set:
      1. GRAYLOG_VERSION – OPTIONAL – Set the version of Graylog you want to use
      2. GRAYLOG_ELATICSEARCH_VERSION – OPTIONAL – Set the version of Elasticsearch to use with Graylog
      3. SIEM_USERNAME – LEAVE AS DEFAULT – this cannot be modified
      4. SIEM_PASSWORD – Set the admin password for Graylog
      5. NGINX_VERSION – OPTIONAL – Set the version of NGINX you want to use
      6. Save and exit
    4. sed -i '' "s/GRAYLOG_PASSWORD_SECRET=.*/GRAYLOG_PASSWORD_SECRET=$(openssl rand -base64 32)/g" docker-compose-graylog.yml
      1. Set GRAYLOG_PASSWORD_SECRET to a random value
    5. echo $(cat .env | grep SIEM_PASSWORD | awk -F'[/=]' '{print $2}') | tr -d '\n' | openssl sha256 | cut -d" " -f2 | xargs -I '{}' sed -i '' 's/GRAYLOG_ROOT_PASSWORD_SHA2=.*/GRAYLOG_ROOT_PASSWORD_SHA2={}/g' docker-compose-graylog.yml
      1. Based on the SIEM_PASSWORD defined in .env, this command will generate a SHA256 of that password. Next, it will set the Graylog environment variable(GRAYLOG_ROOT_PASSWORD_SHA2) to the SHA256 hash.
    6. docker-compose -f docker-compose-graylog.yml build

 

  1. docker-compose -f docker-compose-graylog.yml up
  2. docker exec -it siem-graylog-graylog /usr/share/graylog/generate_beats_input.sh
    1. Create Beats input
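The one-liner in step 5 is dense; stripped of the sed plumbing, the hash it produces is just the SHA256 of the admin password. A standalone sketch (with a placeholder password of mine, not a real default):

```shell
# Placeholder password -- use the SIEM_PASSWORD value from your .env
SIEM_PASSWORD='changeme'

# Hash it the way Graylog expects GRAYLOG_ROOT_PASSWORD_SHA2 to be generated:
# SHA256 of the raw password, no trailing newline
GRAYLOG_ROOT_PASSWORD_SHA2=$(printf '%s' "${SIEM_PASSWORD}" | openssl sha256 | cut -d' ' -f2)
echo "${GRAYLOG_ROOT_PASSWORD_SHA2}"
```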

Install Graylog with Docker-compose v3.x (Swarm)

Unfortunately, Docker-compose version 3.x (Docker Swarm) doesn’t interact with the .env file the same way as v2.x. The trade-off is a more secure deployment of the Graylog stack because secrets are not hardcoded in configs or stored in environment variables. Below we create Docker secrets that hold these sensitive values for use by the Docker containers.

Furthermore, changes made in the .env file, such as the container version or the Mongo database name, will have no effect. These settings need to be changed in the docker-compose-swarm-graylog.yml file. Lastly, another benefit of Docker Swarm is that we can run multiple instances (replicas) of Graylog for high availability.

Generate secrets

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. GRAYLOG_MONGO_USERNAME=graylog-mongo
    1. Set Mongo username for Graylog
  4. GRAYLOG_MONGO_PASSWORD=$(openssl rand -base64 32 | tr -cd '[:alnum:]')
    1. Generate Mongo password for Graylog user
  5. echo ${GRAYLOG_MONGO_USERNAME} | docker secret create graylog-mongo-username -
    1. Create Docker secret with Mongo username
  6. echo ${GRAYLOG_MONGO_PASSWORD} | docker secret create graylog-mongo-password -
    1. Create Docker secret with Mongo password
  7. echo "mongodb://${GRAYLOG_MONGO_USERNAME}:${GRAYLOG_MONGO_PASSWORD}@mongo:27017/graylog?authSource=admin&authMechanism=SCRAM-SHA-1" | docker secret create graylog-mongo-uri -
    1. Generate Mongo URI string for Graylog containing credentials
  8. GRAYLOG_ES_PASSWORD=$(openssl rand -base64 32 | tr -cd '[:alnum:]')
    1. Generate Elasticsearch password
  9. echo ${GRAYLOG_ES_PASSWORD} | docker secret create graylog-elasticsearch-password -
    1. Create Docker secret with Elasticsearch password
  10. openssl rand -base64 32 | tr -cd '[:alnum:]' | docker secret create graylog-password-secret -
    1. Generate graylog-password-secret
  11. echo "http://elastic:${GRAYLOG_ES_PASSWORD}@elasticsearch:9200" | docker secret create graylog-es-uri -
    1. Generate Elasticsearch URI string for Graylog containing credentials
  12. unset GRAYLOG_MONGO_USERNAME GRAYLOG_MONGO_PASSWORD GRAYLOG_ES_PASSWORD
    1. Unset environment variables
  13. echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | openssl sha256 | cut -d" " -f2 | docker secret create graylog-root-password-sha2 -
    1. Generate SHA256 hash of admin password for Graylog
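The `openssl rand | tr` pattern used throughout these steps produces an alphanumeric-only secret, which matters because the value gets embedded in connection URIs. As a quick sketch:

```shell
# Generate 32 bytes of randomness, base64-encode it, then strip
# everything except [a-zA-Z0-9]
GRAYLOG_MONGO_PASSWORD=$(openssl rand -base64 32 | tr -cd '[:alnum:]')

# The result is safe to embed in a mongodb:// URI (no ':', '@', or '/')
echo "${GRAYLOG_MONGO_PASSWORD}"
```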

Spin up stack

  1. docker stack deploy -c docker-compose-swarm-graylog.yml graylog
  2. docker service logs -f graylog_graylog
    1. Monitor Graylog logs
    2. Wait for INFO : org.graylog2.bootstrap.ServerBootstrap - Graylog server up and running.
  3. docker exec -it $(docker ps | grep graylog_graylog | awk '{print $1}') /usr/share/graylog/generate_beats_input.sh
    1. Create Beats input
    2. Enter admin for username
    3. Enter <graylog admin password>

Install Graylog on Ubuntu 20.04 with Ansible

WARNING

This Ansible playbook will allocate half of the system's memory to Elasticsearch. For example, if a machine has 16 GB of memory, 8 GB will be allocated to Elasticsearch.

WARNING

Init playbook

  1. vim hosts.ini add IP of Elastic server under [graylog]
  2. vim group_vars/all.yml and set:
    1. base_domain – Set the domain where the server resides
    2. timezone – OPTIONAL – The default timezone is UTC+0
    3. siem_username – Ignore this setting
    4. siem_password – Ignore this setting
  3. vim group_vars/graylog.yml and set:
    1. hostname – Set the desired hostname for the server
    2. graylog_version – Set the desired version of Graylog to use
    3. beats_port – OPTIONAL – Set the port to ingest logs from Beats clients
    4. elastic_version – OPTIONAL – Set the desired version of Elasticsearch to use with Graylog – best to leave as default
      1. elastic_repo_version – Change the repo version used to install the Elastic stack
    5. mongo_version – OPTIONAL – Set the desired version of Mongo to use with Graylog – best to leave as default
    6. mongo_admin_username – OPTIONAL – Set Mongo admin username – best to leave as default
    7. mongo_admin_password – Set the Mongo admin user password
    8. mongo_graylog_username – Set Mongo username for Graylog user
    9. mongo_graylog_password – Set Mongo password for Graylog user
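To make the shape of that file concrete, here is an illustrative group_vars/graylog.yml. These values are examples of mine, not the repo's defaults:

```yaml
hostname: graylog01
graylog_version: "4.2"
beats_port: 5044
elastic_version: "7.13.2"
mongo_version: "4.4"
mongo_admin_username: admin
mongo_admin_password: "<strong password>"
mongo_graylog_username: graylog
mongo_graylog_password: "<strong password>"
```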

Run playbook

  1. ansible-playbook -i hosts.ini deploy_graylog.yml -u <username> -K
    1. Enter password

Install Graylog with Vagrant

  1. git clone https://github.com/CptOfEvilMinions/ChooseYourSIEMAdventure
  2. cd ChooseYourSIEMAdventure
  3. VAGRANT_VAGRANTFILE=Vagrantfile-graylog vagrant up
    1. Spin up VM with Graylog

Manual install of Graylog on Ubuntu 20.04

WARNING

These manual instructions will allocate half of the system's memory to Elasticsearch. For example, if a machine has 16 GB of memory, 8 GB will be allocated to Elasticsearch.

WARNING

Init host

  1. sudo su
  2. timedatectl set-timezone Etc/UTC
    1. Set the system timezone to UTC +0
  3. apt update -y && apt upgrade -y && reboot
  4. apt install apt-transport-https openjdk-8-jre-headless uuid-runtime pwgen gnupg -y
    1. Install Java and necessary tools

Install/Setup Mongo

  1. wget -qO - https://www.mongodb.org/static/pgp/server-4.2.asc | apt-key add -
    1. Add Mongo GPG key
  2. echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.2 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.2.list
    1. Add Mongo repo
  3. apt-get update -y && apt-get install -y mongodb-org
    1. Install mongo
  4. systemctl start mongod
  5. systemctl enable mongod
  6. mongo --port 27017
    1. Enter Mongo shell
  7. use admin
  8. Create admin user
    1. db.createUser({user: "admin", pwd: passwordPrompt(), roles: [{ role: "root", db: "admin" }]});
    2. Enter password
  9. Create Graylog database and user
    1. use graylog
      1. Create database
    2. db.createUser( {user: "graylog", pwd: passwordPrompt(), roles: ["readWrite","dbAdmin" ] })
      1. Create graylog user
      2. Enter password
  10. exit
  11. sed -i "s/#security:/security:\n authorization: enabled/g" /etc/mongod.conf
    1. Enable authentication on Mongo
  12. systemctl restart mongod
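After the sed in step 11 runs, the relevant block of /etc/mongod.conf should read:

```yaml
security:
  authorization: enabled
```

With authorization enabled, every connection (including Graylog's) must authenticate with one of the users created above.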

Install/Setup Elasticsearch

  1. wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    1. Add Elastic GPG key
  2. echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
    1. Add Elastic repo
  3. apt-get update && sudo apt-get install elasticsearch -y
    1. Install Elasticsearch
  4. sed -i "s#-Xms1g#-Xms$(echo $(( $(dmidecode -t 17 | grep 'Size: ' | awk '{print $2}') / 2 ))"M")#g" /etc/elasticsearch/jvm.options
    1. Setting the initial size of the total heap size for Elasticsearch
  5. sed -i "s#-Xmx1g#-Xmx$(echo $(( $(dmidecode -t 17 | grep 'Size: ' | awk '{print $2}') / 2 ))"M")#g" /etc/elasticsearch/jvm.options
    1. Setting the maximum size of the total heap size for Elasticsearch
  6. echo 'xpack.security.enabled: true' >> /etc/elasticsearch/elasticsearch.yml
    1. Enable X-Pack security
  7. echo 'xpack.security.transport.ssl.enabled: true' >> /etc/elasticsearch/elasticsearch.yml
    1. Enable X-Pack security transport SSL
  8. echo 'discovery.type: single-node' >> /etc/elasticsearch/elasticsearch.yml
    1. Set to a single-node mode
  9. sed -i 's/#cluster.name: my-application/cluster.name: graylog/g' /etc/elasticsearch/elasticsearch.yml
    1. Set cluster name
  10. echo 'action.auto_create_index: .watches,.triggered_watches,.watcher-history-*' >> /etc/elasticsearch/elasticsearch.yml
    1. Allow Graylog to control creating indexes
  11. systemctl start elasticsearch
  12. systemctl enable elasticsearch
  13. yes | /usr/share/elasticsearch/bin/elasticsearch-setup-passwords -s auto | grep 'PASSWORD' > /tmp/elasticsearch-setup-passwords.txt
    1. Generate Elasticsearch passwords
  14. cat /tmp/elasticsearch-setup-passwords.txt
    1. Print contents
  15. curl -u elastic:<Elastic password> http://127.0.0.1:9200
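The dmidecode pipeline in steps 4–5 computes half of physical memory in MB. An equivalent sketch using /proc/meminfo (my alternative for illustration, not the command these instructions use) makes the arithmetic easier to follow:

```shell
# MemTotal is reported in KiB; convert to MiB, then halve it
half_mb=$(( $(grep MemTotal /proc/meminfo | awk '{print $2}') / 1024 / 2 ))

# These are the values written into /etc/elasticsearch/jvm.options
echo "-Xms${half_mb}M"
echo "-Xmx${half_mb}M"
```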

Create Graylog user on Elasticsearch

  1. elastic_es_password=$(cat /tmp/elasticsearch-setup-passwords.txt | grep 'PASSWORD elastic' | awk '{print $4}')
    1. Extract Elastic password
  2. curl -u elastic:$elastic_es_password -X POST http://localhost:9200/_xpack/security/role/graylog -H 'Content-Type: application/json' -d '{"cluster": ["manage_index_templates", "monitor", "manage_ilm"], "indices": [{ "names": [ "*" ], "privileges": ["write","create","delete","create_index","manage","manage_ilm","read","view_index_metadata"]}]}'
    1. Create Graylog role
  3. echo "PASSWORD graylog = $(openssl rand -base64 32 | tr -cd '[:alnum:]')" >> /tmp/elasticsearch-setup-passwords.txt
    1. Create Graylog password for Elastic and add it to temporary credentials file
  4. graylog_es_password=$(cat /tmp/elasticsearch-setup-passwords.txt | grep 'PASSWORD graylog' | awk '{print $4}')
    1. Extract Graylog password
  5. curl -u elastic:$elastic_es_password -X POST http://localhost:9200/_security/user/graylog -H 'Content-Type: application/json' -d "{ \"password\" : \"$graylog_es_password\", \"roles\" : [ \"graylog\" ], \"full_name\" : \"Graylog system account\", \"email\" : \"graylog_system@local\"}"
    1. Create Graylog user

Install/Setup Graylog

  1. cd /tmp && wget https://packages.graylog2.org/repo/packages/graylog-4.2-repository_latest.deb
    1. Download Graylog repo
  2. dpkg -i graylog-4.2-repository_latest.deb
    1. Install Graylog repo
  3. apt update -y && apt install graylog-server graylog-integrations-plugins -y
    1. Install Graylog
  4. echo -n "Enter Password: " && head -1 </dev/stdin | tr -d '\n' | openssl sha256 | cut -d" " -f2 | xargs -I '{}' sed -i "s/root_password_sha2 =.*/root_password_sha2 = {}/g" /etc/graylog/server/server.conf
    1. Enter admin password for Graylog
    2. Generate SHA256 hash of password and set root_password_sha2
  5. sed -i "s/password_secret =.*/password_secret = $(pwgen -N 1 -s 96)/g" /etc/graylog/server/server.conf
    1. Generate and set password_secret
  6. sed -i "s#mongodb_uri =.*#mongodb_uri = mongodb://graylog:<Graylog Mongo password>@localhost:27017/graylog#g" /etc/graylog/server/server.conf
    1. Set Mongo URI with Graylog Mongo username and password
  7. sed -i 's|^#elasticsearch_hosts =.*|elasticsearch_hosts = http://graylog:<Graylog ES password>@localhost:9200|g' /etc/graylog/server/server.conf
    1. Set the Elasticsearch URI with the graylog ES user created earlier and its password
  8. systemctl restart graylog-server
  9. systemctl enable graylog-server

Install/Setup NGINX

  1. apt install nginx -y
    1. Install nginx
  2. wget https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/nginx/nginx.conf -O /etc/nginx/nginx.conf
    1. Download NGINX.conf
  3. wget https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/ansible/nginx/graylog.conf -O /etc/nginx/conf.d/graylog.conf
    1. Download Graylog config for NGINX
  4. wget https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/tls/tls.conf -O /etc/ssl/tls.cnf
    1. Download OpenSSL config
  5. openssl req -x509 -new -nodes -keyout /etc/ssl/private/nginx.key -out /etc/ssl/certs/nginx.crt -config /etc/ssl/tls.cnf
    1. Generate private key and public certificate for NGINX
  6. systemctl restart nginx
  7. systemctl enable nginx

Setup UFW

  1. ufw allow OpenSSH
  2. ufw allow 5044/tcp
  3. ufw allow 'NGINX HTTP'
  4. ufw allow 'NGINX HTTPS'
  5. ufw enable

Create Beats input

  1. SSH into Graylog server
  2. mkdir -p /etc/graylog/tls
  3. openssl req -x509 -new -nodes -keyout /etc/graylog/tls/graylog.key -out /etc/graylog/tls/graylog.crt -config /etc/ssl/tls.cnf
    1. Generate private key and public certificate for Graylog
  4. chown graylog:graylog /etc/graylog/tls/graylog.crt /etc/graylog/tls/graylog.key
  5. chmod 600 /etc/graylog/tls/graylog.key
  6. chmod 644 /etc/graylog/tls/graylog.crt
    1. Set the proper permissions for private key and public certificate for Graylog
  7. cd /tmp && wget https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/docker/graylog/generate_beats_input_docker_swarm.sh -O /tmp/generate_beats_input_docker_swarm.sh
  8. chmod +x generate_beats_input_docker_swarm.sh
  9. sed -i 's#/usr/share/graylog#/etc/graylog#g' generate_beats_input_docker_swarm.sh
  10. ./generate_beats_input_docker_swarm.sh
    1. Enter username
    2. Enter password

Login into Graylog WebGUI

  1. Open a browser to https://<IP addr of Graylog>:<Docker port is 8443, Ansible port is 443>
    1. Enter admin for username
    2. Enter <graylog admin password> for password
    3. Select “Sign-in”

Test Graylog pipeline

  1. cd ChooseYourSIEMAdventure/pipeline_testers
  2. virtualenv -p python3 venv
  3. source venv/bin/activate && pip3 install -r requirements.txt
  4. python3 beats_input_test.py --platform graylog --host <IP addr of Graylog> --api_port <Ansible - 443, Docker - 8443> --ingest_port <Logstash port - default 5044> --siem_username admin --siem_password <Graylog admin password>
  5. Login into Graylog
  6. Select “Search” at the top
  7. Enter <random message> into search

Ingest Osquery logs with Filebeat on Windows 10

Install/Setup Osquery v4.6.0 on Windows 10

  1. Login into Windows
  2. Open Powershell as an Administrator
  3. cd $ENV:TEMP
    1. Cd to user’s temp directory
  4. Invoke-WebRequest -Uri https://pkg.osquery.io/windows/osquery-4.6.0.msi -OutFile osquery-4.6.0.msi -MaximumRedirection 3
    1. Download Osquery
  5. Start-Process $ENV:TEMP\osquery-4.6.0.msi -ArgumentList '/quiet' -Wait
    1. Install Osquery
  6. Invoke-WebRequest -Uri https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/osquery/windows-osquery.flags -OutFile 'C:\Program Files\osquery\osquery.flags'
    1. Download Osquery.flags config
  7. Invoke-WebRequest -Uri https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/osquery/windows-osquery.conf -OutFile 'C:\Program Files\osquery\osquery.conf'
    1. Download Osquery.conf config
  8. Restart-Service osqueryd

Install/Setup Filebeat on Windows 10

  1. Login into Windows
  2. Open Powershell as an Administrator
  3. cd $ENV:TEMP
    1. Cd to user’s temp directory
  4. $ProgressPreference = 'SilentlyContinue'
    1. Disable progress bar
    2. StackOverFlow – Powershell – Why is Using Invoke-WebRequest Much Slower Than a Browser Download?
  5. Invoke-WebRequest -Uri https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.0-windows-x86_64.zip -OutFile filebeat-7.10.0-windows-x86_64.zip -MaximumRedirection 3
    1. Download Filebeat
  6. Expand-Archive .\filebeat-7.10.0-windows-x86_64.zip -DestinationPath .
    1. Unzip Filebeat
  7. mv .\filebeat-7.10.0-windows-x86_64 'C:\Program Files\filebeat'
    1. Move Filebeat to the Program Files directory
  8. cd 'C:\Program Files\filebeat\'
    1. Enter Filebeat directory
  9. Invoke-WebRequest -Uri https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/filebeat/windows-filebeat.yml -OutFile 'C:\Program Files\filebeat\filebeat.yml'
    1. Download the Filebeat config for Graylog
  10. Using your favorite text editor open C:\Program Files\filebeat\filebeat.yml
    1. Open the document from the command line with Visual Studio Code: code .\filebeat.yml
    2. Open the document from the command line with Notepad: notepad.exe .\filebeat.yml
  11. Scroll down to the output.logstash:
    1. Replace logstash_ip_addr with the IP address or FQDN of the Graylog server
    2. Replace logstash_port with the port Logstash uses to ingest Beats (default 5044)
  12. .\filebeat.exe modules enable osquery
    1. Enable Osquery module
  13. (Get-Content 'C:\Program Files\filebeat\module\osquery\result\manifest.yml').Replace('C:/ProgramData/osquery', 'C:/Program Files/osquery') | Set-Content 'C:\Program Files\filebeat\module\osquery\result\manifest.yml'
    1. Replace OLD Osquery logging location with new location
  14. powershell -Exec bypass -File .\install-service-filebeat.ps1
    1. Install Filebeat service
  15. Set-Service -Name "filebeat" -StartupType automatic
  16. Start-Service -Name "filebeat"
  17. Get-Service -Name "filebeat"
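After step 11, the output section of filebeat.yml should look something like this (hypothetical hostname shown):

```yaml
output.logstash:
  # Graylog Beats input (default port 5044)
  hosts: ["graylog.example.com:5044"]
```

Filebeat's output.logstash section speaks the Beats protocol, which is why it can point at Graylog's Beats input instead of an actual Logstash instance.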

Ingest AuditD logs with Auditbeat on Ubuntu 20.04

Install/Setup AuditD on Ubuntu 20.04

  1. apt update -y && apt upgrade -y
  2. apt install auditd -y
    1. Install AuditD
  3. wget https://raw.githubusercontent.com/Neo23x0/auditd/master/audit.rules -O /etc/audit/rules.d/audit.rules
    1. Download open-source ruleset
  4. systemctl restart auditd
    1. Load rules
  5. auditctl -l
    1. List newly loaded rules
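The Neo23x0 ruleset is extensive; to give a flavor of the syntax, rules like these (illustrative examples of mine, not necessarily verbatim from that file) watch a file and log process execution:

```
# Watch /etc/passwd for writes and attribute changes, tagged "passwd_changes"
-w /etc/passwd -p wa -k passwd_changes

# Log every execve() on 64-bit syscalls, tagged "exec_log"
-a always,exit -F arch=b64 -S execve -k exec_log
```

The -k key tags matching events so they can be filtered later, both in ausearch and in Graylog once Auditbeat ships them.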

Install/Setup AuditBeat on Ubuntu 20.04

  1. wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
    1. Add Elastic key
  2. apt-get install apt-transport-https -y
  3. echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-7.x.list
    1. Add Elastic repo
  4. apt update -y && apt install auditbeat -y
    1.  Install AuditBeat
  5. wget https://raw.githubusercontent.com/CptOfEvilMinions/ChooseYourSIEMAdventure/main/conf/auditbeat/auditbeat.yml -O /etc/auditbeat/auditbeat.yml
  6. sed -i 's/{{ logstash_ip_addr }}/<IP addr of Graylog>/g' /etc/auditbeat/auditbeat.yml
  7. sed -i 's/{{ logstash_port }}/<Port of Beats input - default 5044>/g' /etc/auditbeat/auditbeat.yml
  8. systemctl restart auditbeat
  9. systemctl enable auditbeat

Create Graylog indexes

  1. System > Indices
  2. Select “Create index set” in the top right
    1. Enter Osquery for name
    2. Enter Osquery logs for description
    3. Enter osquery for index prefix
    4. Leave everything as default
    5. Select “Save”
  3. Repeat the steps above for AuditD

Create Graylog stream

  1. Select “Streams” at the top
  2. Select “Create stream” in the top right
    1. Enter Osquery-stream for name
    2. Enter Osquery stream for description
    3. Select Osquery for index set
    4. Check “Remove matches from ‘All messages’ stream”
    5. Select “Save”
  3. Repeat the steps above for AuditD
  4. Select “Manage Rules” for Osquery stream
  5. Select “Add stream rule” on the right
    1. Enter filebeat_service_type for field
    2. Select match exactly for type
    3. Enter osquery for value
    4. Select “Save”
  6. Select “I’m done” in bottom left
  7. Select “Start stream” for Osquery stream
  8. Repeat the steps above for AuditD
  9. Select “Search” at the top
  10. Enter Osquery-stream into stream selector
  11. Hit enter

Lessons learned

New skills/knowledge

  • Learned Graylog 4.0 and 4.1
  • Learned how to use the Graylog API
  • Learned how to secure Mongo
  • Implemented authentication on Elasticsearch
