Making Damn Vulnerable Web Application (DVWA) almost unhackable with Cilium and Tetragon

In this universe of ever-changing landscapes and unlimited hackers, defenders are searching for a hero to help defend the Kubernetes universe. To paraphrase Nick Fury, we need to assemble a group of remarkable tools that can work together to fight the battles the vulnerable apps can’t. In this blog post, we are going to assemble our own Avengers team using Cilium and Tetragon to defend the Damn Vulnerable Web Application (DVWA) against the unearthly invaders, rendering it almost unhackable. Tetragon + Cilium will provide process, file, HTTP, and network-based defenses to thwart the known evils of the OWASP Top 10. DVWA is a web app that was intentionally designed to be vulnerable to the OWASP Top 10 as a training resource. Lastly, I will end with a director’s commentary containing my opinions on integrating Cilium + Tetragon in an enterprise and some ideas for closing the gap between developers and security.

Continue reading

Spark, JupyterHub, Minio, and Helm on Kubernetes

At work we recently got Databricks, which utilizes open-source technologies under the hood. This got me wondering whether I could create a Databricks equivalent with open-source software in my homelab. Over the Thanksgiving holiday week, I started playing around with deploying JupyterHub, Minio, and Spark on Kubernetes with Helm. I was able to get a working proof of concept (PoC) that allowed me to read raw log data from Minio using Spark jobs initiated from a Python Jupyter notebook, ingest those events into a Spark schema, write that data as a Delta table, and then query said Delta table from a Jupyter notebook.
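To give a sense of the PoC, here is a minimal sketch of that notebook flow, assuming a Spark cluster that already has the Delta Lake and S3A (hadoop-aws) JARs available; the Minio endpoint, bucket names, and paths are hypothetical placeholders rather than the exact values from the post:

```python
# Minimal sketch of the notebook workflow: read raw logs from Minio (S3A),
# write them out as a Delta table, then query the table back with Spark SQL.
# Endpoint, bucket names, and paths are hypothetical placeholders.
# (S3A access/secret keys are omitted; assumed to be set via Hadoop config.)
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("minio-delta-poc")
    # Assumes the Delta Lake and hadoop-aws JARs are already on the cluster
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .config("spark.hadoop.fs.s3a.endpoint", "http://minio.example.local:9000")
    .config("spark.hadoop.fs.s3a.path.style.access", "true")
    .getOrCreate()
)

# Read raw JSON log events from a Minio bucket
raw_logs = spark.read.json("s3a://raw-logs/zeek/")

# Write the events out as a Delta table in another bucket
raw_logs.write.format("delta").mode("overwrite").save("s3a://lakehouse/zeek_events")

# Query the Delta table back from the notebook
events = spark.read.format("delta").load("s3a://lakehouse/zeek_events")
events.createOrReplaceTempView("zeek_events")
spark.sql("SELECT COUNT(*) AS event_count FROM zeek_events").show()
```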

Continue reading

A hero’s journey to learning and conquering the Kubernetes tech beast

As the title suggests, learning Kubernetes (k8s) was your typical hero’s journey for me. I set out on an adventure to conquer all there is to know about the Kubernetes beast. Alas, my attempts were thwarted by the frustratingly vast chasm of fire that is the Kubernetes lair. Like any good hero, I sharpened my wit and sought a wealth of knowledge from an infinite library until I happened upon a powerful Wizard whose guidance led me to finally overcome the beast.

This blog post, like the millions of blog posts that already exist on the internet, will provide instructions on how to set up a MicroK8s cluster; however, I will also provide helpful additions to enable you to use Kubernetes in a homelab environment. Hopefully this foundational services guide, or “Act 2” of learning Kubernetes, will provide a stronger understanding so that you feel empowered to go and explore more applications on your own. The additions I found to be foundational in my homelab are: using your Synology device as persistent storage, creating an ingress controller/load balancer using Traefik + cert-manager, deploying our first web app (Cyberchef), and monitoring our cluster with Prometheus + Grafana. I will also recite heroic tales of overcoming the tribulations I encountered along the way. My hope is that this blog post provides a bootstrapped cluster with the capabilities that will enable beginners learning Kubernetes to continue their own heroic journey.

Continue reading

Part 3: Intro to threat hunting – Hunting the imposter among us with the Elastic stack and Sysmon

This blog post series is for anyone who has ever had an interest in threat hunting but did not have the knowledge of how or where to start, what tools they need, or what to hunt for. In this blog post, I will introduce an informal threat hunting process by hunting the APT-style attack performed during the red team exercise in the previous blog post. The theme of this blog post is to demonstrate how to hunt and detect malicious activity at each stage of the Mandiant Attack Lifecycle to create a fundamental framework for hunting adversaries. This blog post is a written adaptation of my DefCon 2020 Blue Team Village workshop. It utilizes the same ideas and techniques from that workshop, reiterating the specifics and key points for the greater InfoSec community to use.

In this blog series, we have a fictitious advanced persistent threat (APT) code-named Goofball. They have been known to steal intellectual property, and the Hackinglab corporation just released a press statement about a new widget that will revolutionize the world. This blog post is going to embark on a quest to hunt for the existence of Goofball in the Hackinglab corporation network. Additionally, this quest will introduce you to an informal threat hunting process to demonstrate the tools and techniques using Sysmon and the Elastic stack. The hope is that this informal process demonstrates not only how to apply a threat hunting mindset to search for malicious activity in your environment but also how to understand your findings so you can investigate further.
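As a small taste of what one of those hunts looks like, the hypothetical snippet below queries Sysmon process creation events (Event ID 1) in Elasticsearch for Office applications spawning shells, a classic initial compromise indicator. It assumes a recent elasticsearch Python client and Winlogbeat’s ECS field mappings; the index pattern and field names should be adjusted to your environment:

```python
# Hypothetical hunt: find Office applications spawning cmd.exe/powershell.exe
# by querying Sysmon Event ID 1 (process creation) shipped by Winlogbeat.
# Index pattern and field names assume ECS mappings; adjust to your data.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.local:9200")

query = {
    "bool": {
        "filter": [
            {"term": {"event.code": "1"}},  # Sysmon process creation
            {"terms": {"process.parent.name": ["WINWORD.EXE", "EXCEL.EXE", "OUTLOOK.EXE"]}},
            {"terms": {"process.name": ["cmd.exe", "powershell.exe"]}},
        ]
    }
}

results = es.search(index="winlogbeat-*", query=query, size=50)
for hit in results["hits"]["hits"]:
    doc = hit["_source"]
    print(doc["host"]["name"],
          doc["process"]["parent"]["name"], "->",
          doc["process"]["name"],
          doc["process"].get("command_line", ""))
```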

Continue reading

Getting started with Autopsy multi-user cluster

The purpose of this blog post is to provide multiple methods on how to install/setup an Autopsy multi-user cluster. This blog post generated infrastructure-as-code in the form of an Ansible playbook and a Docker-compose, as well as manual instructions for setting up a cluster. In addition, this blog post will demonstrate how to set up the Autopsy client to connect to the Autopsy cluster and how to ingest disk images.

Continue reading

Connecting to my homelab remotely with Hashicorp Boundary v0.2.0 and Auth0

The purpose of this blog post is to provide multiple methods on how to install/setup Hashicorp Boundary. This blog post generated an Ansible playbook, a Docker-compose for Swarm, and manual instructions for installing Boundary on Ubuntu 20.04. In addition, this blog post will demonstrate how to set up Auth0 OIDC authentication for single sign-on. Lastly, I will end this blog post by connecting to a remote machine in my homelab via SSH using the Boundary Desktop and CLI clients.

Continue reading

IR Tales: The Quest for the Holy SIEM: Splunk + Sysmon + Osquery + Zeek

This blog post is the season finale in a series to demonstrate how to install and set up common SIEM platforms. The ultimate goal of each blog post is to empower the reader to choose their own adventure by selecting the best SIEM based on their goals or requirements. Each blog post in the series will provide Docker-compose v2, Docker-compose for Swarm, Ansible, Vagrant, and manual instructions to allow the reader to set up each platform with the deployment method of their choosing. In addition to setting up Splunk, I will cover fundamental Splunk concepts such as the Common Information Model (CIM). Lastly, I will provide step-by-step instructions to install Sysmon + Splunk Universal Forwarder on Windows, Osquery + Splunk Universal Forwarder on Ubuntu, and Zeek + Filebeat to ship logs to Splunk.

Continue reading

Implementing Logstash and Filebeat with mutual TLS (mTLS)

Do you know if your Filebeat client is connecting to a rogue Logstash server? Do you know if your Logstash server is accepting random logs from random devices? If you have answered “I don’t know” to either of these questions, then this blog post is for you. The purpose of this blog post is to provide instructions on how to set up Logstash and Filebeat with mutual TLS (mTLS). The step-by-step instructions in this post will demonstrate how to create the certificate chain of trust using Vault. Lastly, I will cover the Python script I created to automate constructing this logging certificate chain of trust.
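To give you an idea of the automation before we dive in, here is a rough sketch of the approach that script takes, using Vault’s PKI secrets engine over its HTTP API; the mount path (pki), role name, and hostnames below are hypothetical placeholders rather than the exact values used in the post:

```python
# Rough sketch: request a certificate + private key from Vault's PKI secrets
# engine for a Logstash server and a Filebeat client. Mount path ("pki"),
# role name, and hostnames are hypothetical placeholders; error handling
# is trimmed for brevity.
import os
import requests

VAULT_ADDR = os.environ.get("VAULT_ADDR", "https://vault.example.local:8200")
VAULT_TOKEN = os.environ["VAULT_TOKEN"]

def issue_cert(common_name: str, role: str = "logging") -> dict:
    """Ask Vault to issue a cert signed by the logging CA."""
    resp = requests.post(
        f"{VAULT_ADDR}/v1/pki/issue/{role}",
        headers={"X-Vault-Token": VAULT_TOKEN},
        json={"common_name": common_name, "ttl": "8760h"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]  # contains certificate, private_key, issuing_ca

def write_pem(material: dict, prefix: str) -> None:
    """Write the cert, key, and issuing CA to disk for Filebeat/Logstash."""
    with open(f"{prefix}.crt", "w") as f:
        f.write(material["certificate"])
    with open(f"{prefix}.key", "w") as f:
        f.write(material["private_key"])
    with open(f"{prefix}-ca.crt", "w") as f:
        f.write(material["issuing_ca"])

if __name__ == "__main__":
    write_pem(issue_cert("logstash.example.local"), "logstash")
    write_pem(issue_cert("filebeat-client.example.local"), "filebeat")
```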

Continue reading

Gitlab CI/CD pipeline with Vault secrets

The purpose of this blog post is to provide instructions on how to set up Gitlab and Vault to use secrets during a CI/CD pipeline build. In addition, I will break down the JWT authorization process between Gitlab and Vault with an explanation of each step. The explanation, step-by-step instructions, and the infra-as-code provided in this post will create the foundation for future blogs that will contain a CI/CD component with Vault secrets.
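As a rough preview of that JWT flow (a sketch, not the exact pipeline code), a CI job can exchange the JWT that Gitlab injects into the job for a short-lived Vault token and then read a secret with it; the Vault role name and secret path below are hypothetical:

```python
# Rough sketch of the JWT exchange between a Gitlab CI job and Vault.
# Gitlab injects a signed JWT into the job (CI_JOB_JWT); Vault's JWT auth
# method validates it and returns a token scoped by the role's policies.
# The role name and KV v2 secret path are hypothetical placeholders.
import os
import requests

VAULT_ADDR = os.environ["VAULT_ADDR"]

# 1. Exchange the Gitlab-issued JWT for a Vault token
login = requests.post(
    f"{VAULT_ADDR}/v1/auth/jwt/login",
    json={"role": "gitlab-ci", "jwt": os.environ["CI_JOB_JWT"]},
    timeout=10,
)
login.raise_for_status()
vault_token = login.json()["auth"]["client_token"]

# 2. Use that token to read a KV v2 secret during the pipeline
secret = requests.get(
    f"{VAULT_ADDR}/v1/secret/data/ci/deploy",
    headers={"X-Vault-Token": vault_token},
    timeout=10,
)
secret.raise_for_status()
deploy_password = secret.json()["data"]["data"]["password"]
```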

Continue reading

DevOps Tales: Install/Setup Gitlab + Gitlab runners on Docker, Windows, Linux and macOS

Are you tired of manually pushing code to production? Are you always searching through your BASH history to find the commands you used to test your code? Do you wish merging code into production followed a defined process? Well, I have the solution for you! Introducing Gitlab CI/CD pipelines! With Gitlab you can set up Gitlab runners to create a CI/CD pipeline. A CI/CD pipeline will revolutionize your workflow to push code to production.

The purpose of this blog post is to provide instructions on how to set up the necessary components (Gitlab and Gitlab runners) to create a CI/CD pipeline. One of the deliverables from this blog post is Docker-composes for Swarm and non-swarm deployments of Gitlab. Additionally, there are manual instructions on how to set up Gitlab runners on Ubuntu 20.04, Ubuntu 20.04 with Docker, Windows 10, Windows 10 with Docker, and macOS Big Sur. A Docker Registry is also set up and integrated into the CI/CD pipeline for custom Docker images. The instructions and the infra-as-code provided in this post will create the foundation for future blogs that will contain a CI/CD component.

Continue reading

My development server for Vault

During the COVID-19 lockdown, instead of playing video games to consume my free time, I decided to be proactive. I started taking Udemy courses, one of which was on Vault, and ever since I have been incorporating Vault into my blog posts. However, each blog post requires a unique setup, and I prefer to start from a clean slate every time. The turnover of new keys and adding a new root CA to my local cert store became extremely tedious. Below is my Vault development setup where I address these issues.

Continue reading

IR Tales: The Quest for the Holy SIEM: Graylog + AuditD + Osquery

This blog post is the second in a series to demonstrate how to install and set up common SIEM platforms. The ultimate goal of each blog post is to empower the reader to choose their own adventure by selecting the best SIEM based on their goals or requirements. Each blog post in the series will provide Docker-compose v2, Docker-compose for Swarm, Ansible, Vagrant, and manual instructions to allow the reader to set up each platform with the deployment method of their choosing. This blog post will also cover how to set up Graylog with Elasticsearch and Mongo. In addition to setting up Graylog, I will provide instructions to install Osquery + Filebeat on Windows and AuditD + Auditbeat on Ubuntu to ship logs to Elastic.

Continue reading

Getting started with Hashicorp Vault v1.6.1

The purpose of this blog post is to provide multiple methods on how to install/setup Vault. This blog post generated an Ansible playbook, Docker-composes for Swarm and non-swarm, and manual instructions for installing Vault on Ubuntu 20.04. Additionally, over the past couple of months, I have been learning Vault and demonstrating different ways to incorporate it. This blog post will be a condensed version of the content in those blog posts and a jumping-off point to them as well.

Continue reading

IR Tales: The Quest for the Holy SIEM: Elastic stack + Sysmon + Osquery

This blog post is the first in a series to demonstrate how to install and set up common SIEM platforms. The ultimate goal of each blog post is to empower the reader to choose their own adventure by selecting the best SIEM based on their goals or requirements. Each blog post in the series will provide Docker-compose v2, Docker-compose for Swarm, Ansible, Vagrant, and manual instructions to allow the reader to set up each platform with the deployment method of their choosing. This blog post will cover how to set up the Elastic stack, formerly known as ELK. In addition to setting up the Elastic stack, I will provide instructions to install Sysmon + Winlogbeat on Windows and Osquery + Filebeat on Ubuntu to ship logs to Elastic.

Continue reading

Getting started with FleetDM

The purpose of this blog post is to provide multiple methods on how to install/setup FleetDM, deploy Osquery, and use the features of FleetDM + FleetCTL. This blog post generated an Ansible playbook, Docker-composes for Swarm and non-swarm, a Vagrant setup to create a VM, and manual instructions for installing FleetDM on Ubuntu 20.04. Additionally, there are Ansible playbooks for deploying the Osquery agent on Windows and Ubuntu, with manual instructions as well. Lastly, I will end by demonstrating how to use the FleetDM WebGUI and the FleetCTL tool to manage FleetDM and interact with your Osquery agents.

Continue reading

Create a custom Splunk search command with Python3

This blog post will demonstrate how to create a custom Python search command for Splunk and will demystify common roadblocks along the way, such as how to structure a custom search command in Python, how to store secrets for a custom search command, and how to install external Python libraries. For each roadblock discussed, we will also cover the solution with code examples and hands-on exercises. To do this, we must first start with an introduction to the architecture of a custom Python search command.
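To set the stage, below is a minimal, hypothetical streaming command built with the splunklib Python SDK (splunk-sdk); the command and field names are made up for illustration, and the commands.conf plumbing, secret storage, and packaging of external libraries are covered separately in the post:

```python
# hello_command.py - minimal sketch of a custom streaming search command
# built with the splunk-sdk (splunklib). The command name and field names
# are hypothetical; app packaging and commands.conf are handled separately.
import sys

from splunklib.searchcommands import (
    dispatch, StreamingCommand, Configuration, Option, validators,
)

@Configuration()
class HelloCommand(StreamingCommand):
    """Usage in SPL: ... | hello field=src_ip"""

    field = Option(require=True, validate=validators.Fieldname())

    def stream(self, records):
        # Runs over each event streamed to the command and adds a new field.
        for record in records:
            record["greeting"] = f"hello {record.get(self.field, 'unknown')}"
            yield record

if __name__ == "__main__":
    dispatch(HelloCommand, sys.argv, sys.stdin, sys.stdout, __name__)
```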

Continue reading

Integrating Vault secrets into Jupyter Notebooks for Incident Response and Threat Hunting

The industry has gravitated towards using Jupyter notebooks for automating incident response and threat hunting. However, one of the biggest barriers for any application/automation is the ability to store secrets (usernames + passwords, API keys, etc.) needed to access other services. This blog post will demonstrate how to use Vault to store secrets and how to retrieve those secrets from Vault within Jupyter Notebooks to assist in automating your security operations.
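As a taste of the pattern, a notebook cell can pull credentials from Vault at runtime instead of hard-coding them. The sketch below assumes the hvac client library and Vault’s KV v2 engine; the secret path and key names are hypothetical:

```python
# Sketch of pulling secrets from Vault inside a Jupyter notebook cell using
# the hvac client library. The KV v2 path and key names are hypothetical.
import os
import hvac

client = hvac.Client(
    url=os.environ.get("VAULT_ADDR", "https://vault.example.local:8200"),
    token=os.environ["VAULT_TOKEN"],
)
assert client.is_authenticated()

# Read an API key for a threat intel service from the KV v2 engine
secret = client.secrets.kv.v2.read_secret_version(path="soc/virustotal")
vt_api_key = secret["data"]["data"]["api_key"]

# The key can now be passed to whatever enrichment or automation the notebook
# performs without ever being written into the notebook itself.
```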

Continue reading

Demystifying the Kolide Fleet API with CURL, Python, Fleetctl, and Ansible

A common question in the #Kolide channel in the Osquery Slack is how to use the Kolide Fleet API. Kolide Fleet is written in GoLang and utilizes the Go kit framework to build the application. Therefore, almost every action that can be performed via the WebGUI is an API call. This blog post is going to demonstrate how to use the Kolide Fleet API to perform actions such as creating a live query and obtaining the results using Python websockets, obtaining the Osquery enroll secret, and creating a saved query. In addition to the API, this blog post will demonstrate how to use the Fleetctl command-line tool to perform the same actions. Lastly, this blog post includes an Ansible playbook to automate deploying Osquery agents and registering them with Kolide.
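To illustrate the pattern, a Python script can authenticate the same way the WebGUI does and then reuse the bearer token for any other call, such as creating a saved query. The endpoint paths and payload fields vary between Fleet versions, so treat the ones below as assumptions and check them against your deployment:

```python
# Illustrative sketch of talking to the Kolide Fleet REST API with Python.
# Endpoint paths and payload fields follow Kolide Fleet conventions but vary
# between versions -- treat them as assumptions and verify with your API docs.
import requests

FLEET_URL = "https://fleet.example.local:8080"

# 1. Authenticate and grab a bearer token (same call the WebGUI login makes)
login = requests.post(
    f"{FLEET_URL}/api/v1/kolide/login",
    json={"username": "admin@example.local", "password": "changeme"},
    verify=False,  # lab-only: self-signed certificate
)
login.raise_for_status()
headers = {"Authorization": f"Bearer {login.json()['token']}"}

# 2. Create a saved query using the same token
saved = requests.post(
    f"{FLEET_URL}/api/v1/kolide/queries",
    headers=headers,
    json={
        "name": "processes_listening",
        "query": "SELECT p.name, lp.port FROM listening_ports lp "
                 "JOIN processes p ON lp.pid = p.pid;",
        "description": "Processes bound to a listening port",
    },
    verify=False,
)
saved.raise_for_status()
print(saved.json())
```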

Continue reading

Creating a Windows 10 64-bit VM on Proxmox with Packer v1.6.3 and Vault

This blog post is going to demonstrate how to implement a new feature added to Packer in version 1.6.3: the ability to mount multiple ISOs on Proxmox VMs. Because Proxmox doesn’t support virtual floppy drives, you can’t supply an Autounattend.xml file to automate the installation and initial configuration of Windows. By converting the Autounattend.xml file to an ISO, we can mount both the OS installation ISO and the ISO containing the files necessary to automate the installation. Lastly, I will be using Vault to store the sensitive values required by Packer to create this VM.

Continue reading

Setup my GoLang Osquery-file-carving server with Kolide

Facebook released an awesome open-source tool named Osquery that is being maintained by a thriving community supported by the Linux Foundation and several product leaders such as Kolide, TrailOfBits, and Uptycs. However, Facebook did not release the server component of Osquery, and that has led to the creation of many projects: Kolide, Uptycs, Doorman, OSCTRL, and SGT, just to name a few. Furthermore, not all projects support the Osquery file carve functionality, most notably the open-source version of Kolide Fleet. This project set out on a mission to provide an open-source Osquery file carving server for file uploads and downloads that could be used with Kolide.

This blog post will provide a deep dive into the architecture of this project, the design decisions, and the lessons I learned as an evolving incident response engineer. This project has been a 6-month-long effort that resulted in the creation of 4 blog posts, 3 Udemy certificates/courses, and 3 separate Github repos. The collection of these experiences and research culminated in this project. My hope is that this project benefits the community and provides an additional capability to Osquery that may not be supported by all fleet managers.

Continue reading