RC3 Fall 2016 CTF Infrastructure

 


 

In this blog post I will walk you through how I set up my club’s CTF infrastructure on AWS. I take great pride as the RC3 CTF infrastructure captain (with a bit of an inflated ego 🙂 ) that the infrastructure as a whole never had any downtime! Additionally, our CTF attracted 1,000 users over the course of a weekend, which was a great stress test for the infrastructure.

This post covers the following AWS services: EC2, S3, VPC, Route 53, RDS, and IAM. Our infrastructure utilized software and services such as CentOS, Ubuntu, HAProxy, Let’s Encrypt, CTFd, Bro, and Nginx/uwsgi. Please keep in mind that this is a sysadmin guide, not a security guide. Some of the security measures implemented in the infrastructure have been left out to thwart individuals from taking advantage of this build in the future. Without further ado, here we go on the wild ride of creating a CTF cloud computing infrastructure in Amazon Web Services (AWS) :).

DISCLAIMER

It has been over six months since I did this project. This guide should NOT be used verbatim because I have not included our entire setup. Additionally, as we moved closer to the event I stopped documenting things and just got things running. For the most part this guide provides a very GOOD overview of how to set up a CTF infrastructure in AWS.

Infrastructure Layout

[Infrastructure layout diagram]

 

AWS Services

  • Elastic Compute Cloud (EC2) – EC2 is a web service that provides resizable compute capacity in the cloud.
  • Route 53 – Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service.
  • Simple Storage Service (S3) – S3 is storage for the Internet. It is designed to make web-scale computing easier for developers.
  • Virtual Private Cloud (VPC) – A VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. You have complete control over your virtual networking environment, including selection of your own IP address range, creation of subnets, and configuration of route tables and network gateways.
  • Relational Database Service (RDS) – RDS makes it easy to set up, operate, and scale a relational database in the cloud.
  • Identity and Access Management (IAM) – IAM enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources.

Infrastructure Build

AWS Services Setup

Identity and Access Management(IAM)

Access to the AWS management console for your infrastructure is one of the most important things to secure! Our infrastructure had multiple e-board members working on it and they all needed access. IAM gives you the ability to create multiple users and groups and to grant granular permissions to each. As the AWS sysadmin you should know how to set proper permissions and know who has permission to perform certain actions. As a warning, DO NOT work from the AWS root account; there is NO NEED! Also enable two-factor authentication on the root and admin accounts.

  1. Login into the AWS management console
  2. Select “IAM” under “Security & Identity”
  3. Select “Customize” under “IAM users sign-in link:”
    1. This will allow you to create a login portal for just your AWS management console.
    2. This customized link is the link you give all your IAM users.
    3. The root account will access the link provided in step number 1.
  4. Select “Groups” then select “Create a new group”
    1. Admin group
      1. Enter a name for the group
      2. Select “Administrator Access”, then select “Next Step”
      3. Select “Create group”
    2. Billing group
      1. Enter a name for the group
      2. Select “Billing”, then select “Next Step”
      3. Select “Create group”
  5. Return to the IAM dashboard
  6. Select “Users” then select “Add user”
    1. Admin users
      1. Enter a username for user name
      2. Select “AWS Management Console” for Access Type
      3. Select “Autogenerated password” for console password
      4. Select “User must create a new password at next sign-in”
      5. Select “Add users to group”
      6. Select the admin group
      7. Select “Create user”
    2. Billing Group
      1. Enter a username for user name
      2. Select “AWS Management Console” for Access Type
      3. Select “Autogenerated password” for console password
      4. Select “User must create a new password at next sign-in”
      5. Select “Add users to group”
      6. Select billing group
      7. Select “Create user”
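
If you prefer to script this instead of clicking through the console, the same groups and users can be created with the AWS CLI. This is only a rough sketch; the group names, username, and password below are placeholders, not our actual accounts:

    # Admin group with the AWS managed AdministratorAccess policy
    aws iam create-group --group-name Admins
    aws iam attach-group-policy --group-name Admins \
        --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

    # Billing group with the AWS managed Billing job-function policy
    aws iam create-group --group-name Billing
    aws iam attach-group-policy --group-name Billing \
        --policy-arn arn:aws:iam::aws:policy/job-function/Billing

    # Console user with a temporary password that must be rotated at first login
    aws iam create-user --user-name <username>
    aws iam create-login-profile --user-name <username> \
        --password '<temporary password>' --password-reset-required
    aws iam add-user-to-group --user-name <username> --group-name Admins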

Namecheap and Route 53

My club, RC3, currently owns the domain “rc3.club” on Namecheap’s fantastic DNS service. However, we needed to control the subdomain “ctf.rc3.club” from AWS’s Route 53 service. I found out that we could create a subdomain on Namecheap and point all NS records for that subdomain at Route 53’s nameservers! This made managing DNS for “ctf.rc3.club” a thousand times easier, and the time to propagate new records was minimal. Once we had Namecheap pointing at Route 53 for our subdomain we created a hosted zone. Once the zone was created we could create A records, CNAME records, and any additional DNS records needed. With DNS set up we could host a website for our event on ctf.rc3.club.

  1. Login into your Namecheap account or your nameserver provider
    1. The following instructions are for Namecheap but should be fairly similar with other providers
  2. Select “Manage” for the domain you wish to add a subdomain to
  3. Select “Advanced DNS” under your domain
  4. Select “Add new record”
    1. Select “NS Record” for type
    2. Enter “ctf” for host
      1. This is where you enter your subdomain; I chose ctf.
    3. Enter “ns-1220.awsdns-24.org” into nameserver
    4. Repeat steps above with the following nameservers
      1. ns-467.awsdns-58.com.
      2. ns-1644.awsdns-13.co.uk.
      3. ns-544.awsdns-04.net.
  5. Select “Save all changes”
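
Once the changes are saved you can verify the delegation with dig from any machine; the answers will only be correct after Namecheap has published the NS records and you have created the hosted zone in Route 53:

    # Ask one of the Route 53 nameservers for the delegated zone directly
    dig +short NS ctf.rc3.club @ns-1220.awsdns-24.org
    # Confirm the delegation is visible through normal resolution
    dig +short NS ctf.rc3.club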

Setup an e-mail catcher

Namecheap has an awesome e-mail feature called “catch-all”. We currently have a Gmail account for our club but I wanted a separate e-mail for the CTF event. The “catch-all” feature allows you to redirect certain e-mail aliases to certain accounts, or use a wildcard to catch all e-mails to a particular e-mail account. I decided to be lazy, set up a wildcard catch-all rule for our domain, and provide CTF participants with [email protected] for the event.

  1. Select “Domain” under your domain
  2. Scroll down to the “Redirect Mail” section
  3. Select “Add catch-all”
    1. You can setup a forwarder for a specific e-mail alias but we want all e-mails to go to one account
    2. Enter a valid e-mail address into the “forward to”
  4. Select the check mark to save the settings.

Simple Storage Service and static page

By this point in the article you know we didn’t have an infrastructure set up, so we didn’t have anywhere to host our website. However, we wanted to post on CTFtime about our event so people could sign up, but we didn’t have a landing page for the event. AWS S3 has a great feature that allows you to host static websites. We went forward with creating a static webpage for our event which included all the information about the event and a sign-up link for a Mailchimp subscriber list.

Mailchimp

  1. Create an account on Mailchimp
    1. Why? Because it’s free and easy if your infrastructure isn’t up yet!
    2. Log in, select “Lists”, and then select “Create List”.
    3. Enter pertinent information for your e-mailing list.
    4. Select “Create a signup form” and then select “General form” for form type.
    5. Add all details you would like to collect from the user for this form.

Setup S3 Bucket for static page

  1. Login into your AWS management console and select “S3” under “Storage and Content Delivery”.
  2. Select “Create Bucket”; the name of the S3 bucket should be the name of the subdomain (ctf.rc3.club)
  3. Upload all files pertaining to the static webpage.
  4. Check all the files and select “Make Public” under “Actions”.
  5. In the top right select “properties” and expand the “Enable website hosting” section.
  6. Enter “index.html” for index document
  7. Enter “error.html” for error document.
  8. The endpoint shown in the “Enable website hosting” section is the web link to the static webpage.
    1. But that link is gross and we want to point “ctf.rc3.club” at our static webpage, which we can do with the power of Route 53.
  9. Go to Route 53 and select the hosted zone for your domain, mine is “ctf.rc3.club”.
  10. Select “Create Record Set” and set the record type to “A”.
  11. Select “yes” for “alias”, then click inside alias target and a drop down will appear and select “ctf.rc3.club” under S3 website endpoints.
  12. Select “Save Record Set” at the bottom to make your domain point at the static webpage.

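The same bucket setup can also be scripted with the AWS CLI. This is a hedged sketch assuming the static site lives in a local ./site directory; the bucket name matches the subdomain because the Route 53 alias requires it:

    # Create the bucket named after the subdomain
    aws s3 mb s3://ctf.rc3.club
    # Upload the static files and make them world readable
    aws s3 sync ./site s3://ctf.rc3.club --acl public-read
    # Enable static website hosting with the index and error documents
    aws s3 website s3://ctf.rc3.club --index-document index.html --error-document error.html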

Virtual Private Cloud Setup

This is where things become fun and spicy, woot woot! Before continuing with this section, please familiarize yourself with the infrastructure layout above so that it makes sense as you build. This infrastructure has multiple subnets in the VPC: public, infra, pwn, web, and misc. So, for example, if a web challenge needed an EC2 instance, we would put that EC2 instance into the web subnet within the VPC. The reason for this is to contain each category of challenges in its own environment, so that if someone gained access to a box they couldn’t attack all boxes within the VPC, only the boxes within that subnet. Additionally, this allows us to set up Bro to monitor traffic and look for anomalies.

VPC Setup

  1. Select “VPC” under Networking
  2. Select “Elastic IPs” on the left
    1. Select “Allocate new address”
    2. Select “Allocate”
  3. Select “Start VPC Wizard”
    1. Select “VPC with Public and Private subnets”
    2. Enter “192.168.0.0/16” into IP CIDR Block
    3. Enter “rc3CTFvpc” for VPC name
    4. Enter “192.168.0.0/24” for the public subnet
    5. Select “No preference” for availability zone
      1. Unless you need to 🙂
    6. Enter “rc3PUBLICsubnet” for subnet name
    7. Enter “192.168.10.0/24” for private infrastructure subnet
    8. Select “No preference” for availability zone
    9. Enter “rc3INFRAsubnet” for private subnet name
    10. Click in Elastic IP Allocation ID and select the newly created Elastic IP
    11. Select “Create VPC”
  4.  Select “Subnets” on the left
    1. Repeat these steps for the pwn and misc subnets.
    2. Select “Create subnet”
    3. Enter “rc3WEBsubnet” for name
    4. Select “(192.168.0.0/16)rc3CTFvpc”
    5. Select “no preference” for availability zone
    6. Enter “192.168.20.0/24” for CIDR block
    7. Select “Yes” to create
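
If you would rather script the extra subnets, the equivalent AWS CLI calls look roughly like this; the VPC and subnet IDs are placeholders for whatever the wizard assigned:

    # Create the web subnet inside the CTF VPC (repeat for pwn and misc with their own CIDRs)
    aws ec2 create-subnet --vpc-id <vpc-id of rc3CTFvpc> --cidr-block 192.168.20.0/24
    # Tag it so it shows a friendly name in the console
    aws ec2 create-tags --resources <subnet-id from the previous command> --tags Key=Name,Value=rc3WEBsubnet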

Create Internet Gateway

  1. Select “Internet Gateway” on the left inside the VPC section
  2. Select “Create an internet gateway” at the top
    1. Enter a name for the name tag, then select “Create”
    2. Select the newly created internet gateway and select “Attach to VPC” at the top
    3. Select “rc3CTFvpc” for VPC
    4. Select “Attach”

Create NAT Gateway

  1. Select “NAT gateway” on the left inside the VPC section
  2. Select “Create a NAT gateway” at the top
    1. Select “rc3INFRAsubnet” for subnet
    2. Select “Create new EIP”
    3. Select “Create a NAT Gateway”
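
For reference, the internet gateway and NAT gateway steps above map to the following CLI calls; all IDs are placeholders:

    # Internet gateway for the public subnet
    aws ec2 create-internet-gateway
    aws ec2 attach-internet-gateway --internet-gateway-id <igw-id> --vpc-id <vpc-id of rc3CTFvpc>

    # Elastic IP plus NAT gateway for outbound traffic from the private subnets
    aws ec2 allocate-address --domain vpc
    aws ec2 create-nat-gateway --subnet-id <subnet-id> --allocation-id <eipalloc-id>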

CTF Infrastructure Setup

HAproxy

HAProxy stands for High Availability Proxy and is a proxy that is aware of layer 4 and above of the OSI networking model. This means HAProxy can handle TCP/HTTP load balancing, act as a gatekeeper to our infrastructure, perform SSL termination, and provide port forwarding for ingress and egress traffic for our VPC.

Assign Public IP to HAProxy

  1. Create an AWS EC2 instance connected to “rc3PUBLICsubnet” within “rc3CTFvpc”
  2. Within the EC2 menu select “Elastic IPs” on the left under “Network and Security”
  3. Select “Allocate New Address” at the top
  4. Enter “rc3CTFhaproxy1” for instance and select “associate”

Setup Route 53 DNS

If you have a static webpage set up, you may wish to skip this part until CTFd is set up.

  1. Login into AWS Management console
  2. Select “Route 53” under Networking
  3. Select “ctf.rc3.club” hosted zone
  4. Select “Create Record Set”
    1. Leave the name blank so the record is created for ctf.rc3.club itself
    2. Select “A -IPv4” for Type
    3. Select “No” for Alias
    4. Enter “<IP address of HAproxy1>” for value
    5. Select “Save record set”
  5. Select “Create Record Set”
    1. Enter “www” into name
    2. Select “CNAME” for Type
    3. Select “No” for Alias
    4. Enter “ctf.rc3.club” for value
    5. Select “Save record set”
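
These records can also be pushed non-interactively, which comes in handy later if you ever need to fail over to a standby HAProxy quickly. A sketch of the equivalent Route 53 API call; the hosted zone ID and IP are placeholders:

    # Upsert the apex A record for ctf.rc3.club pointing at HAProxy1
    cat > apex-record.json << 'EOF'
    {
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "ctf.rc3.club",
          "Type": "A",
          "TTL": 300,
          "ResourceRecords": [{ "Value": "<IP address of HAproxy1>" }]
        }
      }]
    }
    EOF
    aws route53 change-resource-record-sets --hosted-zone-id <hosted zone id> --change-batch file://apex-record.json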

Install/Setup Let’s Encrypt and domain certs

  1. Login into HAproxy1
  2. yum update -y
  3. yum upgrade -y 
  4. yum install vim net-tools -y
  5. yum install epel-release
  6. yum install certbot
  7. certbot certonly -d ctf.rc3.club -d www.ctf.rc3.club


  1. mkdir /etc/haproxy/certs
  2. export DOMAIN='ctf.rc3.club'
  3. sudo -E bash -c 'cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem /etc/letsencrypt/live/$DOMAIN/privkey.pem > /etc/haproxy/certs/$DOMAIN.pem'
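
Let’s Encrypt certificates expire every 90 days, so it is worth automating the re-concatenation. The helper script below is a sketch that was not part of our original setup; the path is made up:

    #!/bin/bash
    # /usr/local/bin/refresh-haproxy-cert.sh -- hypothetical renewal helper
    DOMAIN='ctf.rc3.club'
    cat /etc/letsencrypt/live/$DOMAIN/fullchain.pem \
        /etc/letsencrypt/live/$DOMAIN/privkey.pem \
        > /etc/haproxy/certs/$DOMAIN.pem
    systemctl reload haproxy

    # root crontab entry: attempt renewal nightly and rebuild the combined pem on success
    0 3 * * * certbot renew --quiet && /usr/local/bin/refresh-haproxy-cert.sh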

Install/Setup HAProxy

  1. sudo -i
  2. setenforce 0
  3. yum install -y haproxy
  4. cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
  5. cat > /etc/haproxy/haproxy.cfg << 'EOF'
    global
        log 127.0.0.1 local2
        chroot /var/lib/haproxy
        pidfile /var/run/haproxy.pid
        maxconn 4000
        user haproxy
        group haproxy
        daemon
        stats socket /var/lib/haproxy/stats

    defaults
        log global
        option dontlognull
        timeout connect 10s
        timeout client 1m
        timeout server 1m
        maxconn 3000

    # --- CTFd proxy ---
    frontend www-https
        mode http
        option httplog
        bind 0.0.0.0:80
        bind 0.0.0.0:443 ssl crt /etc/haproxy/certs/
        redirect scheme https code 301 if !{ ssl_fc }
        acl letsencrypt-request path_beg -i /.well-known/acme-challenge/
        use_backend ctfd-scoringengine if letsencrypt-request
        default_backend ctfd-scoringengine

    backend ctfd-scoringengine
        mode http
        balance roundrobin
        option httplog
        option httpchk HEAD / HTTP/1.0
        server <name> <CTFd box IP:port> check fall 3 rise 2
        server <name> <CTFd box IP:port> check fall 3 rise 2

    # --- Web 100 SSH ---
    listen web100_ssh
        mode tcp
        option tcplog
        bind 0.0.0.0:2222
        server <name> <web 100 box IP:port>
        timeout client 1h
        timeout server 1h

    # --- Web 100 ---
    listen web100
        mode http
        option httplog
        bind 0.0.0.0:3000
        server <name> <web 100 box IP:port>
        timeout client 1h
        timeout server 1h
EOF
  6. systemctl enable haproxy
  7. systemctl start haproxy
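
Before and after any config change it is worth letting HAProxy lint the file and confirming the listeners are up:

    # Validate the syntax without touching the running service
    haproxy -c -f /etc/haproxy/haproxy.cfg
    # Confirm the service started and is listening on 80, 443, 2222, and 3000
    systemctl status haproxy
    ss -tlnp | grep haproxy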

Relational Database Setup (MariaDB)

  1. Select “RDS” under database
  2. Select “Get started now”
  3. Select “MariaDB” then “Select”
  4. Select “MariaDB” under “Production”
    1. Select “general-public-license” for license model
    2. Select “10.0.24” for DB Engine Version
    3. Select “db.t2.small” for DB instance class
    4. Select “Yes” for Multi-AZ deployment
    5. Select “General purpose SSD” for storage type
    6. Enter “40” for Allocated Storage
    7. Enter “rc3CTFmariadb” for Instance identifier
    8. Enter “superadminDB” for username
    9. Enter a password
    10. Select “rc3CTFvpc” for VPC
    11. Select “Create a new DB subnet group” for subnet group
    12. Select “No” for publicly accessible
    13. Select “Create a new security group” for VPC security groups
    14. Enter “CTFd” for database name
    15. Enter “3306” for database port
    16. Select “default.mariadb10.0” for DB parameter Group
    17. Select “default:mariadb-10-0” for option group
    18. Select “7” days for backup retention period (this is how long backups are kept)
    19. Select “No preference” for backup window
    20. Select “Yes” for enable enhanced monitoring
    21. Select “default” for monitoring role
    22. Select “60” for granularity
    23. Select “No” for auto minor version upgrades
    24. Select “Launch DB instance”
  1. Go to “RDS” under databases
  2. Select “Instances” on the left
  3. Expand the “rc3ctfmariadb” MariaDB instance
  4. Select security group for the DB instance
    1. Select “Inbound” tab and select “edit”
    2. Select all the current rules and delete them
    3. Select “Add rule”
    4. Set “MySQL/Aurora” for type, “TCP” for protocol, “3306” for port, and enter “192.168.10.0/24” for Source
    5. 192.168.10.0/24 is the rc3INFRAsubnet
    6. Save
  1. REPEAT THE SAME FOR OUTBOUND
    1. Login into the MySQL instance
    2. CREATE USER '<ctfd user>'@'192.168.10.%' IDENTIFIED BY '<PASSWORD>';
      1. 192.168.10.0/24 is the rc3INFRAsubnet
    3. GRANT INSERT,SELECT,UPDATE,DELETE ON CTFd.* TO '<ctfd user>'@'192.168.10.%';
    4. FLUSH PRIVILEGES;
    5. exit;
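
From a box inside rc3INFRAsubnet you can confirm the security group rules and the new grants work before wiring CTFd up to the database; the endpoint and credentials are placeholders:

    # Should connect and list an empty CTFd database if the grants are correct
    mysql -h <RDS endpoint of the MariaDB instance> -u <ctfd user> -p CTFd -e 'SHOW TABLES;'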

Install/Setup CTFd + Nginx + uwsgi + Let’s Encrypt

  1. Select “Create new instance”
  2. Enter “CentOS 7” into AWS marketplace for AMI
  3. Select “CentOS 7 (x86_64) – with Updates HVM”
  4. Select “t2.medium” for instance type
  5. Select “(192.168.0.0/16)rc3CTFvpc” for network
  6. Select “(192.168.10.0/24)rc3INFRAsubnet” for subnet
  7. Check “Enable termination protection”
  8. Enter “40” for GiB
  9. Select “General purpose SSD” for volume type
  10. Enter “rc3CTFDbox1” for the Name tag value
  11. Add SSH inbound rule via 22 from your IP
  12. Add HTTP inbound via HTTP from 0.0.0.0/0
  13. Launch Instance

Install/Setup CTFd + uwsgi

    1. yum update && yum upgrade -y
    2. yum install epel-release -y
    3. yum install nginx vim libffi-devel mariadb-devel -y
    4. yum install python-pip python-devel gcc git -y
    5. yum install mysql -y
    6. pip install MySQL-python
    7. cd /opt
    8. git clone https://github.com/isislab/CTFd.git
    9. mv RC3_CTFD/ CTFd
    10. cd CTFd/
    11. pip install -r requirements.txt
    12. pip install uwsgi
    13. chown nginx:nginx -R /opt/CTFd
    14. cat > /opt/CTFd/ctfd.ini << 'EOF'

[uwsgi]
# Where you've put CTFD
chdir = /opt/CTFd
# If SCRIPT_ROOT is not /
#mount = /ctf=wsgi.py
# SCRIPT_ROOT is /
mount = /=wsgi.py

# You shouldn't need to change anything past here
plugin = python
module = wsgi

master = true
processes = 1
threads = 1

vacuum = true
manage-script-name = true
wsgi-file = wsgi.py
callable = app
die-on-term = true

# If you're not on debian/ubuntu, replace with uid/gid of web user
uid = nginx
gid = nginx
EOF

      1. vim /opt/CTFd/CTFd/config.py
        1. Set SQLALCHEMY_DATABASE_URI = "mysql://<username>:<password>@<hostname>/<database name>"
        2. Save and exit

Setup SystemD process

      1. cat > /etc/systemd/system/ctfd.service << EOF

[Unit]
Description=uWSGI instance to serve myproject
After=network.target

[Service]
User=nginx
Group=nginx
WorkingDirectory=/opt/CTFd

#Only use IF you setup virtualenv
#Environment="PATH=/home/user/myproject/myprojectenv/bin"
ExecStart=/bin/uwsgi -s /opt/CTFd/uwsgi.sock -w 'CTFd:create_app()'

[Install]
WantedBy=multi-user.target
EOF

        1. systemctl enable ctfd.service
        2. systemctl start ctfd.service
        3. systemctl status ctfd.service

Install/Setup Nginx

        1. vim /etc/nginx/nginx.conf
            1. Set the contents to the following:


          user nginx;
          worker_processes auto;
          error_log /var/log/nginx/error.log;
          pid /run/nginx.pid;

          # Load dynamic modules. See /usr/share/nginx/README.dynamic.
          include /usr/share/nginx/modules/*.conf;

          events {
              worker_connections 1024;
          }

          http {
              log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                              '$status $body_bytes_sent "$http_referer" '
                              '"$http_user_agent" "$http_x_forwarded_for"';

              access_log /var/log/nginx/access.log main;

              sendfile on;
              tcp_nopush on;
              tcp_nodelay on;
              keepalive_timeout 65;
              types_hash_max_size 2048;

              include /etc/nginx/mime.types;
              default_type application/octet-stream;

              # Load modular configuration files from the /etc/nginx/conf.d directory.
              # See http://nginx.org/en/docs/ngx_core_module.html#include for more information.
              include /etc/nginx/conf.d/*.conf;
          }

        2. Save and exit
        3. cat > /etc/nginx/conf.d/ctfd.conf << 'EOF'

server {
    listen 80;

    location / {
        include uwsgi_params;
        uwsgi_pass unix:/opt/CTFd/uwsgi.sock;
    }
}
EOF
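
Before moving on, let nginx validate both nginx.conf and the new conf.d/ctfd.conf:

    # Syntax-check the main config and everything pulled in from conf.d/
    nginx -t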

Install/Setup SELinux + Nginx(with uwsgi)

        1. yum install policycoreutils -y
        2. systemctl enable nginx
        3. systemctl start nginx
        4. curl localhost
          1. It will fail but that is needed for the next step
        5. grep nginx /var/log/audit/audit.log | audit2allow -M nginx
        6. semodule -i nginx.pp
        7. curl localhost
          1. Should return the login page for CTFd

Setup NTP

        1. yum install ntp
        2. sudo timedatectl set-timezone America/New_York
        3. date
          1. To confirm

Setup CTFd

        1. Browse to CTFd engine
        2. Enter “<name of CTF>” for CTF Name
        3. Enter “<admin username>” for admin username
        4. Enter “<e-mail>” for admin e-mail
        5. Enter “<password>” for ctf password
        6. Select “Click here to setup your CTFd”
        7. Select “Admin” menu in the top right then “config”
          1. Check “Only users with verified emails”
          2. Select “update”
        1. Select “Admin” menu in the top right  then “config”
          1. Go to the time section
          2. Enter start time
            1. Friday November 18th 9pm
          3. Enter end time
            1. Sunday November 20th 11:59pm
        2.  Select “Update”

Setup Bro collector on HAProxy

Setup EBS Volume collection bin

        1. Select “EC2” under “Compute”
        2. Select “Volumes” on the left
        3. Select “Create volume”
          1. Select “Magnetic” for volume type
          2. Enter “100” for Size Gib
          3. Select “us-east-1e” for availability zone
            1. Select availability zone of haproxy1
        1. Select “Create”
        1.  Select newly created volume
        2. Go to Actions and then select “Attach volume”
          1. Enter “haproxy” for instance and hit attach

Format and mount drive on HAproxy

        1. Login into HAproxy1 via ssh
        2. fdisk -l to list all drives


        1. mkfs.ext4 <drive location>
          1. Current one is /dev/xvdf
        2. mkdir /media/dataDisk
        3. mkdir /media/dataDisk/BroSpool
        4. mount /dev/xvdf /media/dataDisk
        5. echo "/dev/xvdf /media/dataDisk ext4 defaults 0 2" >> /etc/fstab
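
A quick way to confirm the filesystem and the fstab entry before relying on them:

    # Remount everything listed in /etc/fstab; errors here mean a bad fstab line
    mount -a
    # Verify the 100 GB volume is mounted where Bro will write
    df -h /media/dataDisk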

Install/Setup Bro

        1. yum install cmake make gcc gcc-c++ flex bison libpcap-devel openssl-devel python-devel swig zlib-devel -y
        2. cd /opt
        3. git clone --recursive git://git.bro.org/bro
        4. cd bro/
        5. ./configure
        6. make
        7. make install
        8. export PATH=/usr/local/bro/bin:$PATH
        9. broctl install
        10. broctl start
        11. systemctl enable crond
        12. systemctl start crond
        13. echo "0-59/5 * * * * root /usr/local/bro/bin/broctl cron" >> /etc/crontab
        14. mkdir /media/dataDisk/BroLogs
        15. sed -i 's#LogDir = /usr/local/bro/logs#LogDir = /media/dataDisk/BroLogs#g' /usr/local/bro/etc/broctl.cfg
        16. sed -i 's#SpoolDir = /usr/local/bro/spool#SpoolDir = /media/dataDisk/BroSpool#g' /usr/local/bro/etc/broctl.cfg
        17. echo "0.0.0.0/0" >> /usr/local/bro/etc/networks.cfg
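
After pointing the spool and log directories at the EBS volume, restart Bro and make sure it is actually writing there; a quick check:

    # Restart Bro after the broctl.cfg changes and confirm the node is running
    broctl restart
    broctl status
    # Live capture goes to the spool; rotated logs land in BroLogs
    ls /media/dataDisk/BroSpool /media/dataDisk/BroLogs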

Post mortem and lessons learned

Explaining AWS load balancers are only for HTTP/S

For those who are unaware, HTTP is a stateless protocol, meaning it’s a simple question-and-answer system. The webserver treats each request as an independent question and doesn’t keep track of information, status, or a session ID for the connection. The AWS load balancers work off of this stateless protocol, so they will not support stateful TCP protocols like SSH. For instance, an SSH server keeps track of information, status, and a session ID for each connection. PLEASE DO NOT try to use an AWS load balancer in place of HAProxy to act as the frontend for your VPC.

VPCs are only layer 3 aware

I learned this one the hard way, so hopefully I can clear up any misconceptions. From the documentation about AWS VPCs it seems as if they are layer 4 aware. For those unaware, layer 4 of the OSI model is the part of the stack that keeps track of protocol type, ports, and sessions. Therefore you are not able to use a VPC as a port forwarding mechanism. HAProxy is aware of layer 4 and above, so it is your best bet for a port forwarding mechanism.

Plan for the worst

No infrastructure is perfect, so plan for the worst! Always have redundant systems ready to go just in case things go down. For example, we had a second HAProxy running idle just in case HAProxy1 went down. If it went down I would point the DNS record at HAProxy2, then analyze HAProxy1 and hopefully bring it back up.

Take SNAPSHOTS OF everything you do!!!! Take snapshots as often as you take a sip of coffee; something will break, and it’s often easier to revert to a working point and make changes than to diagnose an issue.

Always put challenges behind nginx and wsgi

PLEASE PLEASE PLEASE PLEASE for the love of all things good put web challenges behind Nginx!!!! For our competition we had several Flask applications as challenges, and Flask is not designed to handle hundreds of users and dirbuster attacks all at once. At the very least have the Flask app listen on 127.0.0.1 port 5000 and set up Nginx as a reverse proxy to the local Flask app. For the first hour of our competition we had e-board members literally restarting the Flask apps because they kept going down. To initially stop the bleeding I put each Flask app behind Nginx configured as a reverse proxy, as sketched below. Then I went around and put connection limits and requests-per-second limits on all the Nginx servers to thwart dirbuster scans. Finally, I went around to each Flask app and set up uwsgi+Nginx to handle the load efficiently. As a final note, I personally prefer Nginx over Apache but the same principle applies here.
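
As a rough illustration of that stop-gap, assuming a Flask challenge listening on 127.0.0.1:5000, an Nginx reverse proxy with connection and request limits looks something like this (the file name and numbers are examples, not what we tuned on game day):

    # /etc/nginx/conf.d/web100.conf -- hypothetical challenge vhost
    limit_req_zone $binary_remote_addr zone=web100_req:10m rate=10r/s;
    limit_conn_zone $binary_remote_addr zone=web100_conn:10m;

    server {
        listen 80;

        location / {
            # Throttle dirbuster-style scans per client IP
            limit_req zone=web100_req burst=20;
            limit_conn web100_conn 10;
            proxy_pass http://127.0.0.1:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }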

Thoughts for next year

        • Use a Docker swarm
          • Docker containers are cheaper than EC2 instances
          • Docker swarm provides load balancing
          • Docker swarm also provides scaling of containers, meaning if one container dies there are others to handle the load.
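
For a sense of what that could look like, a hedged sketch of running a challenge as a replicated swarm service (the image name is a placeholder):

    # Initialize a swarm on the manager node
    docker swarm init
    # Run three replicas of a web challenge behind swarm's built-in routing mesh
    docker service create --name web100 --replicas 3 --publish 3000:5000 <challenge image>
    # Swarm reschedules replicas automatically if a container or node dies
    docker service ls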

