As a penetration tester, you need to be good at taking notes. Documentation makes you more efficient during your testing, and the quality of your notes directly determines the quality of the report you hand your client (whether that's an internal stakeholder or a third party). I generally use XMind Zen (recently re-branded to take over the XMind name wholesale) during my engagements, as I've found a mind map most easily matches how my brain organizes information.
That is to say, organized chaos.
This is an example from the mind map I use to organize my HackTheBox machines. For each machine, I list my enumeration and gathered information, as well as what exploits I've tried and what is successful (and what has failed). This screenshot is a simple example, but I can't show a screenshot from one of my 'real' engagements :)
At the end of an engagement, I'd like to start writing up my notes in a more organized fashion and have a globally searchable repository of information I have researched to assist me on future engagements.
This article will look at how to install and configure Gollum, my wiki platform of choice, and includes a discussion of cost optimizations. It should take about 30 minutes to set everything up in the AWS Console.
- Selecting a Wiki platform
- Self-Hosting Instructions
So, I want to host my own wiki of pentest knowledge. I spent some time looking at what platforms and tools people recommend, and I settled on Gollum for a few reasons. It supports Markdown out-of-the-box, in kramdown style with GitHub-flavored syntax. But it goes beyond Markdown: it allows me to add UML diagrams and mark up my pages with annotations, and I can even add mathematical notation simply, if I ever need to. This is all without any configuration beyond a standard install of the project. I'm a software engineer when I'm not running penetration tests, but tinkering with lots of add-ons and follow-on installations is not what I want for this project. I want something simple, and I want to be up and running in minutes.
Here's an example of the final product - one of my wiki pages with some notes on opening a reverse shell on a target.
Everything is organized in a tree off of the home page:
And I can globally search if I need to look something up:
Keep in mind I have just set this up so I need to transfer a lot of my knowledge into the wiki.
Let's go over how I am self-hosting this and how I am restricting access to authorized users - just myself at this point.
Why restrict access? Besides documenting my research and cheat sheet tricks, I will be documenting some actual penetration tests on this wiki as well (under the Engagements heading in the Home page image). The engagements I run for my job are not my property, but I am preparing for some industry certifications and routinely go onto platforms like HackTheBox. I want to be able to document my work and refer back to it on the same platform as my research, but do not want those engagements publicly visible.
So, authorization restrictions are an important requirement for me. The following architecture and setup instructions do not require authorization - and it will be clear which pieces you can remove if you would like to emulate this setup but do not need authorization controls.
I will host a small server in AWS fronted by a load balancer so I can serve it from my personal domain with an ACM certificate.
For the authorization component, I don't want to modify the Gollum source. Instead, I will use an ALB to forward requests to AWS Cognito and have my user management there. If a user is authenticated from Cognito, they will access the site. If not, they will be presented with a nice login page:
Not needing to modify any code or configuration on the server itself to set up user authentication is a nice feature. I will walk through each step in this article.
A Cloudcraft view of my architecture:
An Application Load Balancer (ALB) is connected with Cognito to serve traffic to my t2.micro EC2 instance, which is backed by an Elastic File System (EFS) mount so the Gollum page files persist. We'll talk about pricing at the end of the article.
Let's start from the file system and move up to the internet-facing load balancer.
Each page of the Gollum wiki is a new file on the file system. I don't want to set up an EBS volume and deal with Data Lifecycle Manager and EBS snapshots. Instead, an EFS file system is a pretty cheap way to get persistent storage.
Navigate to the EFS console and create a new file system. Leave the network access settings at the defaults (mount targets on all availability zones) and move to the file system settings.
Here, we want to enable the lifecycle policy. I set it to 7 days, which is perfect for my use case. You can set the policy up to 30 days.
Bursting throughput should be sufficient for most use cases. Similarly, general purpose performance mode should be sufficient.
Finally, make sure to enable encryption at rest on the EFS file system. You can use the default AWS-managed KMS key.
Now for client access.
Disable root access and enforce in-transit encryption. Make sure to press the "set policy" button to apply these settings! You can leave the access points section alone.
Now you can move forward and create the EFS file system.
With EFS set up, we can create the EC2 instance. Head over to the EC2 instance creation wizard and choose your AMI of choice. I'm going ahead with Amazon Linux 2.
Similarly, choose your instance type. For me, that is a t2.micro.
The instance details are where we configure EFS.
Scroll down to the EFS section and add a file system. Select the EFS file system we created and note the mount point - in this case, /mnt/efs/fs1. The user data script is automatically populated with the necessary steps to mount the EFS file system at that mount point.
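For reference, the auto-generated user data is roughly equivalent to the following sketch. The file system ID is a placeholder; the console fills in your real one.

```
#!/bin/bash
# Install the EFS mount helper and mount the file system.
# fs-XXXXXXXX is a placeholder for your file system ID.
yum install -y amazon-efs-utils
mkdir -p /mnt/efs/fs1
mount -t efs -o tls fs-XXXXXXXX:/ /mnt/efs/fs1
```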
Go ahead and enforce V2 of instance metadata, if you need it.
Move ahead to the storage options. Since our wiki will be created in the EFS mount, we can leave the instance size small at 8 GB. Be sure to enable encryption - you can use the default EBS key.
For the instance's security group, enable SSH (TCP 22) to your home IP subnet. We will go back to this security group and add network input from the ALB's security group once we have created it, but that is all for now.
We can now launch this EC2 instance. Save the pem key to SSH into this server. When we SSH into it, we can access the EFS file system at /mnt/efs/fs1.
We can now go to the Load Balancer creation wizard. Select an Application Load Balancer.
Go ahead and set two listeners - one for HTTP traffic on port 80, the other for HTTPS traffic on port 443.
On the security settings page, I attached an ACM certificate for my HTTPS listener. If you do not have an ACM certificate, the "Request a new certificate from ACM" link has a pretty good wizard to walk you through it.
For the load balancer security group, set HTTP (TCP 80) and HTTPS (TCP 443) sources to Anywhere (0.0.0.0/0, ::/0).
Gollum runs on port 4567 by default and the home page will exist at /Home, so this is how we will set up the target group.
Next, register our targets by selecting the instance. Don't forget to select "Add to registered" to register the instance.
Now go ahead and create the ALB.
Go back to your Security Groups. We will update the EC2 instance's security group to forward traffic from our load balancer.
Since Gollum will be serving from port 4567 on the EC2 instance, we need to allow traffic to 4567 from the load balancer's security group.
I'm cutting off the security group name here, but AWS will give you a dropdown of other security groups so you can easily click on the right one.
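As a sketch, the same ingress rule can be added with the AWS CLI. The security group IDs below are placeholders for your instance's and ALB's groups.

```
aws ec2 authorize-security-group-ingress \
  --group-id sg-0instancegroup \
  --ip-permissions 'IpProtocol=tcp,FromPort=4567,ToPort=4567,UserIdGroupPairs=[{GroupId=sg-0albgroup}]'
```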
Now our instance will be reporting as unhealthy. Let's set up Gollum on the server.
SSH onto the EC2 instance. Gollum has several installation possibilities. The simplest is via a Ruby gem, although there are several dependency steps along the way.
Let's install RVM and then Ruby.
```
# RVM dependencies
sudo yum install curl gpg gcc gcc-c++ make patch autoconf automake bison libffi-devel libtool readline-devel sqlite-devel zlib-devel openssl-devel
gpg2 --keyserver hkp://pool.sks-keyservers.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3 7D2BAF1CF37B13E2069D6956105BD0E739499BDB
```
```
# RVM and Ruby installation
curl -sSL https://get.rvm.io | bash -s stable
source /home/ec2-user/.rvm/scripts/rvm
rvm install ruby-2.6
```
Now we install Gollum. There is an additional dependency for the Gollum gem - we need CMake 3.x. Only CMake 2.x is available on Amazon Linux 2, so we need to build CMake from source.
We do so with the following:
```
sudo yum remove cmake
wget https://cmake.org/files/v3.10/cmake-3.10.0.tar.gz
tar -xvzf cmake-3.10.0.tar.gz
cd cmake-3.10.0
# bootstrap and make will take around 10 minutes all told
./bootstrap
make
sudo make install
```
Now we install Gollum:

```
gem install gollum
```
Gollum needs a Git repository at the path where it serves the wiki, so we set that up on the EFS mount.

```
sudo chown ec2-user:ec2-user /mnt/efs/fs1
cd /mnt/efs/fs1
git init
```
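One optional step while you're here: Gollum records each page edit as a Git commit, and a fresh instance has no Git identity configured. Whether your Gollum setup strictly needs one depends on its configuration, but a repo-local identity costs nothing to set. A sketch, with a made-up name and email (the temp dir stands in for /mnt/efs/fs1):

```shell
# Using a temp dir for illustration; on the instance this is /mnt/efs/fs1.
WIKI_DIR="${WIKI_DIR:-$(mktemp -d)}"
cd "$WIKI_DIR"
git init -q .
# Repo-local identity so commits never fail for want of a global one.
git config user.name "gollum-wiki"       # placeholder
git config user.email "wiki@example.com" # placeholder
```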
Now we set up Gollum to run via systemd.

```
sudo vim /etc/systemd/system/gollum.service
```
We want this service file to look like the following. We run the service as the ec2-user, not as root.

```
[Unit]
Description="Pentest Wiki"

[Service]
User=ec2-user
Group=ec2-user
Type=simple
ExecStart=/home/ec2-user/.rvm/gems/ruby-2.6.5/wrappers/gollum /mnt/efs/fs1 --allow-uploads page --critic-markup --user-icons identicon --h1-title

[Install]
WantedBy=multi-user.target
```
We run Gollum with several of its configuration options. The only required parts of the ExecStart are the path to the gollum wrapper executable and the EFS mount path that Gollum serves from.
Now we start our service:

```
sudo systemctl daemon-reload
sudo systemctl start gollum.service
sudo systemctl status gollum.service
```
If we navigate to our wiki (https://wiki.artis3nal.com/ for me), we should see Gollum's prompt to create a new Home page.
Now we're good to go! Our wiki is created. If you create the home page, the ALB should mark the EC2 instance as healthy after a few seconds.
If you want Cognito authorization like me, there are just a few more steps.
Head back into the AWS Console to create a new User Pool. I am going to step through the settings.
You can’t change the sign-in and attribute options on this page after you’ve created your user pool. Make sure that you’ve decided on the settings that you want.
I am going to allow users to authenticate with a verified email address and additionally require a phone number for new users. The phone number is to allow MFA.
Set your password requirements on the next page. On the page after that you can set your MFA options. We can leave the message customization settings alone.
The next important page is "App clients."
Select "add an app client." The only auth flow configuration we need is the refresh token; SRP was checked by default as well, and I left it alone. We leave the triggers alone and move forward with creating the user pool.
There are two final steps to set in the User Pool.
You need to configure a domain name.
I entered my domain prefix of choice for an Amazon Cognito domain.
Now we need to configure App client settings.
The key point here is to set a correct callback URL. The domain should either be the DNS A record hostname of your load balancer or your custom domain - in my case, I set my wiki.artis3nal.com domain. Regardless, you must use https:// and set the path to /oauth2/idpresponse, the fixed endpoint ALBs use for the Cognito handshake.
Also important is checking Authorization code grant under Allowed OAuth Flows and openid as the Allowed OAuth Scope.
Finally, we hook this into our ALB. Head over to your load balancer list and move over to your load balancer's listener tab.
On the HTTP 80 listener, make sure your traffic redirects to port 443.
Now go into the HTTPS 443 listener rules. We are going to add a new rule above the default rule, so it is evaluated first.
We set the rule to evaluate on any path on our host, so the path condition is a wildcard (*).
Finally, we attach Cognito. We add an "Authenticate" action and select the Cognito User Pool objects we have just created. Under "Advanced Settings," we want to confirm the "Action on unauthenticated request" is set to "Authenticate (client reattempt)" and the scope is "openid."
Now we save the rule and try to visit our wiki again. This time, we are prompted to authenticate with Cognito.
We sign in, and then...
Great! Everything is hooked up and running correctly. We are done.
As it stands today, my EC2 instance is on-demand, which brings me to $24.85/month. Oof.
These budget charts come out of Cloudcraft, but I double-checked them against AWS's pricing documentation. I don't have a relationship with Cloudcraft, they just make a great AWS visualization tool.
Storage costs in EFS are inconsequential. The content of the wiki is a bunch of text files, with some images uploaded to embed into some of the pages - it's that tiny red sliver in the chart. I have the budget set for 1 GB of data, which is 30 cents/month on standard access. I don't expect to hit 1 GB of data on the wiki for some time; at ~100 MB, where I expect to be for a while, we are comparing 3 cents to 1 cent, so for budget planning I rounded up.
The EFS pricing documentation shows that standard access data is $0.30/GB while infrequent access data is $0.025/GB + $0.01/GB transferred. I have set up a 7-day lifecycle on EFS, so files that have not been accessed for 7 days are moved into infrequent access (IA) storage. That brings 1 GB of storage down to $0.04/month - $0.03 for the data storage and $0.01 for 100 MB of data transfer over the month. Since I expect to only occasionally access my wiki - there are usually weeks between pentest engagements - I can expect a good chunk of my files to stay in IA.
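That $0.04 figure checks out with quick arithmetic, using the IA rates quoted above and assuming roughly 1 GB sitting in IA plus about 1 GB of IA reads over the month:

```shell
# 1 GB in infrequent access storage plus ~1 GB of IA reads per month.
awk 'BEGIN {
  ia_storage = 1 * 0.025   # $/GB-month, IA storage
  ia_reads   = 1 * 0.01    # $/GB, IA data access
  printf "$%.2f/month\n", ia_storage + ia_reads
}'
# prints $0.04/month, versus $0.30/month for the same GB in standard storage
```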
This comes out to savings of 86.67% on the storage. Nice! But that isn't the number I'm concerned with.
The t2.micro server is pretty small, but there are still significant savings to be made here. It is running on-demand, costing me $8.35/month.
I am planning for this wiki to assist me for years to come. I can use reserved instances to cut down my costs significantly.
If I pay upfront for a 1-year reserved instance, I can bring my monthly EC2 costs down to $4.92. That is 41.08% savings.
But we can do better.
If I am in this for the long haul, I can pay upfront for a 3-year reserved instance, bringing my monthly EC2 bill to $3.19 - 61.80% savings.
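Double-checking those percentages against the $8.35/month on-demand baseline:

```shell
awk 'BEGIN {
  on_demand = 8.35
  printf "1-year RI: %.2f%% savings\n", (on_demand - 4.92) / on_demand * 100
  printf "3-year RI: %.2f%% savings\n", (on_demand - 3.19) / on_demand * 100
}'
# prints 41.08% and 61.80% savings
```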
But I will be using this wiki infrequently (compared to its uptime), and if it's unavailable for brief moments I can always take a break, stretch my legs, and come back to note-taking in several minutes.
Which means we can just put this on a spot instance. That has no yearly commitment and brings my EC2 costs to $2.51/month (based on the last 30 days of historical spot pricing, which is usually pretty stable). This means I am looking at savings of 69.82%. Not bad.
The largest expense by far is the ALB.
According to AWS's load balancing pricing, ALBs clock in at:
- $0.0225 per Application Load Balancer-hour (or partial hour)
- $0.008 per LCU-hour (or partial hour)
Load balancer-hour is simple to compute - a straight count of how many hours your ALB exists. In a month, that is 720 hours.
Load Balancer Capacity Units (LCU) are another matter.
They are a somewhat confusing measurement of how much traffic you expect to receive: a blend of new connections, active connections, processed bytes, and rule evaluations.
A single LCU contains:
- 25 new connections per second
- 3,000 active connections per minute
- 1 GB of processed bytes per hour for EC2 instances (other resources have different allotments)
- 1,000 rule evaluations per second
If you exceed any one of these measurements, you get dinged with additional LCUs.
A rule evaluation includes something like, oh, requiring Cognito authentication to resolve an endpoint. But it also includes the default ALB rule - forwarding the request to the EC2 instance. So you can think of every user request to the server as resolving to one or more rule evaluations. And if your homepage requests 10 other resources like CSS, JS, and images from your server? Each is a separate request through your ALB. New rule evaluations! Hooray.
I am a single user, so I budgeted out the lowest possible LCU metric I could set. At 0.0001 LCUs, I'm looking at $16.20/month. 0.01 LCUs brings me to $16.26/month - not a big jump, so rounding down in this budget estimate is safe, since I really have no idea how the LCU will be calculated for my usage of the wiki.
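Those numbers track the published rates: 720 load balancer-hours at $0.0225, plus the LCU charge at $0.008 per LCU-hour.

```shell
awk 'BEGIN {
  base = 720 * 0.0225   # load balancer-hours in a 30-day month
  printf "0.0001 LCUs: $%.2f/month\n", base + 720 * 0.008 * 0.0001
  printf "0.01 LCUs:   $%.2f/month\n", base + 720 * 0.008 * 0.01
}'
# prints $16.20/month and $16.26/month
```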
Can I do any better? Let's say I drop Cognito as a requirement, which is the reason I need an ALB as opposed to one of the other load balancers.
A network load balancer (NLB) comes out to:
- $0.0225 per Network Load Balancer-hour (or partial hour)
- $0.006 per LCU-hour (or partial hour)
Well, OK: according to Cloudcraft the cost comes out to $16.20/month, just like an ALB. No reason to switch.
The Classic Load Balancer (CLB) is more expensive than the other two:
- $0.025 per Classic Load Balancer-hour (or partial hour)
- $0.008 per GB of data processed by a Classic Load Balancer
Assuming 0.5 GB of data transferred per month (way higher than I expect it to be), I am looking at $18/month. Moving the dial to 0.001 GB/month is also $18/month. I checked Cloudcraft's numbers with the calculators on AWS's LB pricing page and they agreed - $18.25/month.
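That tracks with a quick estimate, assuming the $0.025/hour CLB rate from AWS's pricing page and 0.5 GB of monthly traffic:

```shell
awk 'BEGIN { printf "$%.2f/month\n", 720 * 0.025 + 0.5 * 0.008 }'
# prints $18.00/month
```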
So, an ALB is actually the most cost-effective choice... if I need a load balancer, that is.
What about AWS Cognito? How expensive is it to hook that up to the ALB?
According to Cognito's documentation, it is free up to 50,000 monthly active users. My lonesome will do just fine. Note that this pricing schedule is for AWS Cognito usage with a User Pool or social identity providers. There are separate pricing calculations to make if you use an OIDC, SAML, or other federated identity provider. I didn't look into those.
Also note that the Cognito "free tier" does not expire 12 months after your AWS account is created like the rest of AWS's free-tier services. You will always* have 50,000 MAUs for free. It's great how simple the free tier is! /s
*: Until they change the pricing model.
All together, I'm pleased with my wiki setup. It's hosted on my domain - https://wiki.artis3nal.com. It is locked to anyone except those users I configure in Cognito to have access. Gollum is easy to use, and I find myself enjoying documenting my knowledge on the platform.
My final monthly costs, after I expand my account spot instance limit and implement those changes, become:
All told, I'm saving 24.51% from the original bill. Not bad. But, the ALB is a thorn in my side. This is still way too expensive for my use case. The article title does say pentests and profit, after all. Little did you know, dear reader, that I meant AWS's profit, not yours.
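For reference, a 24.51% cut from the original $24.85 implies a remaining bill of roughly:

```shell
awk 'BEGIN { printf "$%.2f/month\n", 24.85 * (1 - 0.2451) }'
# prints $18.76/month
```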
I am going to think about alternative authorization controls I can implement to remove the need for Cognito and therefore the ALB - ideally, a plan that retains the wiki subdomain on my site.
I realized this design is pretty much exactly what Tailscale is for. I am already using their WireGuard VPN mesh, so I can onboard the EC2 instance and set the Tailscale 100.x reserved IP as the A record in my DNS records. That way, any of my machines on my Tailscale network can access my wiki, but it will be unresolvable by anyone else. That allows me to remove the ALB entirely, eliminating that cost and bringing my monthly bill to $2.56/month. That's an 85.36% improvement on our earlier optimizations and an 89.70% improvement from the original bill. Much more manageable. I will write a follow-up article describing how I re-architect the system to access my wiki via Tailscale. Tailscale is really easy to integrate.
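The savings math for the Tailscale variant checks out against the original bill:

```shell
awk 'BEGIN { printf "%.2f%% savings vs the original bill\n", (24.85 - 2.56) / 24.85 * 100 }'
# prints 89.70% savings vs the original bill
```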
Maybe there's a way I can use the Route53 database to manage my user authentication...
My next steps:
- Set a cron job to back up the git directory on the EFS mount to a private GitHub repo. (new article, TBA)
- Set up Tailscale on the EC2 instance and route DNS traffic to the Tailscale IP. Drop the ALB entirely. (new article, TBA)
- Convert the AWS Console steps into a Terraform module. (new article, TBA)
- Convert the Gollum provisioning steps into an Ansible role. (new article, TBA)