Setting up reconftw

In the process of bug bounty hunting, we all discover new tools. I've been seeing reconftw making the rounds lately, so I decided to get started with it and see if it does anything that I am not already doing.

It is pretty cool so far! It isn't perfect, but it is giving me the idea that I was headed in the right direction with my own automation tools.

At a high level, here are the steps we will be taking to get our dedicated recon box up and running.

  1. First, spin up a new box:

    • needs 25 GB+ of storage
    • Debian Linux
    • copy over ssh key (mura.pub)
    • copy over first10min.sh and run it
    • update sudo to never time out (needed for reconftw's long port-scanning jobs involving many targets)
      • without this, you'll have to enter your password for each new target when reconftw gets to the port scanning phase
  2. Install reconftw

  3. Configure reconftw

    • amass API keys
    • theHarvester API keys
    • GitHub access token
    • notify Telegram API key

1. First, spin up a new box:

Sign up at vultr.com with my referral code to get $100 worth of VPS credits for free.

  • You'll need a debit or credit card in order to do this. Vultr used to accept virtual credit cards but stopped doing that, and it seems neither virtual nor pre-paid cards are eligible for their free credit promotion.

https://www.vultr.com/?ref=8973037-8H

Once you sign up and link a credit card, you'll have $100 worth of free credits!

Go to 'Products' and roll over the blue and white '+' button in the top right of the window. Click 'deploy new server'.

Click 'cloud compute'

Click 'regular performance' (or whatever really, you've got $100, go hog wild!)

For server location, you might want to spin up a server physically located near you. Note that some locations don't have every plan option available.

However, you don't need much more than the $5/month option for server size, which has a 25 GB SSD, 1 virtual CPU, 1 GB of RAM, and 1 TB of bandwidth every month.

For this tutorial, I am using the 'Debian 11 x64' Linux image.

I frequently use the $3.50/month boxes for much smaller things, like helping to moderate an IRC server or running scans with smaller tools, but unfortunately reconftw uses a lot of disk space, and attempting to install it on the smaller box runs out of room.

However, 25 GB seems to be fine.

Turn off 'auto backup'.

Turn off 'enable IPv6'.

Enter a server hostname and label, then click 'deploy now'!

Once your server is installed, you'll be able to SSH into the box as root and continue our initial setup.

If you click on your newly deployed server in the Vultr products dashboard, you'll be able to see both the IP address of your new box, as well as the default root password. Copy that password.

Box setup

SSH into your new box, change your root password immediately, then exit the server:

ssh root@your-new-ip-address
passwd
exit

Now, let's scp or "secure copy" our SSH key over to the server so we can set up passwordless login. If you do not have an SSH key set up, I recommend creating one, but I am not covering that in depth here as it is out-of-scope for this tutorial.

If you'd like to set this server up without SSH keys, that is fine, but you'll need to modify the script I am about to present.
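If you do need to generate a key, something like this on your local machine is the usual route (the filename mura is just what I happen to use; pick your own):

ssh-keygen -t ed25519 -f ~/.ssh/mura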

scp your SSH key over:

scp ~/.ssh/mykey.pub root@your-new-ip-address:keyname.pub

Then, log back into the server over SSH, using your new password.

ssh root@your-new-ip-address

Now, open a new text file and paste in the following script. Replace the username and keyname with your desired username and the name of your public key file.

USERNAME="dm";
KEYNAME="mura.pub";

apt-get update;
apt-get upgrade -y;
apt-get install -y tmux zsh git rsync build-essential htop fail2ban python3-pip;
useradd $USERNAME;
mkdir -p /home/$USERNAME/.ssh;
chmod 700 /home/$USERNAME/.ssh;
usermod -s /usr/bin/zsh $USERNAME;
cat $KEYNAME >> /home/$USERNAME/.ssh/authorized_keys;
rm $KEYNAME;
chmod 400 /home/$USERNAME/.ssh/authorized_keys;
chown $USERNAME:$USERNAME /home/$USERNAME -R ;
passwd $USERNAME;
usermod -aG sudo $USERNAME;
sed -i s/PermitRootLogin/#PermitRootLogin/ /etc/ssh/sshd_config;
sed -i s/PasswordAuthentication/#PasswordAuthentication/ /etc/ssh/sshd_config;
echo "PermitRootLogin no" >> /etc/ssh/sshd_config; 
echo "PasswordAuthentication no" >> /etc/ssh/sshd_config; 
echo "AllowUsers $USERNAME" >> /etc/ssh/sshd_config; 
service ssh restart;
ufw allow 22 && ufw disable && ufw enable; 
echo "Defaults:USER timestamp_timeout=-1" >> /etc/sudoers;
exec su -l $USERNAME;

I usually save this in a file literally named a, make the script executable, and run it:

chmod +x a
./a

By the end of this script, we will have taken care of a number of items, such as:

  • disabled root login over SSH
  • set up SSH-key-based passwordless login
  • whitelisted only our user for login
  • pre-installed several packages
  • disabled the sudo timeout (for reconftw port scans, which ask for authentication)

We could add the reconftw install to that script, but I am saving it for a few steps.
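If you did want to fold it in anyway, something like this just before the final exec su line would probably work (untested sketch; the install then runs as the new user):

# untested sketch: install reconftw as the new user from within the setup script
su -l $USERNAME -c "git clone https://github.com/six2dez/reconftw && cd reconftw && ./install.sh";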

At this point, I usually rsync my personal .zshrc, .vimrc, and .tmux.conf files and my .vim folder over from my local box:

rsync -avPr .zshrc .vim .vimrc .tmux.conf your-new-user@your-new-ip-address:.

At this point, on your new server, I would start tmux up:

tmux

This way, we can detach from the terminal window if we'd like to.
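As a refresher, detaching and re-attaching looks like this (the session index 0 is just an example):

# detach from the running session: press Ctrl-b, then d
# later, list sessions and re-attach
tmux ls
tmux attach -t 0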


2. Install reconftw

Once in tmux, in your home folder, it is finally time to install reconftw:

git clone https://github.com/six2dez/reconftw
cd reconftw/
./install.sh

With 25 GB of disk, the install should go through without problems.
You might be asked to enter your sudo password during the install.
The install process will take a while so, as the installer suggests, go grab a coffee :D

Maybe 30 minutes later, your install should be complete and we'll be ready to continue with configuration.


3. Configure reconftw

amass API keys

If you have an amass install configured already, this part is easy. You can simply copy the data_sources section of your amass config file into the location on your server where reconftw will look for it:

~/.config/amass/config.ini

There are a ton of possible API keys to add. I only have a handful in my config:

  • BinaryEdge
  • Censys
  • Shodan
  • Github
  • VirusTotal
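For reference, data_sources entries in config.ini look roughly like this (the keys below are placeholders; check the example config that ships with amass for the exact section names):

[data_sources.Shodan]
[data_sources.Shodan.Credentials]
apikey = your_shodan_api_key

[data_sources.Censys]
[data_sources.Censys.Credentials]
apikey = your_censys_api_id
secret = your_censys_secret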

theHarvester API keys

The setup is similar with theHarvester; you just need to place your API keys in the appropriate file and format:

~/Tools/theHarvester/api-keys.yaml
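The file is YAML; entries look roughly like this (placeholder values, and the file ships with empty slots for every supported service):

apikeys:
  github:
    key: your_github_token
  shodan:
    key: your_shodan_api_key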

GitHub access token

~/Tools/.github_tokens
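reconftw uses this for GitHub-based subdomain enumeration; as far as I can tell, the file just expects one GitHub personal access token per line:

ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx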

notify Telegram API key

This is one of the cooler parts of this whole setup IMO.

reconftw has the ability to send notifications to Slack, Discord, and Telegram.

To set up Telegram, you first have to create a Telegram bot.

Sign in to your Telegram account (create one if you haven't yet), then message the "BotFather" to create a bot.

Once your bot is created, you'll need to acquire an API key and a chat_id to put into the config file:

~/.config/notify/provider-config.yaml

When you create the bot, BotFather will provide the API_TOKEN needed in the next step.

Message your bot /start to get the bot started.

Then you get your chat_id:

curl https://api.telegram.org/bot<API_TOKEN>/getUpdates
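The response is JSON, and your chat_id is the message.chat.id value (the numbers below are made up):

{
  "ok": true,
  "result": [
    {
      "update_id": 123456789,
      "message": {
        "message_id": 1,
        "from": { "id": 1111111111, "first_name": "darkmage" },
        "chat": { "id": 1111111111, "type": "private" },
        "date": 1652900000,
        "text": "/start"
      }
    }
  ]
}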

Copy the values for the API_TOKEN and chat_id into the provider-config.yaml file, and comment out the sections for Slack and Discord.

Your config should look like this:

telegram:
  - id: "tel"
    telegram_api_key: "nnnnnnnnnn:mmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmmm"
    telegram_chat_id: "xxxxxxxxxx"
    telegram_format: "{{data}}"

It is pretty exciting to be able to set up a big scan on a target, literally walk away from the computer, and keep getting status updates about the scan on my phone.


Of course, now that everything is set up, you are free to start scanning targets!

./reconftw.sh -d target.com -r 
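The -d/-r combination runs recon against a single domain. If I recall the flags correctly, reconftw can also take a file of targets or run its full 'all' mode, roughly like this (double-check ./reconftw.sh -h for the current options):

./reconftw.sh -l targets.txt -r
./reconftw.sh -d target.com -a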

Update: 05-19-2022

I hadn't posted this yet, and I forgot when I wrote it (mentally, it feels like lightyears ago). Either way, while cleaning up my Sublime Text tabs, I had some more thoughts on how reconftw does things and how they can be leveraged in your own custom bug hunting.

I'm currently hunting an SSRF, and taking a look at how reconftw uses interactsh is interesting:

# start interactsh-client in the background, logging its output (including the callback hostname)
interactsh-client &>.tmp/ssrf_callback.txt &
# pull the generated callback host out of the client's output
COLLAB_SERVER_FIX=$(cat .tmp/ssrf_callback.txt | tail -n1 | cut -c 16-)
COLLAB_SERVER_URL="http://$COLLAB_SERVER_FIX"
#...
# swap every query parameter value in the candidate URLs for the callback host
cat gf/ssrf.txt | qsreplace ${COLLAB_SERVER_FIX} | anew -q .tmp/tmp_ssrf.txt
# request each candidate URL and log which ones were actually hit
ffuf -v -H "${HEADER}" -t $FFUF_THREADS -w .tmp/tmp_ssrf.txt -u FUZZ 2>/dev/null | grep "URL" | sed 's/| URL | //' | anew -q vulns/ssrf_requested_url.txt
#...
# give callbacks a moment to land, then record any interactions and send a notification
sleep 5;
[ -s ".tmp/ssrf_callback.txt" ] && cat .tmp/ssrf_callback.txt | tail -n+11 | anew -q vulns/ssrf_callback.txt && NUMOFLINES=$(cat .tmp/ssrf_callback.txt | tail -n+12 | sed '/^$/d' | wc -l)
[ "$INTERACT" = true ] && notification "SSRF: ${NUMOFLINES} callbacks received" info

I like how interactsh-client is backgrounded, the URL is extracted into an environment variable, and any callbacks get written to a file. That's all cool af and worth doing on your own.

However, a couple notes:

  • qsreplace, a popular query parameter replacement tool in the bug bounty community, replaces ALL parameter values with your replacement value. That is not as useful as my own alternative to qsreplace, qs, which can be installed with pip3 install queryswap (https://pypi.org/project/queryswap/).

qs works similarly to qsreplace: it accepts URLs on stdin and prints the results to stdout. Instead of replacing ALL values at once, qs emits one URL per parameter, replacing that single value and preserving the other parameters' original values.

So, if you have 1 URL with 3 parameters, you'll get 3 URLs returned.
If you have 1 URL with no parameters, you'll get 0 URLs returned.
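For example, something like this (hypothetical URL, using the python3 -m qs invocation shown later in this post) should give output along these lines:

echo 'https://example.com/page?id=1&user=bob&ref=home' | python3 -m qs CALLBACK

https://example.com/page?id=CALLBACK&user=bob&ref=home
https://example.com/page?id=1&user=CALLBACK&ref=home
https://example.com/page?id=1&user=bob&ref=CALLBACK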

Many URLs will not respond unless a particular parameter has some plausible value, so those values are worth preserving rather than blindly overwriting during testing; otherwise you'll overlook potentially useful endpoints and parameter/value combinations.

As of this writing, I am digging into an SSRF I discovered using qs on a private program with big payouts, so the tool is worth a look in your own hunts.

  • Only 5 seconds is given for the SSRF callback check. In practice, I have read stories of arbitrarily long delays between a request firing and the callback pinging, and my own experience is that there is sometimes a significant lag between when a request first fires and when the callback gets hit.

I've been whittling a stack of 800,000+ URLs down to 2,500, and it has not been easy. The endpoints don't always fire, the callbacks aren't always pingable, and any number of other factors can be implicated in the difficulties of narrowing down exactly which of these requests is triggering the callback.

One technique I'm using is to split a file in two. Let's say we have a file called found0.txt with 5000 lines:

split -l 2500 found0.txt

You'll get xaa and xab. Now, let's say you've already run found0.txt through python3 -m qs d3v.mycallback.com and you have that callback host listening in either a Burp Collaborator or interactsh session (I need to research these more, as other options might be necessary).

Well, fire up another callback (get another Collaborator URL or run another interactsh session) and copy that URL. Let's say it is d4v.mycallback.com.

sed -i s/d3v/d4v/ xab

Now you've got two files, xaa and xab, each set up with parameters pointing at its own unique callback. Start firing off requests by cating each file into httpx or curl or whatever (in my experience you may have to loop this process for a while, and it doesn't always pan out). If at least one callback triggers between the two files, the host prefix on the hit, d3v or d4v, tells you which file the triggering request came from.
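A rough sketch of the firing step (httpx here is projectdiscovery's httpx; you could substitute curl in a loop):

# fire every URL in each half; we only care about which callback host gets hit
cat xaa | httpx -silent > /dev/null
cat xab | httpx -silent > /dev/null
# then watch the interactsh / Collaborator session: a d3v hit means xaa, a d4v hit means xab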

Over the last 3 days, I've narrowed a potential SSRF/RCE from 800,000+ URLs down to 2,500 using this method. Patience is a virtue. I am reminded of bug bounty reports involving weeks of trial and error, so I should feel lucky that I am able to write this up as I get closer to my goal.

I wish you all luck and godspeed!


If you enjoyed this walkthrough / tutorial on how I have set up reconftw and you found it useful, please consider supporting me.
