
Denis Sinyukov

Originally published at coderden.dev

Fixing the firewall with UFW for Amazon EC2 (when you accidentally blocked port 22)

Scenario

Suppose you have enabled UFW (Uncomplicated Firewall) on your Amazon EC2 instance and then logged out. When you try to reconnect to the instance via PuTTY or plain SSH, the connection fails because you forgot to allow SSH (port 22) in the UFW rules.
EC2 Instance list
This article will help you fix the problem and keep using your instance's volume.

Introduction

  1. To check the current status and list the UFW rules: sudo ufw status verbose
  2. To enable UFW: sudo ufw enable
  3. To allow incoming SSH connections: sudo ufw allow ssh
  4. To deny incoming SSH connections: sudo ufw deny ssh

If you run sudo ufw deny ssh (or enable UFW without first allowing SSH), you accidentally close the SSH port, and the next connection attempt times out on port 22.
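To avoid locking yourself out in the first place, allow SSH before enabling the firewall. A minimal sketch of the safe order (standard UFW commands):

sudo ufw allow ssh        # open port 22 first
sudo ufw enable           # only then turn the firewall on
sudo ufw status verbose   # confirm that 22/tcp is allowed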

Solution

We need a second running EC2 instance to fix the broken one.
Run second instance

  1. Stop the broken EC2 instance and detach its volume.

Stop EC2 instance

Note that if you do not have an Elastic IP, the public IP address will change when you stop the EC2 instance.

Detach the volume from the EC2 instance.
Detach broken volume
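If you prefer the command line over the console, stopping the instance and detaching the volume can also be done with the AWS CLI; the IDs below are hypothetical placeholders:

aws ec2 stop-instances --instance-ids i-0123456789abcdef0   # the broken instance (placeholder ID)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0     # its root volume (placeholder ID)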

  2. Attach the volume from the broken EC2 instance to the other EC2 instance.

It is important that your second machine is in the same Availability Zone as the volume.

Volume list
Attach broken volume
Afterwards you can see the list of volumes:
Attached volume list
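The attach step has a CLI equivalent as well; the volume ID, instance ID, and device name below are hypothetical placeholders:

aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0fedcba9876543210 --device /dev/sdf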

  3. Now connect to the second EC2 instance via SSH.
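For example, assuming an Ubuntu AMI and a key pair file named my-key.pem (both hypothetical names):

ssh -i my-key.pem ubuntu@<public-ip-of-second-instance>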

To display information about the disks and the partitions created on them, use the command: sudo lsblk

Mounted disks
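The attached volume usually appears as xvdf with a partition xvdf1. If it is not obvious which device is the broken volume, checking the filesystem type can help; a short sketch:

sudo lsblk                  # list block devices, sizes, and mount points
sudo file -s /dev/xvdf1     # print the filesystem type of the attached partition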

Create a folder called fixec2 (it can be any name you prefer).

cd /mnt && sudo mkdir fixec2

Mount the volume to the fixec2 folder using the following command:

sudo mount /dev/xvdf1 ./fixec2 && cd fixec2

Note: newer Linux kernels may rename your device to /dev/xvdf (which was the case for me)
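If the partition device does not exist under the expected name, a fallback like this covers both cases (a sketch, assuming the volume is attached as xvdf):

# try the partition first, then the unpartitioned device name
sudo mount /dev/xvdf1 ./fixec2 || sudo mount /dev/xvdf ./fixec2
cd fixec2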

The edited volume

  4. After a successful mount, go to fixec2/etc/ufw and edit ufw.conf.
  5. Set ENABLED=no and save the changes (a scripted alternative follows this list).
  6. Unmount the volume using the following command: sudo umount /dev/xvdf (use whichever device name you mounted).
  7. Go back to the AWS console, detach the volume and attach it back to the broken EC2 instance.

Attach volume to EC2

  8. Start the broken EC2 instance; it is no longer broken, and you will be able to use SSH on it as before.
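If you would rather script the ufw.conf edit than open an editor, something like this works (a sketch; it backs up the file first):

cd /mnt/fixec2/etc/ufw
sudo cp ufw.conf ufw.conf.bak                       # keep a backup
sudo sed -i 's/^ENABLED=yes/ENABLED=no/' ufw.conf   # disable UFW at boot
grep ENABLED ufw.conf                               # verify the change
cd /mnt && sudo umount fixec2                       # step out of the mount before unmounting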

While setting up SSH security, you may accidentally (or intentionally) block SSH on an instance. That is no reason to migrate all your files to a new instance. With a little diligence, you will be fine.
