I maintain my own mail server using Postfix & Dovecot on an Amazon EC2 Linux instance. Sure, I could use Amazon's SES service, but not only am I cheap, I am also a geek. Self-hosting is a commitment, though: next to the obvious security challenge, one gets to become familiar with things like DMARC and SPF, as well as protocols like IMAP.
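For readers unfamiliar with SPF and DMARC: in practice they are just DNS TXT records that tell receiving servers who may send mail for a domain and what to do with mail that fails the check. The domain and IP below are placeholders, not my actual records:

```
; SPF: only the MX hosts and the listed IP may send mail for example.org
example.org.        IN TXT "v=spf1 mx ip4:203.0.113.10 -all"

; DMARC: reject mail failing SPF/DKIM alignment, mail aggregate reports to postmaster
_dmarc.example.org. IN TXT "v=DMARC1; p=reject; rua=mailto:postmaster@example.org"
```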
It started a few days ago: my server became slower and slower. It took me a while to suspect something was going on, as traffic to some of my sites had been high recently and I am using a nano instance. So my first instinct was to scale up the instance. Due diligence led me to first check some logs, though.
What I saw was astonishing. From various IP addresses associated with Irish and Russian server farms, my server was being targeted with brute-force attempts, both probing which email addresses exist (by checking for rejected recipient addresses) and trying SASL logins. As soon as the quota of unsuccessful attempts was reached, the next attempt came from a different IP.
Although my fort held, the massive traffic brought my server to its knees. Service interruptions and slow delivery were the least of my problems. At the end of the day, it was only a question of time: brute force always works eventually, it's just a question of how long it takes. And needless to say, with AWS I pay for computing.
First counter: block IPs for a longer period after the second unsuccessful login attempt and limit simultaneous connections from the same client
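One way to sketch this counter is fail2ban for the bans plus Postfix's built-in anvil limits for the connection caps. The thresholds and log path below are illustrative values, not necessarily sensible for every setup:

```
# /etc/fail2ban/jail.local -- ban a client for 24h after 2 failed SASL logins
[postfix-sasl]
enabled  = true
maxretry = 2
findtime = 600
bantime  = 86400
logpath  = /var/log/maillog

# /etc/postfix/main.cf -- cap per-client concurrency and connection rate (anvil)
smtpd_client_connection_count_limit = 5
smtpd_client_connection_rate_limit  = 30
```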
It seemed like a logical first step, but you probably already guessed it: the remote IPs just changed more frequently. I suppose whatever script they use is adaptive.
Have you ever wondered why your bounces take so long to be reported back to you? I always wondered why I receive a "failed to deliver" notification hours after I tried to reach someone. But in this context, it suddenly made sense. Your script wants to know whether the email "firstname.lastname@example.org" exists? How about I just accept that email for now. I was aware that legitimate senders must eventually be informed, but first I had to see whether this actually works.
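In Postfix terms, the accept-everything trick can be sketched like this (the quarantine address is a placeholder, and bounce handling for legitimate senders still has to happen downstream):

```
# /etc/postfix/main.cf -- sketch: stop revealing which recipients exist.
# An empty local_recipient_maps disables the "User unknown" reject at SMTP time;
# luser_relay then funnels mail for unknown local users into a single mailbox.
local_recipient_maps =
luser_relay = quarantine@example.org
```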
Well, the behavior didn't adapt, but the incoming traffic didn't stop raining down on my server either. As a matter of fact, the attackers now assumed those addresses existed, so login attempts on these (non-existent) accounts started to ramp up. Uff. Traffic got even worse.
Hold on! I am thinking defensively. Attack is the best defense, isn't it? So what's the endgame here? The attack is likely aimed at using my mail server to send out spam, and the mails will likely include links to malicious pages, sent to email addresses bought on the dark net. With many options to report sites to blacklists and to notify owners of leaked addresses, wouldn't it be interesting to know what those emails contain and simply report every hyperlink in them directly, without these emails ever reaching the victims?
My plan was simple: create the accounts they are trying to log in with on the fly (or almost on the fly, I am fighting with some delays here and there) and assign these accounts passwords from a list of the most common ones. That way, the attackers will produce a successful login fast. However, instead of letting these accounts send out emails, I limit their quota to 0 and put their mails in a folder for further processing. For this to work, the SMTP response must be manipulated appropriately to keep up the impression that the mail has been sent out.
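The core of the idea can be sketched in a few lines of Python. This is a simplified illustration, not the actual modifications, and the Dovecot passwd-file format it emits is just one way to provision such virtual users:

```python
import secrets

# A handful of perennial favorites from leaked-password lists.
COMMON_PASSWORDS = ["123456", "password", "qwerty", "letmein"]

def honeypot_entry(username: str, domain: str = "example.org") -> str:
    """Build a Dovecot passwd-file line for a honeypot account.

    {PLAIN} is used only for readability; a real setup would store a hash.
    Note that in Dovecot a quota of 0 means "unlimited", so this sketch
    uses a tiny 1 kB quota instead of a literal zero.
    """
    password = secrets.choice(COMMON_PASSWORDS)
    # passwd-file format: user:password:uid:gid:gecos:home:shell:extra_fields
    return f"{username}@{domain}:{{PLAIN}}{password}::::::userdb_quota_rule=*:storage=1"

print(honeypot_entry("firstname.lastname"))
```

Blocking outbound mail for these accounts is a separate concern: the MTA must refuse to relay for them regardless of a successful login.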
NOTE: While I would love to go into more detail here, I cannot share some modifications until I am certain such activity isn't traceable and doesn't pose any other kind of risk or negative effect.
Not even a day later, they successfully logged in. To my surprise, no emails had been sent out yet. I don't know their business model, but maybe at this point the credentials are sold and another group or individual does the actual mailing. That buys me some time: I still need to process the emails and automatically report the links and recipients.
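The processing step I have in mind can be sketched like this; the quarantine path is a placeholder, and the actual reporting hook is left out:

```python
import re
from email import policy
from email.parser import BytesParser
from pathlib import Path

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_links(raw: bytes) -> set[str]:
    """Return all http(s) URLs found in the text parts of one raw email."""
    msg = BytesParser(policy=policy.default).parsebytes(raw)
    links: set[str] = set()
    for part in msg.walk():
        if part.get_content_type() in ("text/plain", "text/html"):
            links.update(URL_RE.findall(part.get_content()))
    return links

def scan_quarantine(folder: str = "/var/mail/quarantine") -> set[str]:
    """Collect every hyperlink from all quarantined messages for reporting."""
    links: set[str] = set()
    for f in Path(folder).glob("*"):
        links.update(extract_links(f.read_bytes()))
    return links
```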
So here is the long-term plan: if this works out, I want to lift such methods to an open-source level and enable webmasters to join forces. What I don't know at this point is how expensive this is going to get for me. I have applied common limits, but who knows how much data will be sent to my server. I'll let you know...