Author: Ville Tirronen
As programmers at Typeable, our main goal is to provide value for our customers. However, I just finished spending some customer money and a full day adding CSRF protection to our login page with, hopefully, zero visible effect anywhere. It can occasionally be difficult to see the value of such security efforts, so I thought it could be useful to detail what potentially could happen without CSRF protection and why this small change actually adds a lot of value.
Firstly, what is a CSRF attack? The original idea of the World Wide Web was to have interlinked content, where one page could, for example, show images from another and link to other pages without any restrictions. Under the hood, the web browser works by loading different resources, making requests to servers. Servers decode the requests and respond by sending the appropriate content. The web is stateless, and originally servers did not need to know the purpose of any request. The server understands that the browser is asking for a picture of the Eiffel Tower, but it does not know that the browser intends to put this picture into its rendering of a Wikipedia article.
For static content, this scheme makes the web easy to build. But we don't use the web for static content so much anymore, and in the context of web applications this behaviour is no longer acceptable. The server must know why a request is made. If the browser asks the bank server to transfer money, the server must know that this is because the user pressed the appropriate button on the bank's web site, and not because the user navigated to some other, malicious page designed to emit the same request.
This is the gist of a Cross-Site Request Forgery (CSRF) attack. If the attacker can lure your user to their web site while that user is logged in on your web site, they can forge requests to your site using that user's credentials. That is, theoretically, they can activate any functionality that is available to the user. And although most browsers and servers are wise to this possibility, the out-of-the-box mitigations are not airtight, and it is the application author's responsibility to ensure that only intended requests are served.
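To make this concrete, the classic mitigation is a synchronizer token: the server issues a random token tied to the user's session, embeds it in every form it serves, and refuses state-changing requests that don't echo it back. A page on another site can still emit the request, but it cannot read the token from your page. Here is a minimal sketch in Python; the function names and the in-memory session store are illustrative, not any particular framework's API:

```python
import hmac
import secrets

# Illustrative in-memory session store; a real app would use the
# server-side session storage provided by its web framework.
sessions = {}  # session_id -> {"csrf_token": ...}

def issue_csrf_token(session_id):
    """Generate a random token and remember it in the session."""
    token = secrets.token_urlsafe(32)
    sessions.setdefault(session_id, {})["csrf_token"] = token
    return token  # embedded as a hidden field in every served form

def is_request_allowed(session_id, submitted_token):
    """A state-changing request is served only if the token matches."""
    expected = sessions.get(session_id, {}).get("csrf_token")
    if expected is None:
        return False
    # Constant-time comparison, so the check itself doesn't leak
    # the token through timing.
    return hmac.compare_digest(expected, submitted_token)

# A malicious page can forge the request, but not the token:
tok = issue_csrf_token("sarah")
assert is_request_allowed("sarah", tok)            # real form submission
assert not is_request_allowed("sarah", "guessed")  # forged cross-site request
```

The forged request fails precisely because the token lives inside your page, which the attacker's page is not allowed to read.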
From the attacker's perspective, CSRF comes with a lot of caveats. Most importantly, the attacker can't see the result of CSRF requests, due to browser-based mitigations: they must work blind. Secondly, the attacker must be able to guess what the request should look like, which often makes it hard to exploit routes that are not available to a common user. Considerations such as these may be why CSRF is not always treated as a top priority.
However, there are lots of things a clever attacker can do with CSRF vulnerabilities. Here are eight of them, for various circumstances.
Well, this may sound like a boring and abstract threat. But, still, look at any web app. Surely there is something nasty one could do with CSRF? Like sending an embarrassing message on behalf of a hapless user, or booking someone a flight to Tajikistan? Or, if the victim is an admin, perhaps the attackers can grant themselves super-user status?
Although the attacker can't see the actual response, they can usually time how long it takes. Timing attacks are very difficult to defend against, and we usually only go to that bother in critical places, like the login page.
But no one really defends against timing attacks in, say, search functionality. As an example, let's suppose that you're running a dating service and the attacker makes the victim's (say, Sarah's) browser request two routes: messages/search?query=David, where David is a name the attacker is curious about, and messages/search?query=blurghafest, which is pure nonsense. If the first one reliably takes longer than the second (and it doesn't need to be much longer), you might have blood on your hands. Or at least some remote drama and an angry customer.
This can also be a convenient way to covertly check whether someone has an account on some specific service. Just aim a CSRF at a route that is slow and only available to logged-in users: it will take longer if the user is logged in.
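The probe itself is trivial to sketch. Assuming a hypothetical route that does expensive work only for logged-in users (and instantly redirects everyone else), the attacker only needs wall-clock time, never the response body:

```python
import time

def search_endpoint(logged_in):
    """Stand-in for a route that does slow work (e.g. a database
    search) only when there is a logged-in user to search for."""
    if logged_in:
        time.sleep(0.05)  # simulated database work
        return "results"
    return "login redirect"  # anonymous users get an instant answer

def probe(logged_in, threshold=0.02):
    """Guess whether the victim is logged in from timing alone;
    the attacker never sees what the endpoint returned."""
    start = time.perf_counter()
    search_endpoint(logged_in)
    return time.perf_counter() - start > threshold
```

With the simulated delays above, `probe(True)` comes back `True` and `probe(False)` comes back `False` purely from the elapsed time.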
Sign-out pages are often not CSRF-protected, because it is important that users can log out in all situations, including situations where the user has thrown out their CSRF cookie using developer tools. Prioritising the ability to log out makes sign-out pages a good target for CSRF shenanigans.
For example, it wouldn't be at all fun if a scalper logged all the users out of your web shop the moment a new and rare gaming console became available.
Sign-in pages are often also special cases in CSRF protection schemes. Usually, the CSRF token is tracked in the user's session data, and as there is no session yet available on the sign-in page, the sign-in page can end up without protection.
In this scenario, the attacker first signs the user out and then tries to log them back in again. Now, to log the user back in, the attacker would need the user's account and password details, which the attacker hopefully does not have. What the attacker can do instead is log the user back in with the attacker's own credentials. Why would they do that?
Suppose, for example, that the website is something that needs credit card details. When the user gets unwittingly signed in to the attacker's account, there won't be any credit card details there. So, to do what they came to do, like attempting to buy the PS5, they must re-enter their credit card number. But if they enter the number now, it gets associated with the attacker's account, from which the attacker can conveniently steal it.
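One common way to close this hole is a pre-session token using the double-submit pattern: before anyone is logged in, the server sets a random cookie, and the login form must echo the same value in a hidden field. A cross-site page can neither read the cookie nor set it, so it cannot forge a matching pair. A rough sketch, with illustrative names rather than any framework's real API:

```python
import hmac
import secrets

def start_login_page():
    """Server sets a random pre-session cookie and embeds the same
    value as a hidden field in the login form it serves."""
    token = secrets.token_urlsafe(32)
    return token, token  # (cookie value, hidden form-field value)

def handle_login(cookie, form_field):
    """Accept the login POST only when cookie and form field match."""
    if not cookie:
        return False
    return hmac.compare_digest(cookie, form_field)

cookie, field = start_login_page()
assert handle_login(cookie, field)            # legitimate login form
assert not handle_login(cookie, "forged")     # cross-site page can't read the cookie
assert not handle_login(None, "forged")       # ...and can't set one either
```

This works even though there is no session yet, which is exactly what the sign-in page lacks.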
In most cases, updating user account details is done with a single request. If you can CSRF that request, you can potentially change things like the user's email address. If that happens, the attacker can simply go to the login page, initiate a password reset process and steal the user's account.
And if changing the email doesn't work, the attacker can try to change things like phone or credit card numbers, which customer support uses to identify users when they call in. Perhaps the attacker can now fool the support staff into giving them access to the user's account?
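A cheap extra defence here, independent of CSRF tokens, is to require the current password for any change to contact or recovery details; a forged request cannot supply it. A sketch under simplified assumptions (real systems store salted hashes via bcrypt or argon2, not bare SHA-256):

```python
import hashlib
import hmac

# Illustrative account record; the hash scheme is simplified.
account = {
    "email": "sarah@example.com",
    "pw_hash": hashlib.sha256(b"correct horse").hexdigest(),
}

def change_email(new_email, current_password):
    """Update the email only if the caller proves knowledge of the
    current password -- something a CSRF page cannot forge."""
    supplied = hashlib.sha256(current_password.encode()).hexdigest()
    if not hmac.compare_digest(account["pw_hash"], supplied):
        return False
    account["email"] = new_email
    return True

assert not change_email("evil@attacker.example", "guess")  # forged request fails
assert change_email("sarah@new.example", "correct horse")  # real user succeeds
```

The same re-authentication step protects password and phone-number changes, and it keeps working even if the CSRF token scheme itself has a gap.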
CSRF can turn self-XSS, which is not a big issue, into XSS, which is a huge issue.
Self-XSS is a much less worrying security issue. Here, users can potentially inject scripts into the web site, but those scripts are visible only under that user's own account, not to other users. So, at most, a user can harm themselves.
To give an example of a real-world self-XSS vulnerability, I found a few years back that my university's enrollment system had a course description page that you could write scripts into. This was thought to be inconsequential, since only the teachers could write on the page and they'd be unlikely to add harmful scripts there. With a CSRF vulnerability in the mix, however, an attacker could have made a teacher's browser submit such a script, turning the harmless self-XSS into a full XSS served to every student.
If the attacker is also a user of the web app they are attacking, then CSRF is a very good way to cover their tracks. Want to insert an XSS script in a forum post, but don't want to get caught doing it from your own account? Then CSRF some other user and use their browser and credentials to insert the XSS.
Unless the user has truly exceptional perception, they'll never know where that CSRF hit came from.
Does your service allow users to store files? Many types of files, like .docx, .pdf and .jpeg, have historically been used to conceal malware that exploits bugs in the programs reading them.
If the service is vulnerable to CSRF, the attacker can upload malicious files to users' accounts. And the users will open them on their phones and laptops.
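CSRF protection on the upload route is the real fix, but a defence-in-depth check that uploaded bytes actually match their claimed type at least stops the crudest disguises. A small sketch (the signature table covers only a few formats and is illustrative):

```python
# Leading "magic bytes" for a few common upload formats.
MAGIC = {
    "jpeg": b"\xff\xd8\xff",
    "pdf": b"%PDF-",
    "png": b"\x89PNG\r\n\x1a\n",
}

def matches_claimed_type(data, claimed):
    """Reject uploads whose bytes don't start with the signature
    of the type they claim to be."""
    sig = MAGIC.get(claimed)
    return sig is not None and data.startswith(sig)

assert matches_claimed_type(b"%PDF-1.7 ...", "pdf")
assert not matches_claimed_type(b"MZ\x90\x00", "pdf")  # a Windows executable
```

This won't catch a well-crafted malicious PDF, of course; it only removes the trivial case of an executable renamed to a harmless extension.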
Hopefully, I've convinced you that my day of implementing CSRF protection was actually a great investment for our customer. And that, if you haven't already, now is a great time to go and add CSRF protection to your site.