The verbs of the web are GET and POST. In theory there’s also PUT, DELETE, and PATCH, but in practice POST often does those jobs.
I’m always surprised when front-end developers don’t think about these verbs (or request methods, to use the technical term). Knowing when to use GET and when to use POST is crucial to having a solid foundation for whatever you’re building on the web.
Luckily it’s not hard to know when to use each one. If the user is requesting something, use GET. If the user is changing something, use POST.
That’s why links are GET requests by default. A link “gets” a resource and delivers it to the user.
<a href="/items/id">
Most forms use the POST method because they’re changing something—creating, editing, deleting, updating.
<form method="post" action="/items/id/edit">
But not all forms should use POST. A search form should use GET.
<form method="get" action="/search">
<input type="search" name="term">
When a user performs a search, they’re still requesting a resource (a page of search results). It’s just that they need to provide some specific details for the GET request. Those details get translated into a query string appended to the URL specified in the action attribute.
/search?term=value
I sometimes see the GET method used incorrectly:
- “Log out” links that should be forms with a “log out” button—you can always style it to look like a link if you want (see the sketch after this list).
- “Unsubscribe” links in emails that immediately trigger the action of unsubscribing instead of going to a form where the POST method does the unsubscribing. I realise that this turns unsubscribing into a two-step process, which is a bit annoying from a usability point of view, but a destructive action should never be baked into a GET request.
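Here’s a rough sketch of what that looks like—the /logout URL and the button label are just placeholders:
<form method="post" action="/logout">
<button type="submit">Log out</button>
</form>
The same pattern works for unsubscribing: the link in the email takes the user to a page containing a form like this, and it’s the form’s POST request that actually does the unsubscribing.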
When it was first created, the World Wide Web was stateless by design. If you requested one web page, and then subsequently requested another web page, the server had no way of knowing that the same user was making both requests. After serving up a page in response to a GET request, the server promptly forgot all about it.
That’s how web browsing should still work. In fact, it’s one of the Web Platform Design Principles: It should be safe to visit a web page:
The Web is named for its hyperlinked structure. In order for the web to remain vibrant, users need to be able to expect that merely visiting any given link won’t have implications for the security of their computer, or for any essential aspects of their privacy.
The expectation of safe stateless browsing has been eroded over time. Every time you click on a search result in Google, or you tap on a recommended video in YouTube, or—heaven help us—you actually click on an advertisement, you just know that you’re adding to a dossier of your online profile. That’s not how the web is supposed to work.
Don’t get me wrong: building a profile of someone based on their actions isn’t inherently wrong. If a user taps on “like” or “favourite” or “bookmark”, they are actively telling the server to perform an update (and so those actions should be POST requests). But do you see the difference in where the power lies? With POST actions—fave, rate, save—the user is in charge. With GET requests, no one is supposed to be in charge—it’s meant to be a neutral transaction. Alas, the reality of today’s web is that many GET requests give more power to the dossier-building servers at the expense of the user’s agency.
The very first of the Web Platform Design Principles is Put user needs first:
If a trade-off needs to be made, always put user needs above all.
The current abuse of GET requests is damage that the web needs to route around.
Browsers are helping to a certain extent. Most browsers have the concept of private browsing, allowing you some level of statelessness, or at least time-limited statefulness. But it’s kind of messed up that private browsing is the exception, while surveillance is the default. It should be the other way around.
Firefox and Safari are taking steps to reduce tracking and fingerprinting. Rejecting third-party cookies by default is a good move. I’d love it if third-party JavaScript were also rejected by default:
In retrospect, it seems unbelievable that third-party JavaScript is even possible. I mean, putting arbitrary code—that can then inject even more arbitrary code—onto your website? That seems like a security nightmare!
I imagine if JavaScript were being specced today, it would almost certainly be restricted to the same origin by default.
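You don’t have to wait for the specs to change, mind. As a minimal sketch, any site can already opt into same-origin-only scripts with a Content Security Policy, delivered as an HTTP header or as a meta element (a real policy would likely need more directives than this):
<meta http-equiv="Content-Security-Policy" content="script-src 'self'">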
Chrome has different priorities, which is understandable given that it comes from a company with a business model that is currently tied to tracking and surveillance (though it needn’t remain that way). With anti-trust proceedings rumbling in the background, there’s talk of breaking up Google to avoid monopolistic abuses of power. I honestly think it would be the best thing that could happen to Chrome if it were an independent browser that could fully focus on user needs without having to consider the surveillance needs of an advertising broker.
But we needn’t wait for the browsers to make the web a safer place for users.
Developers write the code that updates those dossiers. Developers add those oh-so-harmless-looking third-party scripts to page templates.
What if we refused?
Front-end developers in particular should be the last line of defence for users. The entire field of front-end development is supposed to be predicated on the prioritisation of user needs.
And if the moral argument isn’t enough, perhaps the technical argument can get through. Tracking users based on their GET requests violates the very bedrock of the web’s architecture. Stop doing that.