Introduction
In this series of articles I want to describe private communication solutions between a client and a server. What does private mean? It means that I want to access or manipulate my own private data that is stored on a server, and only I should be able to perform such actions. Other people should be rejected by the server if they try to access my personal data. Why would somebody want my data, and why should I care if it gets into somebody else's hands? Imagine how you would feel if someone accessed your online bank account and stole your money, wrote posts on your social media account, or used the services of a website you paid for. The bottom line is that we need ways to protect our data.
The server needs to know that a request for specific data is made by the owner of that data (or by someone entitled by the owner to make requests on their behalf, but that is another story which I'll cover later in the series). This process is called authentication. There is another, related process called authorization, which determines what a user is allowed to do after they are authenticated.
I want to explore the ways of creating an authenticated connection by describing a step-by-step evolution of solutions. I'll start with a basic scenario, then discuss its issues and introduce improvements.
The Problem
Scenario: A client makes an HTTP request to a server, without sending any private information.
Workflow:
- A client initiates an HTTP request to a server
- The server knows nothing about the identity of the client
- The server responds with public data
Issues:
- Obviously, the server doesn’t know who made the request and won’t be able to send back any private data.
- Of course, we need to find a way to inform the server who we are.
Improvement:
- Let’s say I’ll add my user ID to the request with the intention to tell the server who I am.
Sending the User ID in the request
Positives:
- The server has a hint at the client’s possible identity.
Issues:
- The network traffic can be intercepted and the user ID can be exposed (I’ll cover how the network can be intercepted later in the series)
- IDs can be guessed or leaked (usually IDs follow a sequential or predictable pattern)
- The server doesn’t really know that the request came from the legitimate owner of that ID.
Improvements:
- Use secure communication over the HTTPS protocol (I'll cover this topic later in the series). This way, no one will "see" what I send to the server.
- Create a secret password associated with the account that is known only by me.
Note: Throughout the entire article I will assume that all communication uses HTTPS.
Create a secret password associated with the account
This mechanism implies that the user defines their own password which will be associated with their account on its creation. The user ID is generated automatically by the server, but the password is something that is created by the user and known by the user only.
Note: The passwords have to be stored on the server and it must be so in a secure way (hashed and salted), never in plain text! More detailed information on this topic could be found at https://auth0.com/blog/adding-salt-to-hashing-a-better-way-to-store-passwords/ or https://cheatsheetseries.owasp.org/cheatsheets/Password_Storage_Cheat_Sheet.html.
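As a quick illustration of that note, here is a minimal server-side sketch, assuming Node.js with the bcrypt package (any other well-vetted algorithm, such as Argon2, works just as well):

```typescript
// A minimal sketch of hashed and salted password storage using the bcrypt
// npm package (the salt is generated and embedded into the hash automatically).
import * as bcrypt from 'bcrypt';

const SALT_ROUNDS = 12;

// On account creation: store only the resulting hash, never the plain-text password.
export async function hashPassword(plainTextPassword: string): Promise<string> {
  return bcrypt.hash(plainTextPassword, SALT_ROUNDS);
}

// On login: compare the submitted password against the stored hash.
export async function verifyPassword(plainTextPassword: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(plainTextPassword, storedHash);
}
```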
Now I can send my user ID and my password in the request to the server, it will recognize that it's me, and so it can respond with my private data or perform the requested action.
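As an illustration, such a request from the browser could look like this minimal sketch (the endpoint and field names are hypothetical):

```typescript
// A minimal sketch of sending the credentials to a hypothetical login endpoint.
async function login(userId: string, password: string): Promise<unknown> {
  const response = await fetch('https://www.example.com/api/login', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // The credentials travel in the request body, protected by HTTPS.
    body: JSON.stringify({ userId, password }),
  });
  if (!response.ok) {
    throw new Error('Authentication failed');
  }
  // The server recognized us, so it can return private data.
  return response.json();
}
```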
Positives:
- The server authenticates me.
Issues:
- The passwords could be weak and vulnerable to brute-force attacks.
- They can be stolen through phishing or data breaches.
Improvements:
- Force the passwords to have a minimum length and be as random as possible.
- Add a second layer of authentication (2FA or MFA) - I will discuss this topic later in the series.
Now we have 2 pieces of information to send to the server so it can acknowledge our identity, which is a significant step in making the communication private. But here is a catch: should we send the user ID and the password over and over for each subsequent request?
Dealing with subsequent requests
Even if we use HTTPS, which encrypts the information sent over the network, repeatedly sending the password with every request exposes us to some real dangers:
- An attacker could make the exact same request with different password combinations until they guess the correct one. Of course, this process requires a lot of time, but as long as the password doesn't change, the chances of guessing it keep increasing.
- Even with HTTPS, some potential interception points do exist, like malware on the user’s device, or someone inside a local network monitoring communications.
What is the solution then? To create temporary identifiers that are very hard to guess, have an expiration time, and are associated with the user’s identity. These keys are generated by the server and sent to the client after a successful authentication, and the client will send them back to the server for subsequent requests, so the server will know the identity of the client. This is just a high-level description of the concept.
Evolution of temporary identifiers
Session identifiers
In the early days, the industry developed session IDs. The core idea was to generate a random identifier and associate it with the user information. These two pieces of information (the random identifier and the user data) were stored on the server side and the identifier was sent back to the client via cookies, which was a mechanism already in place in the browsers. Subsequent requests to the server automatically included this cookie (this is how the browser works by default), so the usage was straightforward. Limitations became apparent in distributed systems and API-driven architecture due to the complexity of managing and synchronizing the session IDs across different servers.
API keys
To address some of these limitations, API keys emerged: long, random strings that granted access to specific APIs. They helped with scalability, but they identified an application rather than a specific user, they didn't have a standardized structure, and they lacked flexibility and fine-grained authorization.
Web Tokens
The need for identifying the user based on these keys led to the development of self-contained tokens. Early implementations lacked a common standard, but then the JSON Web Token (JWT) specifications were adopted, which provided a standardized way to structure, sign, and verify tokens. This approach was characterized by statelessness and proved to be more consistent and interoperable for authentication, allowing APIs to validate user identity and permissions without relying on server-side session data.
To avoid exposing a token for a long period of time, a new type of token was introduced, the refresh token, which is used to generate new access tokens that have a relatively short lifespan.
Session IDs
Definition
A session ID is a random, unique string of characters that the server generates after it receives and successfully validates the user's credentials.
Workflow
- The client sends the credentials to the server.
- The server validates them and if they are valid, a session ID is generated.
- The server has a session management system that stores the ID (in memory, in a file, in a database, etc…) along with the user’s details.
- The server responds to the client with that ID.
- The client stores the ID.
- The client will send that ID with future requests to the server.
- The server validates the received session ID, extracts the associated user information and processes the request.
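To make this workflow more concrete, here is a minimal sketch, assuming a Node.js server with Express and cookie-parser; the in-memory Map and the credential check are purely illustrative:

```typescript
// A minimal sketch of the session workflow described above.
import express from 'express';
import cookieParser from 'cookie-parser';
import { randomBytes } from 'crypto';

const app = express();
app.use(express.json());
app.use(cookieParser());

// Session store: session ID -> associated user details.
const sessions = new Map<string, { userId: string }>();

// Placeholder credential check, just for this sketch.
function credentialsAreValid(userId: string, password: string): boolean {
  return Boolean(userId && password);
}

app.post('/api/login', (req, res) => {
  const { userId, password } = req.body;
  if (!credentialsAreValid(userId, password)) {
    res.status(401).send('Invalid credentials');
    return;
  }
  // Generate a long, random, hard-to-guess session ID.
  const sessionId = randomBytes(32).toString('hex');
  sessions.set(sessionId, { userId });
  // Send the ID back to the client as a cookie (its attributes are discussed below).
  res.cookie('sessionid', sessionId, { httpOnly: true, secure: true });
  res.send('Logged in');
});

app.get('/api/profile', (req, res) => {
  // The browser sends the cookie back automatically; look up the session.
  const session = sessions.get(req.cookies.sessionid);
  if (!session) {
    res.status(401).send('Not authenticated');
    return;
  }
  // The session ID maps back to the user, so their private data can be returned.
  res.json({ userId: session.userId });
});

app.listen(3000);
```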
Implementation
This mechanism typically relies on browser cookies to handle the transmission of the ID between the client and the server.
Let me explain how things happen under the hood on the server side. For robust and secure session cookie management, the server often sets two response headers, Set-Cookie and Strict-Transport-Security:
Set-Cookie: sessionid=12345; Secure; HttpOnly; SameSite=...
- Secure: Instructs the browser to only send the cookie over HTTPS connections
- HttpOnly: Prevents client-side JavaScript from accessing the cookie. This mitigates some cross-site scripting (XSS) attacks that could attempt to steal cookies.
- SameSite: Helps defend against cross-site request forgery (CSRF) attacks. The appropriate choice of SameSite values depends on your application's cross-origin behavior needs.
Strict-Transport-Security: max-age=31536000; includeSubDomains
This header tells the browser to always use HTTPS for the specified domain and potentially subdomains, even if the user tries to access it via a non-secure HTTP link (more details about this header can be found on MDN docs).
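As an illustration, assuming an Express backend, setting both headers could look like this sketch (the cookie name and option values are illustrative):

```typescript
import { Response } from 'express';

// A minimal sketch of setting the two response headers described above.
function setSessionCookie(res: Response, sessionId: string): void {
  res.cookie('sessionid', sessionId, {
    secure: true,       // only sent over HTTPS connections
    httpOnly: true,     // not accessible to client-side JavaScript
    sameSite: 'strict', // adjust to your application's cross-origin needs
  });
  // Tell the browser to always use HTTPS for this domain and its subdomains.
  res.setHeader('Strict-Transport-Security', 'max-age=31536000; includeSubDomains');
}
```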
That being said, after the user successfully authenticates, the browser stores the session ID received from the server in a cookie.
Note: For more information about cookies you can refer to the IETF specs page, or to the MDN Cookie docs.
Same-domain vs Cross-domain requests
Let's say that our application is hosted on a domain. When a user enters their credentials and submits the login form, a request is sent to an API endpoint. This endpoint could be located on the same domain, on a subdomain, or on a different domain, depending on the application architecture, even though the first two options are more commonly used. How are the cookies set and sent in these scenarios, knowing that cookies have some restrictions, which we'll explore right away?
Same domain
Let’s assume the following:
- our application is served from https://www.myangularapp.com
- the login endpoint is located at https://www.myangularapp.com/api/login (the same domain as previous)
- other API endpoints are accessed at https://www.myangularapp.com/api/ (again, the same domain)
For this case, when all requests are made to the same domain, the browser automatically includes any existing cookies set by the server, so no additional configuration is needed.
The server creates the session ID cookie when we make the login request and the browser stores it. Any subsequent requests to https://www.myangularapp.com will automatically include this session cookie.
This approach is very popular due to its simplicity and security: the application is deployed on a single server and there is no need for additional cookie configuration. However, for larger and more complex applications, scalability could be negatively impacted. Additionally, a major drawback of this approach is the single point of failure: if the server goes down, the whole application becomes unavailable, and a vulnerability in the API could compromise the entire application.
Different subdomains
Let's now assume this:
- our application is served from https://www.myangularapp.com
- the API endpoints (including the login) are located at https://api.myangularapp.com, so we have a different subdomain
When we make the login request, without additional configuration, the origin server creates the session ID cookie for its own domain (https://api.myangularapp.com), and the browser will send that cookie back when it makes a request to that domain, but not to https://www.myangularapp.com, which is a sibling subdomain. In order to be able to send the cookie to another subdomain, the server should set the cookie's Domain attribute to myangularapp.com. This allows the browser to send the cookie to any subdomain of myangularapp.com.
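For example, such a cookie could be set like this (the attribute values are illustrative):
Set-Cookie: sessionid=12345; Domain=myangularapp.com; Secure; HttpOnly; SameSite=Lax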
But why would we need multiple subdomains for an application? The main reasons are scalability and isolation. Each subdomain could be responsible for a specific part of the app and have its own resources. But this introduces additional work: a server from one subdomain would be unaware of the session cookies created by another server, so a session management system needs to be developed. This is itself another complex topic that mostly involves software architects, but I'll briefly list some options: a centralized session store, session replication, or separate sessions on each subdomain.
Cross-origin requests
There could be cases when requests need to be made to a different origin. Due to the CORS mechanism and the same-origin policy, browsers will not send cookies in requests to other origins by default. More technical documentation about these mechanisms can be found in the MDN CORS docs. To allow sending and receiving cookies, both the server and the client need some extra configuration.
A server on a different origin needs, at minimum, to set these response headers:
Access-Control-Allow-Origin: <our_application_domain>
Access-Control-Allow-Credentials: true
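For instance, assuming an Express backend with the cors middleware, that server-side configuration could look like this sketch (the origin value is our application's domain from the earlier examples):

```typescript
import express from 'express';
import cors from 'cors';

const app = express();

// A minimal sketch of allowing credentialed requests from our application's origin.
app.use(cors({
  origin: 'https://www.myangularapp.com', // sets Access-Control-Allow-Origin
  credentials: true,                      // sets Access-Control-Allow-Credentials: true
}));
// Note: a wildcard '*' origin cannot be combined with credentials.
```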
On the client side, when we make the request, we need to declare it as "credentialed", so the cookies can be sent to the server:
- for the native fetch function, we need to set the option {credentials: 'include'}
- if we use the native XMLHttpRequest constructor, we need to set withCredentials = true on the request instance
- in Angular apps, we have to set the { withCredentials: true } option within every HttpClient request method (see the sketch after this list)
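Here is a minimal sketch of such a credentialed request in an Angular service (the service name and endpoint are illustrative):

```typescript
// A minimal sketch of a cross-origin, credentialed request with Angular's HttpClient.
import { HttpClient } from '@angular/common/http';
import { Injectable, inject } from '@angular/core';
import { Observable } from 'rxjs';

@Injectable({ providedIn: 'root' })
export class ProfileService {
  private readonly http = inject(HttpClient);

  getProfile(): Observable<unknown> {
    // withCredentials makes the browser attach the session cookie to this request.
    return this.http.get('https://api.myangularapp.com/profile', {
      withCredentials: true,
    });
  }
}
```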
Benefits and limitations of session-based authentication
As with any other system, there is no one-size-fits-all solution, and every approach needs to be analyzed in its own context. Let's explore a summary of the main benefits and limitations of session-based authentication.
Benefits
- Simple implementation: once the session ID is received by the browser, the subsequent requests automatically send the ID to the server (with the exception of the cross-domain and cross-subdomain cases I've mentioned above)
- Reduced server load for small-scale applications
This solution is best suited for:
- Small projects with simple architecture
- Applications with primarily server-rendered pages
- Frontend and backend residing on the same domain
- Cases where sessions must be revocable immediately (in sectors like banking or critical healthcare)
Limitations
- By definition, this mechanism is stateful, and the server needs a robust session management system to handle an increasing number of active sessions.
- Non-browser clients (e.g. mobile apps) that don’t support cookies would need additional logic to handle how the ID is transmitted back to the server.
- Cookies are strictly tied to the domain that set them and are not shareable by default with other domains, unless the Domain attribute is explicitly configured.
- Working with distributed backend servers or microservices would require additional complexity to handle the session IDs.
- Potential risks like CSRF (Cross-Site Request Forgery), Session hijacking, XSS (Cross-Site Scripting) vulnerabilities leading to cookie theft (I'll cover these in another article).
Use cases / Examples in Angular
Systems that use session cookies
There are numerous web frameworks that heavily rely on session-based authentication using cookies: Laravel, CodeIgniter, Symfony, Ruby on Rails, Django, Spring MVC, ASP.NET MVC, platforms like Magento, Shopify, WordPress, or legacy applications.
Logging out
We've discussed how to log in, but at some point we also want to log out. For this we usually make a request to a logout endpoint, similar to the login one. The server needs to invalidate the session ID on its side and then clear the cookie by specifying an expiry date in the past, so the browser will automatically remove the session cookie.
Set-Cookie: sessionid=; Expires=Thu, 01 Jan 1970 00:00:00 UTC; Path=/;
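On the server side, assuming the Express setup from the earlier sketches, a logout handler could look like this (illustrative only):

```typescript
import { Request, Response } from 'express';

// The same kind of in-memory session store used at login (illustrative only).
const sessions = new Map<string, { userId: string }>();

// A minimal sketch of a logout handler, assuming Express with cookie-parser.
function logout(req: Request, res: Response): void {
  const sessionId: string | undefined = req.cookies?.sessionid;
  if (sessionId) {
    // Invalidate the session on the server side.
    sessions.delete(sessionId);
  }
  // clearCookie responds with a Set-Cookie header whose expiry date is in the past.
  res.clearCookie('sessionid');
  res.send('Logged out');
}
```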
Authentication failures
For various reasons, the server could respond with an HTTP 401 Unauthorized status code, or any other response indicating that the user is not authenticated. Such cases can happen due to an expired or invalid session ID. In our frontend Angular app we can handle these errors by redirecting the user to the login page to submit their credentials again. For this we can use an interceptor:
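A minimal sketch of such an interceptor, assuming Angular's functional HttpInterceptorFn API and a /login route, could look like this:

```typescript
// auth-error.interceptor.ts (illustrative file and route names)
import { HttpErrorResponse, HttpInterceptorFn } from '@angular/common/http';
import { inject } from '@angular/core';
import { Router } from '@angular/router';
import { catchError, throwError } from 'rxjs';

export const authErrorInterceptor: HttpInterceptorFn = (req, next) => {
  const router = inject(Router);
  return next(req).pipe(
    catchError((error: unknown) => {
      if (error instanceof HttpErrorResponse && error.status === 401) {
        // The session is missing, expired, or invalid: send the user to the login page.
        router.navigate(['/login']);
      }
      // Re-throw so other error handlers can still react to the failure.
      return throwError(() => error);
    })
  );
};
```

With the standalone APIs, the interceptor is registered through provideHttpClient(withInterceptors([authErrorInterceptor])) in the application configuration.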
See it in action
Here are some useful videos about the topic I've discussed so far: