This is the 9th part of a series of articles under the name "Play Microservices". Links to other parts:
Part 1: Play Microservices: Bird's eye view
Part 2: Play Microservices: Authentication
Part 3: Play Microservices: Scheduler service
Part 4: Play Microservices: Email service
Part 5: Play Microservices: Report service
Part 6: Play Microservices: Api-gateway service
Part 7: Play Microservices: client-web service
Part 8: Play Microservices: Integration via docker-compose
Part 9: You are here
The source code for the project can be found here:
- OAuth 2.0
- Transport layer
- Security in microservices
Before delving into the basics and the security protocols, let's have a look at our microservice application. We will use its structure to demonstrate the concepts and use cases of the security patterns more clearly.
Here is a brief introduction to some terms that are helpful for understanding security concepts in network communication.
- OSI Model: A conceptual framework that defines how different network protocols and services interact and operate in layers.
- End user: An individual or entity that utilizes and interacts with a computer system or network.
- Client app: Software application that sends requests for services or resources from a server.
- Server app: Software application that responds to and fulfills client requests by providing services or resources.
- Authentication: Verifying the identity of a user, device, or system to ensure they are who they claim to be.
- Authorization: Granting or denying access rights and permissions to users, devices, or systems based on their authenticated identity and assigned privileges.
- JWT: JSON Web Token; a compact and self-contained data format for securely transmitting information between parties.
- Certificates: Digital documents that verify the authenticity of an entity and enable secure communication, often using public key cryptography.
- Self-signed certificate: A certificate signed by the entity it belongs to, not by a trusted third-party Certificate Authority (CA).
- Hashing: Process of converting data into a fixed-length string of characters through a mathematical algorithm, commonly used for data integrity and password storage.
- RSA: A widely used public-key cryptosystem for secure data transmission and encryption/decryption of sensitive information.
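To make the hashing term above concrete, here is a minimal Python sketch using only the standard library. (Note that for password storage specifically, a plain hash is not enough in practice; salted, slow algorithms such as bcrypt or Argon2 are used instead.)

```python
import hashlib

# Hashing is one-way: the same input always yields the same fixed-length
# digest, but the digest cannot be reversed back into the input.
digest = hashlib.sha256(b"hello").hexdigest()
print(digest)  # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824

# A tiny change in the input produces a completely different digest
# (the "avalanche effect"), which is what makes hashes useful for integrity checks.
print(hashlib.sha256(b"hello!").hexdigest() == digest)  # False
```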
TLS (Transport Layer Security) is a transport-layer security protocol (the successor of the Secure Sockets Layer (SSL) protocol), and it secures network communication in two ways:
- Encrypting the traffic against eavesdroppers on the internet
- Validating the authenticity of the counterparty
In this protocol, client applications prioritize verifying the identity of the server application. For instance, when entering an address like google.com in the web browser's address bar, it becomes crucial to ensure a secure connection with the legitimate google.com website, avoiding any potential connection to a different or malicious site. Here is a schematic flow of how a typical TLS connection is established:
In TLS (Transport Layer Security), the trust is established through a trusted certificate authority. This authority possesses a public-private key pair. The client applications have access to the public key of this certificate authority.
To secure the server's identity, the server generates its own public-private key pair. It then creates a Certificate Signing Request (CSR) that includes relevant information such as the website's domain name and the server's public key. This CSR is sent to the certificate authority for verification.
Upon receiving the CSR, the certificate authority validates the information provided, signs it using its private key, and issues a certificate for the server. This certificate contains the original information submitted by the server, along with a digital signature. This signature can be verified using the public key of the certificate authority.
These steps are essential prior to initiating a TLS handshake between the client application and the server, ensuring a trusted and secure connection. The next steps are as follows:
- The client says hello to the server.
- The server replies and sends its certificate.
- The client verifies the signature of the certificate using the CA's public key.
- If verified, the client generates a symmetric session key (e.g., an AES key), encrypts it with the server's public key, and sends it to the server. This key can only be read by the server, because it is encrypted with the server's public key and can only be decrypted with the server's private key.
- Subsequent messages between the client and the server are encrypted with that session key. Because only the client and the server hold this key, only these two can decrypt the messages.
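In application code, the handshake steps above are handled by the TLS library. A minimal Python sketch using the standard `ssl` module (the hostname and CA file name are placeholders):

```python
import ssl

# A default client context performs the two checks described above:
# it verifies the server's certificate chain against trusted CAs,
# and it checks that the certificate matches the hostname we asked for.
context = ssl.create_default_context()
print(context.verify_mode == ssl.CERT_REQUIRED)  # True: server cert must verify
print(context.check_hostname)                    # True: cert must match hostname

# To trust a specific (e.g., self-signed) CA instead of the system store:
# context.load_verify_locations("ca.crt")

# A real connection would then look like this (network access required):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with context.wrap_socket(sock, server_hostname="example.com") as tls:
#         ...  # handshake done; traffic is now encrypted with the session key
```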
Consider a scenario where both communicating parties want to authenticate each other's identities. This is where the mTLS (mutual TLS) protocol comes into play. mTLS is a transport-layer security protocol (like TLS), and it secures network communication in two ways:
- Encrypting the traffic against eavesdroppers on the internet
- Validating the authenticity of the counterparty
In this protocol, the foundation of trust lies once again in a trusted certificate authority. For successful implementation, both communicating parties are required to register their identities with this trusted certificate authority. The handshake process resembles that of TLS, with the distinction that both sides must exchange their certificates with each other.
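On the server side, the difference from plain TLS is one setting: the server must demand and verify a client certificate. A sketch with Python's standard `ssl` module (the certificate file names are placeholders):

```python
import ssl

# Server-side context. Requiring a verified client certificate is exactly
# what makes the connection "mutual" TLS rather than plain TLS.
server_ctx = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert

# With real files in place, the server would also load:
# server_ctx.load_cert_chain("server.crt", "server.key")  # its own identity
# server_ctx.load_verify_locations("ca.crt")              # CA that signed client certs
print(server_ctx.verify_mode == ssl.CERT_REQUIRED)  # True
```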
In TLS and mTLS, our source of trust is a trusted certificate authority that issues certificates for the communicating parties. In these cases, the party for which the certificate is issued is a company or organization, not an individual. Now imagine cases where our goal is to authenticate individuals or client apps without using the certificate mechanism (issuing a certificate for each individual would be meaningless). This is where protocols like OAuth 2.0 come into play. This protocol runs on the application layer and, depending on the use case, defines several flows.
Use case 1: The client app is the owner of the resource.
In this scenario, the client app (not a user or individual), which is a machine, is the owner of a resource. Identity authentication is done for the client app, not a user. The schematic OAuth 2.0 flow for this use case is as follows; see Client Credentials Flow:
- The identity of the client application is authenticated.
- The client secret and client ID should be kept confidential.
- The client secret and ID are transmitted securely using TLS or mTLS.
Please note that in this scenario, the client app is not in a public access domain, and the client secret and client ID should be kept secret (for example, a web application that runs on a server, not a mobile application that is downloaded to users' devices).
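The Client Credentials Flow boils down to one POST request to the token endpoint. A sketch using only the Python standard library; the endpoint URL and credentials are hypothetical placeholders:

```python
import urllib.parse
import urllib.request

# Hypothetical token endpoint and credentials -- substitute your auth server's.
TOKEN_URL = "https://auth.example.com/oauth/token"

body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "report-service",
    "client_secret": "s3cr3t",  # confidential: only safe in a server-side app
}).encode()

request = urllib.request.Request(TOKEN_URL, data=body, method="POST")
request.add_header("Content-Type", "application/x-www-form-urlencoded")

# urllib.request.urlopen(request) would return a JSON body containing an
# access_token. The request itself must travel over TLS (note the https URL),
# since the client secret is sent in the body.
```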
Use case 2: The user is the owner of a resource and requests the resource through a registered client app in a private domain.
In this scenario, the user is the owner of a resource and must request access through a trusted client app. In this case, the client app should not be publicly accessible, so that its client ID and client secret can be kept safe (for example, a web application that runs on a server, not a mobile application that is downloaded to users' devices). The schematic OAuth 2.0 flow for this scenario is as follows; see Authorization Code Flow.
- The identity of the client app is authenticated, and unauthorized client apps cannot connect to the resource server.
- The identity of the user is authenticated.
- The client app has no access to the user's credentials, which enables mechanisms such as third-party login (login via Google, etc.).
- All credentials are transmitted securely using TLS or mTLS.
An implementation commonly used for this scenario is third-party authentication provider login. For example, when you want to add "login via Google" to your web app, this is the flow you go through. See here for more info.
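The first step of the Authorization Code Flow is redirecting the user's browser to the authorization server. Building that redirect URL is simple; a sketch with hypothetical endpoints and identifiers:

```python
import secrets
import urllib.parse

# Hypothetical endpoints and identifiers for illustration.
AUTHORIZE_URL = "https://auth.example.com/oauth/authorize"

params = {
    "response_type": "code",             # ask for an authorization code
    "client_id": "web-client",
    "redirect_uri": "https://app.example.com/callback",
    "scope": "openid profile",
    "state": secrets.token_urlsafe(16),  # CSRF protection, echoed back by the server
}
login_url = AUTHORIZE_URL + "?" + urllib.parse.urlencode(params)

# The user's browser is sent to login_url; after login, the auth server
# redirects back to redirect_uri with ?code=...&state=..., and the client app
# exchanges that code (together with its client secret) for an access token
# in a server-to-server call.
print(login_url.startswith(AUTHORIZE_URL))  # True
```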
Use case 3: The user is the owner of the resource and requests it through a highly trusted client app.
In this scenario, the user is the owner of the resource, but it is not necessary to request access via a trusted and registered client app. In other words, all client apps are trusted and we do not authenticate them; we only authenticate the user. In our sample microservice application, we do exactly this for user authentication. The schematic diagram for this flow is shown below; see Resource Owner Password Flow.
- Only user's identity is authenticated.
- Used for highly trusted client applications.
- Not recommended.
Use case 4: The user is the owner of the resource and requests it from a client app in a public domain (like a mobile or desktop app).
In this scenario, the user makes the request from an application in a public domain (for example, a mobile or desktop app). Such an app cannot hide a client ID and client secret, so we do not authenticate the client application's identity. We cannot use the plain Authorization Code Flow here, because if the auth code is somehow stolen (by changing the redirect URL, for example), it can be exchanged for an access token. That's why the Authorization Code Flow with Proof Key for Code Exchange (PKCE) was introduced.
- Only the identity of the user is authenticated.
- To prevent hijacking of the auth code and exchanging it for an access token, a code verifier/code challenge mechanism is used. The client app generates a cryptographically random code: the code verifier, which it saves in a safe place. It then hashes the verifier using an algorithm such as SHA-256 (the "S256" method); the hashed code is called the code challenge. The client app requests the auth code by sending the code challenge. The auth server redirects to the login page and, after login, redirects the auth code to the address defined by the client app. If this auth code is revealed to an attacker by any means, it is useless without the code verifier and cannot be exchanged for an access token.
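The code verifier/code challenge pair described above can be generated with the Python standard library alone. A minimal sketch of the S256 method:

```python
import base64
import hashlib
import secrets

# Code verifier: a high-entropy random string the client keeps secret.
verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()

# Code challenge: the SHA-256 hash of the verifier, base64url-encoded without
# padding -- this is the "S256" code challenge method.
digest = hashlib.sha256(verifier.encode()).digest()
challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()

# The client sends only `challenge` when requesting the auth code, and later
# sends `verifier` when exchanging the code for a token. The server re-hashes
# the verifier and compares; a stolen auth code without the verifier is useless.
print(len(challenge))  # 43 (32 hash bytes, base64url-encoded without padding)
```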
When it comes to securing a microservice, there are four fundamental areas to consider:
- Access to microservices from outside (known as north-south traffic)
- Access between microservices (known as east-west traffic)
- Microservice container contents
- Deployment environment
The methods that we can incorporate to secure accessing our microservice application from outside can be summarized as follows:
- Enabling TLS
- Using API gateways
- Using rate limiters
- Using the OAuth 2.0 protocol to implement authentication and authorization. Depending on your priorities, the OAuth protocol can be implemented in several ways:
- Implementing security at each microservice level: in this approach we follow the zero-trust security model, which says "never trust, always verify." The auth service is responsible for issuing and signing the tokens; all other services then perform authentication and authorization using the public key of the auth service.
- Implementing authentication in the api-gateway and authorization in all services: in this approach, the api-gateway performs authentication and then attaches authorization headers to the requests. The other services trust the api-gateway and perform only authorization. This approach can be reasonably safe if we use mTLS for communication between services; in that case, the other services can be sure they are talking to the api-gateway.
- Another approach is doing both authentication and authorization in one service. Aside from the security problems this can introduce, performing authorization for all other services inside one service requires strong coupling between every service and the authorizer service, which by nature goes against the philosophy of microservice architecture.
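The token issue/verify split described in the zero-trust approach can be sketched with a JWT. The article's setup would use an asymmetric algorithm (e.g., RS256), where the auth service signs with its private key and every other service verifies with the public key; that requires a crypto library, so this standard-library sketch uses HS256 with a shared secret purely to show the shape of signing and verification:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict, key: bytes) -> str:
    # The auth service's job: produce header.payload.signature.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str, key: bytes) -> bool:
    # Every other service's job: recompute the signature and compare.
    header, body, sig = token.split(".")
    expected = b64url(hmac.new(key, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "user-1", "role": "admin"}, b"shared-secret")
print(verify(token, b"shared-secret"))  # True
print(verify(token, b"wrong-secret"))   # False: tampered or forged tokens fail
```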
The methods that we can use to secure microservice interconnections are as follows:
- Enabling TLS
- Enabling mTLS
- Implementing Kubernetes networking policies
TLS and mTLS are used for authentication of services at the edge. For example, when using mTLS, service A can be sure that it is talking to service B, and vice versa. Another layer of security we can add here is authorization. For example, when issuing a (self-signed) certificate for service A, we can also encode the access rights and role of that service. In this case, service B can check both the identity of service A and whether service A has permission to perform a given operation. Networking policies are a Kubernetes-specific tool for implementing east-west security: you can set per-pod criteria that define which other pods are allowed to communicate with the target. This way, services that have no business talking to each other can be separated at the network layer.
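As an illustration of such a networking policy, here is a hypothetical manifest (the names, labels, and port are placeholders, not taken from the project) that allows only the api-gateway pods to reach the auth service, dropping all other ingress to it:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-gateway-to-auth
  namespace: default
spec:
  # The policy applies to the auth service's pods.
  podSelector:
    matchLabels:
      app: auth-service
  policyTypes:
    - Ingress
  ingress:
    # Only traffic from pods labeled app: api-gateway is allowed in.
    - from:
        - podSelector:
            matchLabels:
              app: api-gateway
      ports:
        - protocol: TCP
          port: 9090
```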
The security of microservices extends beyond network security. What if malicious code is running inside your container, or you are using a package with a security hole? Using hardened base images, or assembling your own from scratch, helps ensure there is nothing dangerous lurking within. Automated security testing techniques like DAST, SAST, and IAST can also be used to detect possible flaws in your code.
Hardening the environment that hosts your deployment is the final element of microservices security. Basic cloud security hygiene measures (e.g., limiting user privileges and regularly rotating access tokens) are vital, but there are also specific best practices for distributed systems like Kubernetes. The Kubernetes RBAC system should be used to configure access for each user and service account in your cluster. Restricting accounts to the bare minimum privileges their functions demand will also improve your security posture.
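To make the least-privilege idea concrete, here is a hypothetical RBAC manifest (all names are placeholders) that lets one service account read exactly one Secret and nothing else in its namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: email-secrets-reader
  namespace: default
rules:
  # Only "get" on one named Secret -- the bare minimum this service needs.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["smtp-credentials"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: email-secrets-reader-binding
  namespace: default
subjects:
  - kind: ServiceAccount
    name: email-service
    namespace: default
roleRef:
  kind: Role
  name: email-secrets-reader
  apiGroup: rbac.authorization.k8s.io
```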