Robin Cher

Moving to Pomerium Identity Aware Proxy

Introduction

This is one of several learnings we encountered as part of Engineering Suite in Government Digital Services, GovTech SG. The focus for us in Engineering Suite is creating productivity tools that support our SG Tech Stack initiative. In this article, I will share how an Identity-Aware Proxy (IAP) enhanced my team's security posture while making our developers' lives happier.

What is an Identity-Aware Proxy (IAP)?

An IAP is a proxy that enables secure access to upstream services. It can attest to a user's identity or perform delegated authorisation for these services.

It represents a paradigm shift from the traditional security model, where the focus was on securing the network perimeter through IPs and ports. With the proxy approach, not only are users verified, but application requests can also be terminated, examined, and authorised. IAP relies on application-level access controls, whereby configured policies assess user or application intent, not just ports and IPs.

Additionally, it supports the "Never trust, always verify" mantra of a Zero Trust security model. You can read more about IAP as described by Google.

(Image credit: Google IAP Overview)

Our Scenario

Due to the Covid pandemic, most organizations have shifted to a remote working environment. It is by no means an easy feat to get everyone connected securely from home, and the risk profile these days has changed. The last thing we want is to implement extra layers of security restrictions and constraints, resulting in a terrible developer experience for marginal security gain. We wanted a solution that enables our users to access our workloads from untrusted networks without yet another VPN. Traditionally, accessing the development toolchain required a separate VPN service, as the applications are hosted by a different team and it was impossible to leverage their VPN service to front the team's DEV environment.

We looked at a few IAP solutions and eventually decided to proceed with Pomerium, mainly because it is open source and our team had prior experience working with it. It is also very easy to deploy a full setup in a Kubernetes cluster.

Decomposing Pomerium

Pomerium Proxy

It mainly intercepts and directs requests to the Authenticate service to establish an identity from the IdP. Additionally, it processes policies to determine internal/external route mappings.

Pomerium Authenticate (AuthN)

It handles the authentication flow with the IdP.

Pomerium Authorize (AuthZ)

Processes policies to determine permissions for each service and handles authorization checks.

Pomerium Cache

Stores session and identity data in persistent storage, along with IdP access and refresh tokens.

Implementation

The components required:

  • Kubernetes cluster with ingress
  • Azure Active Directory

In this article, I will be deploying Pomerium to a single node in an AWS EKS cluster with NGINX ingress, with the perimeter fronted by an Application Load Balancer. Azure Active Directory will be configured as the identity provider.

System Context

An overview of how Pomerium is set up: we need to secure both an internal service that resides within the same cluster and an external service that is hosted outside.

(System context diagram)

Additionally, I will list some improvements that can be made in a later section.

Pomerium Configurations

Let's go through the Pomerium configuration in detail.

Main Configurations

```yaml
# Main configuration flags : https://www.pomerium.io/docs/reference/reference/
insecure_server: true
grpc_insecure: true
address: ":80"
grpc_address: ":80"
```


Leave this as the default. This configures how Pomerium's services discover each other. For this experiment, we can allow insecure traffic.

Workloads URL

```yaml
## Workload URL
authenticate_service_url: https://authenticate.yourdomain.com
authorize_service_url: http://pomerium-authorize-service.default.svc.cluster.local
cache_service_url: http://pomerium-cache-service.default.svc.cluster.local
```


This defines the routes for the required services. Pay special attention to authenticate_service_url, which is a publicly discoverable URL that the web browser will be redirected to, while the other two are internal Kubernetes hostnames.

IDP Details

```yaml
idp_provider: azure
idp_client_id: xxxxx
idp_client_secret: "xxx"
idp_provider_url: xxxx
```

Enter the details that you retrieved from the identity provider.

I strongly recommend not hard-coding any secrets into the YAML files for production use. Alternatively, you can create Kubernetes Secrets and inject them as environment variables.
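
As a minimal sketch, the client secret above could be stored in a Secret and then referenced from the Pomerium container spec. The Secret and key names here are illustrative; Pomerium reads its configuration keys as upper-cased environment variables:

```bash
kubectl create secret generic idp-client-secret --from-literal=idp-client-secret=xxx
```

```yaml
# Hypothetical excerpt from the Pomerium container spec:
env:
  - name: IDP_CLIENT_SECRET
    valueFrom:
      secretKeyRef:
        name: idp-client-secret   # the Secret created above
        key: idp-client-secret
```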

For the list of supported identity providers and the steps to generate the above details, please visit here

Policy

```yaml
policy:
  - from: https://httpbin.yourdomain.com
    to: http://httpbin.default.svc.cluster.local:8000
    allowed_domains:
      - outlook.com
```

Policy contains route-specific settings and access control details. For this example, Pomerium will intercept all requests to https://httpbin.yourdomain.com and proxy them to the internal DNS hostname of the httpbin workload. The allowed_domains entry restricts access to users whose IdP email address belongs to outlook.com.
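
Access can also be scoped more tightly than a whole email domain. A hedged sketch, using a hypothetical second route, that restricts access to specific users:

```yaml
policy:
  - from: https://grafana.yourdomain.com
    to: http://grafana.default.svc.cluster.local:3000
    allowed_users:
      - alice@outlook.com
      - bob@outlook.com
```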

Your final pomerium-config.yaml should look something like this:

```yaml
# Main configuration flags : https://www.pomerium.io/docs/reference/reference/
insecure_server: true
grpc_insecure: true
address: ":80"
grpc_address: ":80"

## Workload URL
authenticate_service_url: https://authenticate.yourdomain.com
authorize_service_url: http://pomerium-authorize-service.default.svc.cluster.local
cache_service_url: http://pomerium-cache-service.default.svc.cluster.local

idp_provider: azure
idp_client_id: REPLACE_ME
idp_client_secret: "REPLACE_ME"
idp_provider_url: REPLACE_ME

policy:
  - from: https://httpbin.yourdomain.com
    to: http://httpbin.default.svc.cluster.local:8000
    allowed_domains:
      - outlook.com
```

Deploying

Pomerium ConfigMap

We will create a ConfigMap based on the configuration above, which will then be mounted by the Pomerium workloads.
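
Note that pomerium-config.yaml is a ConfigMap manifest wrapping the configuration above. A minimal sketch, where the ConfigMap name and data key are assumptions that must match whatever the Pomerium deployment manifests mount:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pomerium-config
data:
  config.yaml: |
    insecure_server: true
    grpc_insecure: true
    # ...the rest of the configuration above
```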

```bash
kubectl apply -f pomerium-config.yaml
```

Create random secrets

The shared secret is used for mutual authentication between Pomerium's services, while the cookie secret is used to encrypt the session cookies set in the browser.

```bash
kubectl create secret generic shared-secret --from-literal=shared-secret=$(head -c32 /dev/urandom | base64)
kubectl create secret generic cookie-secret --from-literal=cookie-secret=$(head -c32 /dev/urandom | base64)
```

Nginx controller

```bash
kubectl apply -f ingress-nginx.yaml
```
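
The ingress-nginx.yaml manifest is not reproduced here, but conceptually the Ingress needs to route the external hostnames to the Pomerium proxy service. A hedged sketch, where the Service name and port are assumptions based on the Pomerium workload recipe:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pomerium-ingress
spec:
  ingressClassName: nginx
  rules:
    - host: authenticate.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pomerium-proxy-service   # assumed Service fronting the Pomerium proxy
                port:
                  number: 80
    - host: httpbin.yourdomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pomerium-proxy-service
                port:
                  number: 80
```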

Mock Httpbin Services

Deploy a test internal httpbin service

```bash
kubectl apply -f httpbin-internal.yaml
```
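
The httpbin-internal.yaml manifest is likewise only referenced here. A minimal sketch of what it could contain, assuming the public kennethreitz/httpbin image and matching the policy target above (the Service listens on 8000 and forwards to the container's port 80):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
  template:
    metadata:
      labels:
        app: httpbin
    spec:
      containers:
        - name: httpbin
          image: kennethreitz/httpbin
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  selector:
    app: httpbin
  ports:
    - port: 8000        # matches the policy's "to" URL
      targetPort: 80
```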

Deploy Pomerium Workloads

```bash
kubectl apply -f pomerium-proxy.yml
kubectl apply -f pomerium-authenticate.yml
kubectl apply -f pomerium-authorize.yml
kubectl apply -f pomerium-cache.yml
```

Test

  1. Ensure Pomerium workloads are up and running (see the check below)

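A quick way to verify, assuming everything was deployed to the default namespace (pod names will vary with your manifests):

```bash
# Expect the proxy, authenticate, authorize and cache pods in Running state
kubectl get pods
```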

  2. Accessing Httpbin

From your web browser, enter https://httpbin.yourdomain.com

You should be redirected to the IdP sign-in page.


So what happened here? The Pomerium proxy intercepted your request based on the policy defined above, and then redirected you to the identity provider's authentication page.

Enter your credentials, and you will be able to access Httpbin's page.

Additional use case

Pomerium can also proxy services that reside outside of the EKS cluster by leveraging a Kubernetes ExternalName Service. By doing so, services that reside outside of the cluster get a service FQDN within EKS, which is then routable by Pomerium.
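
A hedged sketch of such a Service, with a hypothetical external hostname:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-app
spec:
  type: ExternalName
  externalName: app.external-domain.com   # the service hosted outside the cluster
```

A policy route's "to" field can then point at http://external-app.default.svc.cluster.local.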

Impact

Being able to protect our development and staging environments without another set of VPNs greatly enhances the productivity of our engineering team. With Azure AD integration, we can carry out device posture checks, via conditional access policies, on the developer machine being used to access our protected environment. This improves the team's security posture, as a device sitting within a VPN boundary is no longer a guarantee of safety in today's cybersecurity context.

Operationally, running the proxy on Kubernetes facilitates capacity scaling to meet any sudden surge in demand. A properly instrumented cluster can also help the DevOps team closely monitor all incoming network traffic.
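
As an illustration of the scaling point, a HorizontalPodAutoscaler could be attached to the proxy. This is only a sketch, assuming the proxy Deployment is named pomerium-proxy:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: pomerium-proxy
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: pomerium-proxy   # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```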

References

  1. Pomerium Workload Recipe @ GitHub
  2. Pomerium Official Website
