Alexandre Viau for Flare

Connect services across Kubernetes clusters using Teleproxy

Teleproxy is a shell script that lets you quickly replace a Kubernetes deployment with a single pod that forwards incoming traffic to another pod running in a destination Kubernetes cluster.

The tool is based on telepresence. We use it at Flare Systems to keep our development setup light while still being able to quickly connect our test apps to a more realistic “staging” environment.

See the code at https://github.com/flared/teleproxy.

Ideal for minimal Minikube setups

Most of Flare Systems’ development setup is based around Minikube, a tool that lets you run Kubernetes locally as a single-node cluster.

While Minikube is great, we quickly ran into performance issues. Devs don’t necessarily have the resources to run all the services they need to test the software component they are working on, or maybe they’d rather have more than 30 minutes of battery life! They may also want to interface with a database that contains more data than the one that we ship in the local development environment.

It would be great if there was a tool that allowed you to quickly swap the database that runs locally inside Minikube with a proxy that points to a database running in another cluster. This would allow for all services running in Minikube to instantly connect with another database with little to no configuration changes. This is exactly what teleproxy allows you to do.

Using teleproxy to swap a Kubernetes deployment with a proxy

Say you have a local deployment called someservice, with pods listening on port 8080, running in your local cluster. To replace it with a proxy to another deployment running in a destination cluster, you would run the following command:

tele-proxy \
    --source_context=minikube \
    --source_deployment=someservice \
    --source_port=8080 \
    --target_context=staging \
    --target_pod=someservice-77697866c6-vsk59 \
    --target_port=8080
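
To find a value for --target_pod, you can list the pods in the destination cluster. A minimal sketch, assuming the target deployment's pods carry an app=someservice label (the selector is an assumption about your labels, not something teleproxy requires):

kubectl --context=staging get pods -l app=someservice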

How it works

Teleproxy is based on telepresence. All it does is run kubectl port-forward in telepresence’s replacement pod. If you don’t already know how telepresence works, the following deployment diagram should help. It follows traffic from a client pod, which uses the service that we are replacing, to the target pod, which is an equivalent pod running inside another cluster.

[Deployment diagram: traffic flows from the client pod, through telepresence’s in-cluster and local containers and the teleproxy container, to the target pod in Cluster B.]

  1. The traffic originates from the client, which typically targets someservice through the deployment's Kubernetes service.

  2. The traffic is received by telepresence’s in-cluster container. Telepresence has scaled down the someservice deployment and replaced its pods with this single in-cluster proxy. It forwards any incoming traffic to the telepresence local pod, which is running outside of the cluster.

  3. The traffic is received by telepresence’s local container, which forwards it to the teleproxy container.

  4. The traffic is received by teleproxy and is forwarded to the destination pod in Cluster B through kubectl port-forward. This container is able to run a port-forward to your destination cluster because it mounts your local kubectl config and some specific environment variables, and contains common tools for authenticating against a Kubernetes cluster, such as the AWS and Google Cloud CLIs.
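
Conceptually, the port-forward in step 4 boils down to something like the following. This is only a sketch using the values from the example command above, not the script's exact invocation (the --address flag, for instance, is an assumption about how the listener is exposed inside the pod):

kubectl --context=staging port-forward --address 0.0.0.0 pod/someservice-77697866c6-vsk59 8080:8080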

Debugging Teleproxy

If you start from a working telepresence setup, the only complexity that is added by teleproxy is that the teleproxy container must be able to connect to your target cluster. Depending on how you regularly connect to that cluster, you may need to mount configuration files or add environment variables to the teleproxy container.

We have configured teleproxy for our own use and have gotten it working with both GKE and AWS EKS. This required:

  • Mounting ~/.aws and ~/.kube
  • Installing the AWS CLI and Google Cloud CLI
  • Setting up compat symlinks for OSX users
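
If the port-forward inside the container fails, a quick way to check the mounted credentials is to run a simple kubectl call against the target context from a container with the same mounts. A minimal sketch, where the image name is a placeholder and the read-only mounts and AWS_PROFILE pass-through are assumptions about your own setup:

docker run --rm \
    -v "$HOME/.kube:/root/.kube:ro" \
    -v "$HOME/.aws:/root/.aws:ro" \
    -e AWS_PROFILE \
    your-teleproxy-image \
    kubectl --context=staging get nodes

If that command lists the nodes of the staging cluster, authentication from inside the container works and the remaining issues are likely on the telepresence side.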

There is probably more to do, and we are willing to merge anything that makes sense.
