Kubernetes is an undeniable powerhouse in container orchestration, but its complexity can leave developers feeling lost at sea. This blog post explores how Backstage can act as a lighthouse, guiding developers through the murky waters of Kubernetes.
The Kubernetes Challenge
There's no denying that Kubernetes packs a punch. However, its intricate nature creates a significant learning curve for developers. As a platform engineer, your goal is to provide tools that make Kubernetes more approachable and lower that barrier to entry.
Backstage: A Developer Portal Platform
Enter Backstage: a platform specifically designed for building developer portals. These portals act as a central hub, consolidating information about various development activities, including:
- Continuous Integration/Continuous Delivery (CI/CD) pipelines: monitor and track your pipelines for efficient deployments.
- Documentation: keep your team on the same page with readily accessible documentation.
- Kubernetes deployments: gain insight into the health and performance of your deployments directly through Backstage.
The Software Catalog: A Centralized Source of Truth
One of Backstage's core strengths is the software catalog. It acts as a single source of truth for service information, including:
- Ownership: clearly identify who owns and maintains each service.
- Git repository location: simplify access to each service's codebase.
- Relationships between services: understand how different services interact and depend on each other.
The beauty of the software catalog lies in its adaptability and extensibility. You can leverage existing open-source plugins or create custom ones to tailor the catalog to your organization's specific needs.
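To make this concrete, here is a rough sketch of a catalog entity descriptor; the component name, owner, repository slug, and dependency below are placeholder values, not taken from the original post:
# catalog-info.yaml — registered in the Backstage software catalog (all names are placeholders)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  annotations:
    github.com/project-slug: my-org/payments-service   # Git repository location
spec:
  type: service
  lifecycle: production
  owner: team-payments             # ownership details
  dependsOn:
    - component:billing-service    # relationship to another service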
Backstage in Action: Everyday Kubernetes Tasks
Let's look at two practical use cases for the Backstage Kubernetes plugin. The first addresses common developer questions, such as "On which cluster is a particular service running?" Backstage answers these basic inquiries without a detour to the Kubernetes dashboard.
The second use case tackles troubleshooting errors. Backstage aggregates crash logs from all the pods within a service and offers a convenient link to a log aggregation platform for further analysis.
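For reference, the Kubernetes plugin ties catalog entities to cluster workloads through entity annotations; a brief sketch of that excerpt, with placeholder values:
# excerpt from a component's catalog-info.yaml (placeholder values)
metadata:
  annotations:
    backstage.io/kubernetes-id: payments-service   # matched against labeled Kubernetes objects
    # or select workloads by label instead:
    # backstage.io/kubernetes-label-selector: 'app=payments'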
Boosting Developer Productivity
In essence, Backstage empowers developers by providing a centralized platform for viewing and managing their Kubernetes services. This streamlined approach can significantly enhance developer productivity.
Deploying Backstage on Kubernetes
To begin with, let's set up a namespace in Kubernetes to segregate services in a multi-tenant environment. We can either use the kubectl create namespace command directly or create a Namespace definition file and apply it using kubectl apply.
apiVersion: v1
kind: Namespace
metadata:
  name: backstage
This YAML file defines a namespace named "backstage".
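Either approach works; for example, assuming the manifest above is saved as backstage-namespace.yaml (the filename is arbitrary):
$ kubectl create namespace backstage          # imperative option
$ kubectl apply -f backstage-namespace.yaml   # declarative option (hypothetical filename)
$ kubectl get namespace backstage             # verify the namespace exists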
Once the namespace is set up, we can move on to configuring PostgreSQL for our Backstage application. First, we'll create a Kubernetes Secret to store the PostgreSQL username and password. This keeps the credentials out of the Deployment manifests and is consumed by both the PostgreSQL and Backstage Deployments.
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secrets
  namespace: backstage
type: Opaque
data:
  POSTGRES_USER: YmFja3N0YWdl
  POSTGRES_PASSWORD: aHVudGVyMg==
The values are base64-encoded, as Kubernetes requires for the data field of a Secret (note that base64 is an encoding, not encryption, so the manifest itself should still be treated as sensitive). After creating the Secret definition, we apply it to the Kubernetes cluster.
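For example, assuming the manifest is saved as postgres-secrets.yaml (hypothetical filename), the values can be generated and the Secret applied like this:
$ echo -n 'backstage' | base64    # YmFja3N0YWdl; -n avoids encoding a trailing newline
$ echo -n 'hunter2' | base64      # aHVudGVyMg==
$ kubectl apply -f postgres-secrets.yaml
$ kubectl get secret postgres-secrets --namespace=backstage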
Next, PostgreSQL requires a persistent volume to store data. We define a PersistentVolume along with a PersistentVolumeClaim to ensure data persistence.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-storage
  namespace: backstage
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 2G
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: '/mnt/data'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-storage-claim
  namespace: backstage
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
This creates a local volume with a capacity of 2 gigabytes. After defining the storage, we apply it to the Kubernetes cluster.
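For example, assuming both definitions are saved together in postgres-storage.yaml (hypothetical filename):
$ kubectl apply -f postgres-storage.yaml
$ kubectl get pv,pvc --namespace=backstage    # the claim should report STATUS Bound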
Now, we move on to deploying PostgreSQL itself. We define a Deployment descriptor for PostgreSQL, specifying its image, environment variables, and volume mounts.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13.2-alpine
          imagePullPolicy: 'IfNotPresent'
          ports:
            - containerPort: 5432
          envFrom:
            - secretRef:
                name: postgres-secrets
          volumeMounts:
            - mountPath: /var/lib/postgresql/data
              name: postgresdb
              subPath: data
      volumes:
        - name: postgresdb
          persistentVolumeClaim:
            claimName: postgres-storage-claim
This Deployment ensures that PostgreSQL is up and running. We apply it to the Kubernetes cluster, and once deployed, we can verify its status.
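For example, assuming the Deployment is saved as postgres.yaml (hypothetical filename):
$ kubectl apply -f postgres.yaml
$ kubectl get pods --namespace=backstage -l app=postgres    # wait for STATUS Running
$ kubectl logs --namespace=backstage deployment/postgres    # inspect startup output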
After setting up PostgreSQL, we proceed to create a Kubernetes Service for it. This Service allows other pods to connect to the PostgreSQL database.
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: backstage
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
We apply this Service to the Kubernetes cluster, and now PostgreSQL is ready to handle connections from other pods.
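In practice, assuming the definition is saved as postgres-service.yaml (hypothetical filename):
$ kubectl apply -f postgres-service.yaml
$ kubectl get endpoints postgres --namespace=backstage    # should list the pod IP on port 5432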
With PostgreSQL set up, we can now proceed to deploy the Backstage instance. This involves creating secrets, a deployment, and a service for Backstage similar to what we did for PostgreSQL.
Similar to PostgreSQL, we first create a Kubernetes Secret to store any configuration secrets needed for Backstage, such as authorization tokens.
apiVersion: v1
kind: Secret
metadata:
  name: backstage-secrets
  namespace: backstage
type: Opaque
data:
  GITHUB_TOKEN: VG9rZW5Ub2tlblRva2VuVG9rZW5NYWxrb3ZpY2hUb2tlbg==
After creating the secret, we apply it to the Kubernetes cluster.
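As with the PostgreSQL credentials, the token value must be base64-encoded first; for example, with the manifest saved as backstage-secrets.yaml (hypothetical filename):
$ echo -n "$GITHUB_TOKEN" | base64    # encode your real token, not the placeholder above
$ kubectl apply -f backstage-secrets.yaml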
Now, we define a Deployment descriptor for Backstage, specifying its image, environment variables, and ports.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backstage
  template:
    metadata:
      labels:
        app: backstage
    spec:
      containers:
        - name: backstage
          image: backstage:1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 7007
          envFrom:
            - secretRef:
                name: postgres-secrets
            - secretRef:
                name: backstage-secrets
This Deployment ensures that the Backstage instance is up and running. We apply it to the Kubernetes cluster, and once deployed, we can verify its status.
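For example, with the Deployment saved as backstage.yaml (hypothetical filename):
$ kubectl apply -f backstage.yaml
$ kubectl get deployment backstage --namespace=backstage    # READY should eventually show 1/1
$ kubectl logs --namespace=backstage deployment/backstage   # check the backend logs for startup errors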
After deploying Backstage, we create a Kubernetes Service to route incoming requests to the correct pods.
apiVersion: v1
kind: Service
metadata:
  name: backstage
  namespace: backstage
spec:
  selector:
    app: backstage
  ports:
    - name: http
      port: 80
      targetPort: http
This Service ensures that other pods can connect to the Backstage instance. We apply it to the Kubernetes cluster.
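For example, with the Service saved as backstage-service.yaml (hypothetical filename):
$ kubectl apply -f backstage-service.yaml
$ kubectl get endpoints backstage --namespace=backstage    # should list the Backstage pod on port 7007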
Now, our Backstage deployment is fully operational! We can forward a local port to the service to access it locally.
$ sudo kubectl port-forward --namespace=backstage svc/backstage 80:80
With this setup, we can access our Backstage instance in a browser at localhost.
Let's delve into the additional steps and considerations for a production deployment of Backstage on Kubernetes.
- Set up a more reliable volume: The PersistentVolume configured earlier uses local Kubernetes node storage, which may not be suitable for production environments. It's recommended to replace this with a more reliable storage solution such as a cloud volume or network-attached storage.
- Expose the Backstage service: The Kubernetes Service created for Backstage is not exposed for connections from outside the cluster. To enable external access, you can use a Kubernetes Ingress or an external load balancer, as sketched below.
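One possible Ingress sketch (not part of the original setup), assuming an NGINX ingress controller is installed in the cluster and backstage.example.com is a placeholder hostname you control:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backstage
  namespace: backstage
spec:
  ingressClassName: nginx            # assumes an NGINX ingress controller
  rules:
    - host: backstage.example.com    # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backstage
                port:
                  name: http
DNS for the hostname and TLS termination would still need to be configured separately.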
- Update the Deployment image: To update the Kubernetes Deployment with a newly published version of your Backstage Docker image, update the image tag reference in the Deployment YAML file and then apply the changes using kubectl apply -f.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage
  namespace: backstage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backstage
  template:
    metadata:
      labels:
        app: backstage
    spec:
      containers:
        - name: backstage
          image: your-updated-image:tag
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 7007
          envFrom:
            - secretRef:
                name: postgres-secrets
            - secretRef:
                name: backstage-secrets
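After editing the image tag, the rollout can be applied and watched; the filename here is hypothetical:
$ kubectl apply -f backstage.yaml
$ kubectl rollout status deployment/backstage --namespace=backstage
# alternatively, bump the image without editing the file:
$ kubectl set image deployment/backstage backstage=your-updated-image:tag --namespace=backstage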
- Configure app and backend URLs: Ensure that the URLs configured in your app-config.yaml file match the URLs you're forwarding locally for testing or the URLs you've set up for production environments.
app:
  baseUrl: http://localhost
backend:
  baseUrl: http://localhost
Update these URLs according to your deployment environment.
- Authentication provider configuration: If you're using an authentication provider, ensure that its address is correctly configured so the authentication pop-up works properly, and update the relevant configuration in your Backstage application.
By addressing these additional steps and considerations, you can have a robust and production-ready deployment of Backstage on Kubernetes.
Conclusion
By integrating Backstage with Kubernetes, you can empower your developers with a centralized platform for viewing and managing their deployments. Backstage simplifies access to crucial information and streamlines workflows, ultimately boosting developer productivity and reducing friction in your Kubernetes environment.