Crossplane is implemented as a Kubernetes add-on and extends any cluster with the ability to provision and manage cloud infrastructure, services, and applications. Crossplane uses Kubernetes-style declarative, API-driven configuration and management to control any piece of infrastructure, on-premises or in the cloud.
A Developer-Friendly API
As Crossplane builds upon Kubernetes, it’s no surprise that a developer requests a particular service by creating a Kubernetes resource. For Kubernetes developers, this is straightforward.
Crossplane includes support for AWS, Azure, GCP, and Alibaba, and the community is adding support for many other providers.
Production-Ready With the Help of K8s
Crossplane uses Kubernetes controllers and the concept of continuous reconciliation to run the platform. If something breaks (which it will), Crossplane will detect the drift and restore the desired state.
Open Source & Open Governance
Crossplane is not only Open Source (Apache 2.0), but as part of the CNCF, it’s also openly governed.
Terraform keeps its state locally or in a remote backend such as AWS S3, where state locking blocks other engineers from applying changes at the same time. Crossplane does not store separate state anywhere: the YAML manifests themselves are the state, which allows multiple engineers to collaborate.
Terraform uses a monolithic ‘apply’ process; there is no recommended way to modify only one piece of infrastructure within a configuration. In Crossplane, every piece of infrastructure is an API endpoint that supports create, read, update, and delete operations.
Integration and Automation
Terraform is a command line tool - not a control plane. Because it is a short lived, one-shot process it will only attempt to reconcile your desired configuration with actual infrastructure when it is invoked. Crossplane, on the other hand, is built as a series of long lived, always-on control loops. It constantly observes and corrects an organisation’s infrastructure to match its desired configuration whether changes are expected or not.
Let’s take a look at how Crossplane allows us to provision cloud resources.
You can find the resources in abhivaidya07/crossplane_gke
- Use Helm 3 to install the latest official stable release of Crossplane.
kubectl create namespace crossplane-system
helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update
helm install crossplane --namespace crossplane-system crossplane-stable/crossplane
- Check Crossplane Status.
helm list -n crossplane-system
kubectl get all -n crossplane-system
- Use the following command to install the Crossplane CLI.
curl -sL https://raw.githubusercontent.com/crossplane/crossplane/master/install.sh | sh
- Verify the installation.
kubectl crossplane --version
Providers extend Crossplane to enable infrastructure resource provisioning. In order to provision a resource, a Custom Resource Definition (CRD) needs to be registered in your Kubernetes cluster, and its controller should be watching the Custom Resources those CRDs define. Provider packages contain many Custom Resource Definitions and their controllers.
- Install the GCP Provider with
kubectl crossplane install provider crossplane/provider-gcp:v0.18.0
- Wait until the package becomes healthy.
watch kubectl get pkg
- Wait for the pods to reach the Running state.
kubectl get pods -n crossplane-system
- Clone the Repository
git clone https://github.com/abhivaidya07/crossplane_gke.git
- Run the script to create the Service Account and download its key file.
Note: Change the PROJECT_ID and SA_NAME according to usage
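The script in the repo is not reproduced here, but a minimal sketch of what such a script typically does looks like the following. The PROJECT_ID and SA_NAME values, the `roles/container.admin` role, and the `creds.json` output path are illustrative assumptions; adapt them to the actual script in the repository.

```shell
#!/bin/bash
# Sketch: create a GCP service account, grant it permissions to manage GKE,
# and download a JSON key file. Values below are placeholders.
PROJECT_ID="my-gcp-project"   # change to your GCP project ID
SA_NAME="crossplane-sa"       # change to your service account name

# Create the service account
gcloud iam service-accounts create "${SA_NAME}" --project "${PROJECT_ID}"

# Grant it a role that allows managing GKE clusters (assumed role)
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/container.admin

# Download the key file used in the next step
gcloud iam service-accounts keys create creds.json \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```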
- Use the following command to create a Kubernetes secret from the service account JSON file created in the previous step.
kubectl create secret generic gcp-creds -n crossplane-system --from-file=creds=./creds.json
- Run the script to create the ProviderConfig object, which configures credentials for the GCP Provider.
Note: Change the PROJECT_ID according to usage
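For reference, a minimal sketch of such a ProviderConfig is shown below. The apiVersion follows the provider-gcp API group, and the secretRef points at the `gcp-creds` secret created above; `my-gcp-project` is a placeholder PROJECT_ID, and the exact manifest in the repo's script may differ.

```yaml
apiVersion: gcp.crossplane.io/v1beta1
kind: ProviderConfig
metadata:
  name: default
spec:
  # Placeholder: replace with your GCP project ID
  projectID: my-gcp-project
  credentials:
    source: Secret
    secretRef:
      namespace: crossplane-system
      name: gcp-creds   # the secret created in the previous step
      key: creds
```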
- Update the YAML file with following changes:
In Cluster Resource
a. network: The name of the Google Compute Engine network in which the cluster should be created. If left unspecified, the default network will be used.
b. subnetwork: The name of the Google Compute Engine subnetwork in which the cluster should be created.
c. location: The name of the Google Compute Engine zone or region in which the cluster resides.
In Nodepool Resource
a. config.machineType (if required): The name of a Google Compute Engine machine type (e.g. n1-standard-1). If unspecified, the default machine type is n1-standard-1.
b. config.diskSizeGb (if required): Size of the disk attached to each node, specified in GB. The smallest allowed disk size is 10GB. If unspecified, the default disk size is 100GB.
c. config.diskType (if required): Type of the disk attached to each node (e.g. 'pd-standard' or 'pd-ssd'). If unspecified, the default disk type is 'pd-standard'.
d. initialNodeCount (if required): The initial node count for the pool.
e. locations: The list of Google Compute Engine zones in which the NodePool's nodes should be located.
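To illustrate how those fields fit together, here is a hedged sketch of what a `gke.yaml` with a Cluster and NodePool might look like. The apiVersions follow provider-gcp's container.gcp.crossplane.io API group, and the names, location, and machine type are placeholder values; the actual manifest in the cloned repository is the authoritative version.

```yaml
# Sketch of a GKE Cluster managed resource (field names per provider-gcp)
apiVersion: container.gcp.crossplane.io/v1beta2
kind: Cluster
metadata:
  name: crossplane-cluster
spec:
  forProvider:
    location: us-central1-c        # zone or region for the cluster
    network: "default"             # GCE network; default network if omitted
---
# Sketch of the NodePool attached to the cluster above
apiVersion: container.gcp.crossplane.io/v1beta1
kind: NodePool
metadata:
  name: crossplane-np
spec:
  forProvider:
    clusterRef:
      name: crossplane-cluster     # references the Cluster resource above
    initialNodeCount: 2
    locations:
      - us-central1-c
    config:
      machineType: n1-standard-1   # default if unspecified
      diskSizeGb: 100              # default if unspecified
      diskType: pd-standard        # default if unspecified
```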
- Create GKE Cluster and NodePool by applying the YAML file
kubectl apply -f gke.yaml
- Check the status of GKE Cluster
kubectl get cluster.container
- Check the status of GKE NodePool
kubectl get nodepool.container
- Wait until the Cluster and NodePool are in the RUNNING state.
$ kubectl get cluster.container
NAME                 READY   SYNCED   STATE     ENDPOINT        LOCATION        AGE
crossplane-cluster   True    True     RUNNING   xx.xxx.xxx.xx   us-central1-c   5m22s
$ kubectl get nodepool.container
NAME            READY   SYNCED   STATE     CLUSTER-REF          AGE
crossplane-np   True    True     RUNNING   crossplane-cluster   5m15s
Yippee!!! You have successfully configured a GKE Cluster using Crossplane.
THANK YOU :)