When you want to deploy a CockroachDB cluster, you have to pick one of the two security modes CockroachDB offers by default: secure and insecure. You need to choose one of them when you deploy the cluster.
With secure mode, I tried to follow the official docs. Unfortunately, I ran into an issue, mostly related to the certificates. This is the common error I got:
*
* ERROR: SSL authentication error while connecting.
*
* initial connection heartbeat failed: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for node, not our-cockroach-pods.default.svc.cluster.local"
*
E191217 07:57:07.082417 1 cli/error.go:233 SSL authentication error while connecting.
After doing several rounds of debugging (I tried anything I thought might fix the issue) and Googling some articles, I found that somebody had already run into the same error before. He created a thread in the CockroachDB forum; if you want to check it, click here.
The conclusion: this error is caused by EKS not including the Subject Alternative Names when it issues the certificate, so any domain/service name we pass in won't be recognized.
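If you want to confirm this yourself, you can inspect the Subject Alternative Names that actually ended up in the node certificate. This is just a sanity check I'd suggest, assuming the certificate sits at /cockroach/cockroach-certs/node.crt inside the pod and that openssl is installed on your machine:
kubectl exec -it cockroachdb-0 -- cat /cockroach/cockroach-certs/node.crt | openssl x509 -noout -text | grep -A 1 "Subject Alternative Name"
If the DNS names of your services are missing from that output, you are hitting the same problem described above.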
I found the solution in an article posted by Alex Robinson, and it has already been merged into the official CockroachDB GitHub repo. You can check Alex's post here, and the official repo contains the solution as well; check here.
I will post it again here and add some guidance, to make it easier for everybody who needs or wants to get the solution directly.
Before I post the solution, these are the prerequisites you need: a running EKS cluster, kubectl configured to talk to that cluster, and the cockroach binary installed on your local machine (you will use it to generate the certificates).
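A quick way to double-check the tooling before starting (kubectl must point at your EKS cluster, and the cockroach binary is only needed locally for the certificate commands further down):
# Confirm kubectl talks to the right cluster and the worker nodes are Ready
kubectl get nodes
# Confirm the cockroach binary is available locally
cockroach version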
After you fulfill the prerequisites, create this YAML first as the initial setup for secure CockroachDB:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: cockroachdb
  labels:
    app: cockroachdb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cockroachdb
subjects:
- kind: ServiceAccount
  name: cockroachdb
  namespace: default
---
apiVersion: v1
kind: Service
metadata:
  # This service is meant to be used by clients of the database. It exposes a ClusterIP that will
  # automatically load balance connections to the different database pods.
  name: cockroachdb-public
  labels:
    app: cockroachdb
spec:
  ports:
  # The main port, served by gRPC, serves Postgres-flavor SQL, internode
  # traffic and the cli.
  - port: 26257
    targetPort: 26257
    name: grpc
  # The secondary port serves the UI as well as health and debug endpoints.
  - port: 8080
    targetPort: 8080
    name: http
  selector:
    app: cockroachdb
---
apiVersion: v1
kind: Service
metadata:
  # This service only exists to create DNS entries for each pod in the stateful
  # set such that they can resolve each other's IP addresses. It does not
  # create a load-balanced ClusterIP and should not be used directly by clients
  # in most circumstances.
  name: cockroachdb
  labels:
    app: cockroachdb
  annotations:
    # Use this annotation in addition to the actual publishNotReadyAddresses
    # field below because the annotation will stop being respected soon but the
    # field is broken in some versions of Kubernetes:
    # https://github.com/kubernetes/kubernetes/issues/58662
    service.alpha.kubernetes.io/tolerate-unready-endpoints: "true"
    # Enable automatic monitoring of all instances when Prometheus is running in the cluster.
    prometheus.io/scrape: "true"
    prometheus.io/path: "_status/vars"
    prometheus.io/port: "8080"
spec:
  ports:
  - port: 26257
    targetPort: 26257
    name: grpc
  - port: 8080
    targetPort: 8080
    name: http
  # We want all pods in the StatefulSet to have their addresses published for
  # the sake of the other CockroachDB pods even before they're ready, since they
  # have to be able to talk to each other in order to become ready.
  publishNotReadyAddresses: true
  clusterIP: None
  selector:
    app: cockroachdb
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: cockroachdb-budget
  labels:
    app: cockroachdb
spec:
  selector:
    matchLabels:
      app: cockroachdb
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  serviceName: "cockroachdb"
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
    spec:
      serviceAccountName: cockroachdb
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - cockroachdb
              topologyKey: kubernetes.io/hostname
      containers:
      - name: cockroachdb
        image: cockroachdb/cockroach:v19.2.2
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 26257
          name: grpc
        - containerPort: 8080
          name: http
        livenessProbe:
          httpGet:
            path: "/health"
            port: http
            scheme: HTTPS
          initialDelaySeconds: 30
          periodSeconds: 5
        readinessProbe:
          httpGet:
            path: "/health?ready=1"
            port: http
            scheme: HTTPS
          initialDelaySeconds: 10
          periodSeconds: 5
          failureThreshold: 2
        volumeMounts:
        - name: datadir
          mountPath: /cockroach/cockroach-data
        - name: certs
          mountPath: /cockroach/cockroach-certs
        env:
        - name: COCKROACH_CHANNEL
          value: kubernetes-secure
        command:
          - "/bin/bash"
          - "-ecx"
          # The use of qualified `hostname -f` is crucial:
          # Other nodes aren't able to look up the unqualified hostname.
          - "exec /cockroach/cockroach start --logtostderr --certs-dir /cockroach/cockroach-certs --advertise-host $(hostname -f) --http-addr 0.0.0.0 --join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb --cache 25% --max-sql-memory 25%"
      # No pre-stop hook is required, a SIGTERM plus some time is all that's
      # needed for graceful shutdown of a node.
      terminationGracePeriodSeconds: 60
      volumes:
      - name: datadir
        persistentVolumeClaim:
          claimName: datadir
      - name: certs
        secret:
          secretName: cockroachdb.node
          defaultMode: 256
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: 100Gi
After that, try to follow these instructions sequentially (some adjustments are needed, like the DB name, cluster name, file locations, etc.). Save the YAML above as bring-your-own-certs-statefulset.yaml, since the create command below refers to that filename.
mkdir certs
mkdir my-safe-directory
cockroach cert create-ca --certs-dir=certs --ca-key=my-safe-directory/ca.key
cockroach cert create-client root --certs-dir=certs --ca-key=my-safe-directory/ca.key
kubectl create secret generic cockroachdb.client.root --from-file=certs
cockroach cert create-node --certs-dir=certs --ca-key=my-safe-directory/ca.key localhost 127.0.0.1 cockroachdb-public cockroachdb-public.default cockroachdb-public.default.svc.cluster.local *.cockroachdb *.cockroachdb.default *.cockroachdb.default.svc.cluster.local
kubectl create secret generic cockroachdb.node --from-file=certs
kubectl create -f bring-your-own-certs-statefulset.yaml
kubectl exec -it cockroachdb-0 -- /cockroach/cockroach init --certs-dir=/cockroach/cockroach-certs
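Before moving on to the client pod, it's worth checking that all three pods come up and report Ready after the init command. A small sanity check (the label and pod name come from the manifest above):
kubectl get pods -l app=cockroachdb
kubectl logs cockroachdb-0 | tail -n 20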
For the client pod (you need a client pod to get access to CockroachDB), apply this YAML configuration first:
apiVersion: v1
kind: Pod
metadata:
  name: cockroachdb-client-secure
  labels:
    app: cockroachdb-client
spec:
  serviceAccountName: cockroachdb
  containers:
  - name: cockroachdb-client
    image: cockroachdb/cockroach:v2.0.5
    # Keep a pod open indefinitely so kubectl exec can be used to get a shell to it
    # and run cockroach client commands, such as cockroach sql, cockroach node status, etc.
    command:
    - sleep
    - "2147483648" # 2^31
    volumeMounts:
    - name: client-certs
      mountPath: /cockroach-certs
  volumes:
  - name: client-certs
    secret:
      secretName: cockroachdb.client.root
      defaultMode: 256
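To actually create the client pod, save the manifest above to a file and apply it. The filename client-secure.yaml is just my own choice, and the wait command is optional (it needs a reasonably recent kubectl):
kubectl create -f client-secure.yaml
kubectl wait --for=condition=Ready pod/cockroachdb-client-secure --timeout=120s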
Then execute this command whenever you want to access our CockroachDB through the client:
kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --url="postgres://root@cockroachdb-public:26257/?sslmode=verify-full&sslcert=/cockroach-certs/client.root.crt&sslkey=/cockroach-certs/client.root.key&sslrootcert=/cockroach-certs/ca.crt"
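Once the SQL shell opens, you can use the cluster as usual. As an illustration (the database name mydb is just a placeholder, not part of the original setup), you can also run a one-off statement through the same client pod with --execute:
kubectl exec -it cockroachdb-client-secure -- ./cockroach sql --url="postgres://root@cockroachdb-public:26257/?sslmode=verify-full&sslcert=/cockroach-certs/client.root.crt&sslkey=/cockroach-certs/client.root.key&sslrootcert=/cockroach-certs/ca.crt" --execute="CREATE DATABASE IF NOT EXISTS mydb; SHOW DATABASES;"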
It's a different story with insecure mode: it goes straight into setting up the Kubernetes cluster for CockroachDB on AWS EKS. You just need to follow the instructions from the official CockroachDB docs; check here for how to do it.
You need to adapt a bit when you want to run commands. It will feel different if you're used to running CockroachDB inside a VM or on a server before, because you can't execute commands directly anymore; you have to go through kubectl (and some commands get harder when run through kubectl).
Here are some of my tricks for when you want to do something directly against CockroachDB on EKS.
When you want to dump a database, try this command:
kubectl run your-pods-for-cockroachdb-name -it --image=cockroachdb/cockroach:v19.2.1 --rm --restart=Never -- dump database_name --insecure --host=yours-kubernetes-cockroachdb-service
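Because kubectl streams the container's stdout back to your terminal, you can redirect it to keep a local copy of the dump. A rough sketch, where the pod name cockroach-dump and the output file are my own placeholders; dropping the -t flag avoids the terminal adding carriage returns to the redirected output:
kubectl run cockroach-dump -i --image=cockroachdb/cockroach:v19.2.1 --rm --restart=Never -- dump database_name --insecure --host=yours-kubernetes-cockroachdb-service > database_name-backup.sql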
When you want to execute a CockroachDB SQL command, try this:
kubectl run your-pods-for-cockroachdb-name -it --image=cockroachdb/cockroach:v19.2.1 --rm --restart=Never -- sql --insecure --host=yours-kubernetes-cockroachdb-service --execute="your-cockroachdb-command";
A real example:
kubectl run your-pods-for-cockroachdb-name -it --image=cockroachdb/cockroach:v19.2.1 --rm --restart=Never -- sql --insecure --host=yours-kubernetes-cockroachdb-service --execute="show databases"
Voila, congrats! You've successfully accessed your first secure/insecure CockroachDB cluster inside AWS EKS.