DEV Community

Karim

Posted on • Originally published at deep75.Medium

Korifi: an experimental Cloud Foundry V3 API on Kubernetes…

Cloud Foundry is an open-source cloud Platform as a Service (PaaS) on which developers can build, deploy, run, and scale applications. For more information on what Cloud Foundry is and how it helps developers build cloud-native applications and platform operators manage those applications at scale, see the project documentation.

Cloud Foundry Components

I had previously written several articles on this topic.

The Cloud Foundry For Kubernetes project (cf-for-k8s) demonstrated, according to its designers, the effectiveness of the Cloud Foundry model on Kubernetes:

GitHub - cloudfoundry/cf-for-k8s: The open source deployment manifest for Cloud Foundry on Kubernetes

Indeed, Cloud Foundry For Kubernetes (cf-for-k8s) combines the popular CF developer API with Kubernetes, Istio, and other open-source technologies. The project aims to improve developer productivity for organizations using Kubernetes, and cf-for-k8s can be installed in any environment in a few minutes.


https://blogs.sap.com/2021/01/15/back-to-the-future-cloud-foundry-on-kubernetes/

Following the 23/08/21 update, this project was deprecated so that efforts could be focused on Korifi:

GitHub - cloudfoundry/korifi: Cloud Foundry on Kubernetes

Korifi provides a Kubernetes-native application platform by reimplementing the core Cloud Foundry APIs and backing them with a set of Kubernetes custom resources and controllers.

korifi/architecture.md at main · cloudfoundry/korifi

Start by creating an Ubuntu 22.04 LTS instance in Azure and installing Docker:

ubuntu@korifi:~$ curl -fsSL https://get.docker.com | sh -

Client: Docker Engine - Community
 Version: 20.10.22
 API version: 1.41
 Go version: go1.18.9
 Git commit: 3a2c30b
 Built: Thu Dec 15 22:28:04 2022
 OS/Arch: linux/amd64
 Context: default
 Experimental: true

Server: Docker Engine - Community
 Engine:
  Version: 20.10.22
  API version: 1.41 (minimum version 1.12)
  Go version: go1.18.9
  Git commit: 42c8b31
  Built: Thu Dec 15 22:25:49 2022
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.6.14
  GitCommit: 9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc:
  Version: 1.1.4
  GitCommit: v1.1.4-0-g5fd4c4d
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0

================================================================================

To run Docker as a non-privileged user, consider setting up the
Docker daemon in rootless mode for your user:

    dockerd-rootless-setuptool.sh install

Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.

To run the Docker daemon as a fully privileged service, but granting non-root
users access, refer to https://docs.docker.com/go/daemon-access/

WARNING: Access to the remote API on a privileged Docker daemon is equivalent
         to root access on the host. Refer to the 'Docker daemon attack surface'
         documentation for details: https://docs.docker.com/go/attack-surface/

================================================================================

ubuntu@korifi:~$ sudo usermod -aG docker ubuntu
ubuntu@korifi:~$ newgrp docker
ubuntu@korifi:~$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

followed by kind:

Quick Start

ubuntu@korifi:~$ wget -c https://github.com/kubernetes-sigs/kind/releases/download/v0.16.0/kind-linux-amd64 && chmod +x kind-linux-amd64 && sudo mv kind-linux-amd64 /usr/local/bin/kind
ubuntu@korifi:~$ kind

kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build Build one of [node-image]
  completion Output shell completion code for the specified shell (bash, zsh or fish)
  create Creates one of [cluster]
  delete Deletes one of [cluster]
  export Exports one of [kubeconfig, logs]
  get Gets one of [clusters, nodes, kubeconfig]
  help Help about any command
  load Loads images into nodes
  version Prints the kind CLI version

Flags:
  -h, --help help for kind
      --loglevel string DEPRECATED: see -v instead
  -q, --quiet silence all stderr output
  -v, --verbosity int32 info log verbosity, higher value produces more output
      --version version for kind

Use "kind [command] --help" for more information about a command.

and by creating a local Kubernetes cluster inside the Ubuntu instance:

ubuntu@korifi:~$ cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
EOF
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.25.2) 🖼 
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
ubuntu@korifi:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.25.2/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
ubuntu@korifi:~$ kubectl cluster-info --context kind-kind
Kubernetes control plane is running at https://127.0.0.1:38829
CoreDNS is running at https://127.0.0.1:38829/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Install MetalLB to get a local LoadBalancer service, using kind's default Docker address range:

LoadBalancer

ubuntu@korifi:~$ docker network inspect -f '{{.IPAM.Config}}' kind
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 map[]}]

ubuntu@korifi:~$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.7/config/manifests/metallb-native.yaml

namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
secret/webhook-server-cert created
service/webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created

ubuntu@korifi:~$ kubectl wait --namespace metallb-system \
                --for=condition=ready pod \
                --selector=app=metallb \
                --timeout=90s

pod/controller-84d6d4db45-bph5x condition met
pod/speaker-pcl4p condition met

ubuntu@korifi:~$ cat conf.yaml 

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system

ubuntu@korifi:~$ kubectl apply -f conf.yaml 

ipaddresspool.metallb.io/example created
l2advertisement.metallb.io/empty created
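The address pool above is simply carved out of the kind Docker subnet shown earlier. A small shell sketch of that derivation (it assumes a /16 subnet, matching the `docker network inspect` output):

```shell
# Derive a MetalLB L2 pool from the kind network subnet (assumes a /16 like 172.18.0.0/16)
subnet="172.18.0.0/16"
prefix="$(echo "$subnet" | cut -d. -f1-2)"   # first two octets, e.g. 172.18
echo "${prefix}.255.200-${prefix}.255.250"   # 172.18.255.200-172.18.255.250
```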

ubuntu@korifi:~$ kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-565d847f94-kst2l 1/1 Running 0 6m59s
kube-system pod/coredns-565d847f94-rv8pn 1/1 Running 0 6m59s
kube-system pod/etcd-kind-control-plane 1/1 Running 0 7m17s
kube-system pod/kindnet-275pd 1/1 Running 0 6m59s
kube-system pod/kube-apiserver-kind-control-plane 1/1 Running 0 7m17s
kube-system pod/kube-controller-manager-kind-control-plane 1/1 Running 0 7m19s
kube-system pod/kube-proxy-qw9fj 1/1 Running 0 6m59s
kube-system pod/kube-scheduler-kind-control-plane 1/1 Running 0 7m17s
local-path-storage pod/local-path-provisioner-684f458cdd-f6zqf 1/1 Running 0 6m59s
metallb-system pod/controller-84d6d4db45-bph5x 1/1 Running 0 4m51s
metallb-system pod/speaker-pcl4p 1/1 Running 0 4m51s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7m19s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 7m18s
metallb-system service/webhook-service ClusterIP 10.96.186.139 <none> 443/TCP 4m51s

The following environment variables are used:

ROOT_NAMESPACE="cf"
KORIFI_NAMESPACE="korifi-system"
ADMIN_USERNAME="kubernetes-admin"
BASE_DOMAIN="apps-172-18-255-200.nip.io"
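Note that these variables are referenced by the `kubectl` heredocs and the `helm install` command later on, so they need to be exported in the current shell session, e.g.:

```shell
export ROOT_NAMESPACE="cf"
export KORIFI_NAMESPACE="korifi-system"
export ADMIN_USERNAME="kubernetes-admin"
export BASE_DOMAIN="apps-172-18-255-200.nip.io"
echo "api.$BASE_DOMAIN"   # api.apps-172-18-255-200.nip.io
```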

along with this wildcard domain:

ubuntu@korifi:~$ nslookup apps-172-18-255-200.nip.io
Server: 127.0.0.53
Address: 127.0.0.53#53

Non-authoritative answer:
Name: apps-172-18-255-200.nip.io
Address: 172.18.255.200
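nip.io needs no configuration: any hostname that embeds an IP (in dashed or dotted form) resolves to that IP, which is what makes it convenient as a wildcard domain pointing at the MetalLB address. A sketch of the mapping for the dashed form used here:

```shell
# Extract the IP embedded in a dashed nip.io name such as apps-172-18-255-200.nip.io
BASE_DOMAIN="apps-172-18-255-200.nip.io"
ip="$(printf '%s' "${BASE_DOMAIN%.nip.io}" | sed 's/^apps-//; s/-/./g')"
echo "$ip"   # 172.18.255.200
```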

Install Helm and the Cloud Foundry client:

ubuntu@korifi:~$ curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Preparing to install helm into /usr/local/bin
helm installed into /usr/local/bin/helm

ubuntu@korifi:~$ helm ls

NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
ubuntu@korifi:~$ curl -L "https://packages.cloudfoundry.org/stable?release=linux64-binary&version=v8&source=github" | tar -zx

ubuntu@korifi:~$ sudo mv cf* /usr/local/bin/

ubuntu@korifi:~$ cf

cf version 8.5.0+73aa161.2022-09-12, Cloud Foundry command line tool
Usage: cf [global options] command [arguments...] [command options]

Before getting started:
  config login,l target,t
  help,h logout,lo    

Application lifecycle:
  apps,a run-task,rt events
  push,p logs set-env,se
  start,st ssh create-app-manifest
  stop,sp app delete,d
  restart,rs env,e apply-manifest
  restage,rg scale revisions

Services integration:
  marketplace,m create-user-provided-service,cups
  services,s update-user-provided-service,uups
  create-service,cs create-service-key,csk
  update-service delete-service-key,dsk
  delete-service,ds service-keys,sk
  service service-key
  bind-service,bs bind-route-service,brs
  unbind-service,us unbind-route-service,urs

Route and domain management:
  routes,r delete-route create-private-domain,create-domain
  domains map-route       
  create-route unmap-route     

Space management:
  spaces create-space,csp set-space-role
  space-users delete-space unset-space-role
  apply-manifest                        

Org management:
  orgs,o set-org-role
  org-users unset-org-role

CLI plugin management:
  plugins add-plugin-repo repo-plugins
  install-plugin list-plugin-repos    

Commands offered by installed plugins:

Global options:
  --help, -h Show help
  -v Print API request diagnostics to stdout

TIP: Use 'cf help -a' to see all commands.

then _cert-manager_:

Installation

ubuntu@korifi:~$ kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.10.1/cert-manager.yaml

namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created

then the latest version of kpack:

GitHub - pivotal/kpack: Kubernetes Native Container Build Service

ubuntu@korifi:~$ kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.9.1/release-0.9.1.yaml
namespace/kpack created
customresourcedefinition.apiextensions.k8s.io/builds.kpack.io created
customresourcedefinition.apiextensions.k8s.io/builders.kpack.io created
customresourcedefinition.apiextensions.k8s.io/clusterbuilders.kpack.io created
customresourcedefinition.apiextensions.k8s.io/clusterstacks.kpack.io created
customresourcedefinition.apiextensions.k8s.io/clusterstores.kpack.io created
configmap/config-logging created
configmap/build-init-image created
configmap/build-init-windows-image created
configmap/build-waiter-image created
configmap/rebase-image created
configmap/lifecycle-image created
configmap/completion-image created
configmap/completion-windows-image created
deployment.apps/kpack-controller created
serviceaccount/controller created
clusterrole.rbac.authorization.k8s.io/kpack-controller-admin created
clusterrolebinding.rbac.authorization.k8s.io/kpack-controller-admin-binding created
clusterrole.rbac.authorization.k8s.io/kpack-controller-servicebindings-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/kpack-controller-servicebindings-binding created
role.rbac.authorization.k8s.io/kpack-controller-local-config created
rolebinding.rbac.authorization.k8s.io/kpack-controller-local-config-binding created
customresourcedefinition.apiextensions.k8s.io/images.kpack.io created
priorityclass.scheduling.k8s.io/kpack-control-plane created
priorityclass.scheduling.k8s.io/kpack-build-high-priority created
priorityclass.scheduling.k8s.io/kpack-build-low-priority created
service/kpack-webhook created
customresourcedefinition.apiextensions.k8s.io/sourceresolvers.kpack.io created
mutatingwebhookconfiguration.admissionregistration.k8s.io/defaults.webhook.kpack.io created
validatingwebhookconfiguration.admissionregistration.k8s.io/validation.webhook.kpack.io created
secret/webhook-certs created
deployment.apps/kpack-webhook created
serviceaccount/webhook created
role.rbac.authorization.k8s.io/kpack-webhook-certs-admin created
rolebinding.rbac.authorization.k8s.io/kpack-webhook-certs-admin-binding created
clusterrole.rbac.authorization.k8s.io/kpack-webhook-mutatingwebhookconfiguration-admin created
clusterrolebinding.rbac.authorization.k8s.io/kpack-webhook-certs-mutatingwebhookconfiguration-admin-binding created

then Contour as the ingress controller, along with metrics-server:

Getting Started with Contour

ubuntu@korifi:~$ kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

namespace/projectcontour created
serviceaccount/contour created
serviceaccount/envoy created
configmap/contour created
customresourcedefinition.apiextensions.k8s.io/contourconfigurations.projectcontour.io created
customresourcedefinition.apiextensions.k8s.io/contourdeployments.projectcontour.io created
customresourcedefinition.apiextensions.k8s.io/extensionservices.projectcontour.io created
customresourcedefinition.apiextensions.k8s.io/httpproxies.projectcontour.io created
customresourcedefinition.apiextensions.k8s.io/tlscertificatedelegations.projectcontour.io created
serviceaccount/contour-certgen created
rolebinding.rbac.authorization.k8s.io/contour created
role.rbac.authorization.k8s.io/contour-certgen created
job.batch/contour-certgen-v1.23.2 created
clusterrolebinding.rbac.authorization.k8s.io/contour created
rolebinding.rbac.authorization.k8s.io/contour-rolebinding created
clusterrole.rbac.authorization.k8s.io/contour created
role.rbac.authorization.k8s.io/contour created
service/contour created
service/envoy created
deployment.apps/contour created
daemonset.apps/envoy created

Releases · kubernetes-sigs/metrics-server

ubuntu@korifi:~$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.2/components.yaml

serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

ubuntu@korifi:~$ kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager pod/cert-manager-74d949c895-w6gzm 1/1 Running 0 13m
cert-manager pod/cert-manager-cainjector-d9bc5979d-jhr9m 1/1 Running 0 13m
cert-manager pod/cert-manager-webhook-84b7ddd796-xw878 1/1 Running 0 13m
kpack pod/kpack-controller-84cbbcdff6-nnhdn 1/1 Running 0 9m40s
kpack pod/kpack-webhook-56c6b59c4-9zvlb 1/1 Running 0 9m40s
kube-system pod/coredns-565d847f94-kst2l 1/1 Running 0 31m
kube-system pod/coredns-565d847f94-rv8pn 1/1 Running 0 31m
kube-system pod/etcd-kind-control-plane 1/1 Running 0 32m
kube-system pod/kindnet-275pd 1/1 Running 0 31m
kube-system pod/kube-apiserver-kind-control-plane 1/1 Running 0 32m
kube-system pod/kube-controller-manager-kind-control-plane 1/1 Running 0 32m
kube-system pod/kube-proxy-qw9fj 1/1 Running 0 31m
kube-system pod/kube-scheduler-kind-control-plane 1/1 Running 0 32m
kube-system pod/metrics-server-8ff8f88c6-69t9z 0/1 Running 0 4m21s
local-path-storage pod/local-path-provisioner-684f458cdd-f6zqf 1/1 Running 0 31m
metallb-system pod/controller-84d6d4db45-bph5x 1/1 Running 0 29m
metallb-system pod/speaker-pcl4p 1/1 Running 0 29m
projectcontour pod/contour-7b9b9cdfd6-h5jzg 1/1 Running 0 6m43s
projectcontour pod/contour-7b9b9cdfd6-nhbq2 1/1 Running 0 6m43s
projectcontour pod/contour-certgen-v1.23.2-hxh7k 0/1 Completed 0 6m43s
projectcontour pod/envoy-v4xk9 2/2 Running 0 6m43s
servicebinding-system pod/servicebinding-controller-manager-85f7498cf-xd7jc 2/2 Running 0 115s

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager service/cert-manager ClusterIP 10.96.153.49 <none> 9402/TCP 13m
cert-manager service/cert-manager-webhook ClusterIP 10.96.102.82 <none> 443/TCP 13m
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 32m
kpack service/kpack-webhook ClusterIP 10.96.227.201 <none> 443/TCP 9m40s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 32m
kube-system service/metrics-server ClusterIP 10.96.204.62 <none> 443/TCP 4m21s
metallb-system service/webhook-service ClusterIP 10.96.186.139 <none> 443/TCP 29m
projectcontour service/contour ClusterIP 10.96.138.58 <none> 8001/TCP 6m43s
projectcontour service/envoy LoadBalancer 10.96.126.44 172.18.255.200 80:30632/TCP,443:30730/TCP 6m43s
servicebinding-system service/servicebinding-controller-manager-metrics-service ClusterIP 10.96.147.189 <none> 8443/TCP 115s
servicebinding-system service/servicebinding-webhook-service ClusterIP 10.96.14.224 <none> 443/TCP 115s

I can now create the namespaces for Korifi:

ubuntu@korifi:~$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: $ROOT_NAMESPACE
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: $KORIFI_NAMESPACE
  labels:
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/enforce: restricted
EOF

namespace/cf created
namespace/korifi-system created

along with the secret for access to the Docker Hub registry:

ubuntu@korifi:~$ kubectl --namespace "$ROOT_NAMESPACE" create secret docker-registry image-registry-credentials \
    --docker-username="<USER>" \
    --docker-password="<DOCKER HUB TOKEN>" \
    --docker-server="https://index.docker.io/v1/"
secret/image-registry-credentials created
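Under the hood this is a standard `kubernetes.io/dockerconfigjson` secret: a JSON document holding a base64-encoded `username:password` entry per registry. A sketch with placeholder credentials:

```shell
# What kubectl stores in .dockerconfigjson (placeholder credentials, not real ones)
USER="user"; TOKEN="token"
auth="$(printf '%s' "$USER:$TOKEN" | base64)"
printf '{"auths":{"https://index.docker.io/v1/":{"auth":"%s"}}}\n' "$auth"
```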

Deploy Korifi via Helm:

ubuntu@korifi:~$ helm install korifi https://github.com/cloudfoundry/korifi/releases/download/v0.5.0/korifi-0.5.0.tgz \
    --namespace="$KORIFI_NAMESPACE" \
    --set=global.generateIngressCertificates=true \
    --set=global.rootNamespace="$ROOT_NAMESPACE" \
    --set=adminUserName="$ADMIN_USERNAME" \
    --set=api.apiServer.url="api.$BASE_DOMAIN" \
    --set=global.defaultAppDomainName="apps.$BASE_DOMAIN" \
    --set=global.containerRepositoryPrefix=index.docker.io/mcas/korifi/ \
    --set=kpack-image-builder.builderRepository=index.docker.io/mcas/korifi/kpack-builder

NAME: korifi
LAST DEPLOYED: Sun Dec 25 19:17:10 2022
NAMESPACE: korifi-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

ubuntu@korifi:~$ helm ls -A

NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
korifi korifi-system 1 2022-12-25 19:17:10.041417068 +0000 UTC deployed korifi-0.1.0 dev 

ubuntu@korifi:~$ kubectl get po,svc -A

NAMESPACE NAME READY STATUS RESTARTS AGE
cert-manager pod/cert-manager-74d949c895-w6gzm 1/1 Running 0 29m
cert-manager pod/cert-manager-cainjector-d9bc5979d-jhr9m 1/1 Running 0 29m
cert-manager pod/cert-manager-webhook-84b7ddd796-xw878 1/1 Running 0 29m
korifi-system pod/korifi-api-deployment-6b85594bfd-4htmz 1/1 Running 0 97s
korifi-system pod/korifi-controllers-controller-manager-58b4d68785-9wfkj 1/1 Running 0 97s
korifi-system pod/korifi-job-task-runner-controller-manager-fb844d47-4bwft 1/1 Running 0 97s
korifi-system pod/korifi-kpack-build-controller-manager-6cc448db9c-4dw9k 1/1 Running 0 97s
korifi-system pod/korifi-statefulset-runner-controller-manager-7cc8fdb476-h6pkd 1/1 Running 0 97s
kpack pod/kpack-controller-84cbbcdff6-nnhdn 1/1 Running 0 25m
kpack pod/kpack-webhook-56c6b59c4-9zvlb 1/1 Running 0 25m
kube-system pod/coredns-565d847f94-kst2l 1/1 Running 0 48m
kube-system pod/coredns-565d847f94-rv8pn 1/1 Running 0 48m
kube-system pod/etcd-kind-control-plane 1/1 Running 0 48m
kube-system pod/kindnet-275pd 1/1 Running 0 48m
kube-system pod/kube-apiserver-kind-control-plane 1/1 Running 0 48m
kube-system pod/kube-controller-manager-kind-control-plane 1/1 Running 0 48m
kube-system pod/kube-proxy-qw9fj 1/1 Running 0 48m
kube-system pod/kube-scheduler-kind-control-plane 1/1 Running 0 48m
kube-system pod/metrics-server-8ff8f88c6-69t9z 0/1 Running 0 20m
local-path-storage pod/local-path-provisioner-684f458cdd-f6zqf 1/1 Running 0 48m
metallb-system pod/controller-84d6d4db45-bph5x 1/1 Running 0 45m
metallb-system pod/speaker-pcl4p 1/1 Running 0 45m
projectcontour pod/contour-7b9b9cdfd6-h5jzg 1/1 Running 0 22m
projectcontour pod/contour-7b9b9cdfd6-nhbq2 1/1 Running 0 22m
projectcontour pod/contour-certgen-v1.23.2-hxh7k 0/1 Completed 0 22m
projectcontour pod/envoy-v4xk9 2/2 Running 0 22m
servicebinding-system pod/servicebinding-controller-manager-85f7498cf-xd7jc 2/2 Running 0 18m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cert-manager service/cert-manager ClusterIP 10.96.153.49 <none> 9402/TCP 29m
cert-manager service/cert-manager-webhook ClusterIP 10.96.102.82 <none> 443/TCP 29m
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 48m
korifi-system service/korifi-api-svc ClusterIP 10.96.157.135 <none> 443/TCP 97s
korifi-system service/korifi-controllers-webhook-service ClusterIP 10.96.106.22 <none> 443/TCP 97s
korifi-system service/korifi-kpack-build-webhook-service ClusterIP 10.96.202.25 <none> 443/TCP 97s
korifi-system service/korifi-statefulset-runner-webhook-service ClusterIP 10.96.232.1 <none> 443/TCP 97s
kpack service/kpack-webhook ClusterIP 10.96.227.201 <none> 443/TCP 25m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 48m
kube-system service/metrics-server ClusterIP 10.96.204.62 <none> 443/TCP 20m
metallb-system service/webhook-service ClusterIP 10.96.186.139 <none> 443/TCP 45m
projectcontour service/contour ClusterIP 10.96.138.58 <none> 8001/TCP 22m
projectcontour service/envoy LoadBalancer 10.96.126.44 172.18.255.200 80:30632/TCP,443:30730/TCP 22m
servicebinding-system service/servicebinding-controller-manager-metrics-service ClusterIP 10.96.147.189 <none> 8443/TCP 18m
servicebinding-system service/servicebinding-webhook-service ClusterIP 10.96.14.224 <none> 443/TCP 18m

At this point the Cloud Foundry client can be used, starting with the creation of an organization and a space:

ubuntu@korifi:~$ cf api https://api.$BASE_DOMAIN --skip-ssl-validation

Setting API endpoint to https://api.apps-172-18-255-200.nip.io...
OK

API endpoint: https://api.apps-172-18-255-200.nip.io
API version: 3.117.0+cf-k8s

Not logged in. Use 'cf login' or 'cf login --sso' to log in.
ubuntu@korifi:~$ cf login
API endpoint: https://api.apps-172-18-255-200.nip.io

1. kind-kind

Choose your Kubernetes authentication info (enter to skip): 1

Authenticating...
OK

Warning: The client certificate you provided for user authentication expires at 2023-12-25T18:30:11Z
which exceeds the recommended validity duration of 168h0m0s. Ask your platform provider to issue you a short-lived certificate credential or to configure your authentication to generate short-lived credentials automatically.
API endpoint: https://api.apps-172-18-255-200.nip.io
API version: 3.117.0+cf-k8s
user: kubernetes-admin
No org or space targeted, use 'cf target -o ORG -s SPACE'

ubuntu@korifi:~$ cf create-org org1

Creating org org1 as kubernetes-admin...
OK

TIP: Use 'cf target -o "org1"' to target new org

ubuntu@korifi:~$ cf create-space -o org1 space1

Warning: The client certificate you provided for user authentication expires at 2023-12-25T18:30:11Z
which exceeds the recommended validity duration of 168h0m0s. Ask your platform provider to issue you a short-lived certificate credential or to configure your authentication to generate short-lived credentials automatically.
Creating space space1 in org org1 as kubernetes-admin...
OK

Assigning role SpaceManager to user kubernetes-admin in org org1 / space space1 as kubernetes-admin...
OK

Assigning role SpaceDeveloper to user kubernetes-admin in org org1 / space space1 as kubernetes-admin...
OK

TIP: Use 'cf target -o "org1" -s "space1"' to target new space

ubuntu@korifi:~$ cf target -o org1

Warning: The client certificate you provided for user authentication expires at 2023-12-25T18:30:11Z
which exceeds the recommended validity duration of 168h0m0s. Ask your platform provider to issue you a short-lived certificate credential or to configure your authentication to generate short-lived credentials automatically.
API endpoint: https://api.apps-172-18-255-200.nip.io
API version: 3.117.0+cf-k8s
user: kubernetes-admin
org: org1
space: space1

Then deploy the FC (FranceConnect) service-provider demo:

ubuntu@korifi:~/fcdemo3$ cf push fcdemo3

Pushing app fcdemo3 to org org1 / space space1 as kubernetes-admin...
Packaging files to upload...
Uploading files...
 15.22 MiB / 15.22 MiB [==================================================================================================================================================] 100.00%

ubuntu@korifi:~/fcdemo3$ cf apps
Getting apps in org org1 / space space1 as kubernetes-admin...

name requested state processes routes
fcdemo3 started fcdemo3.apps.apps-172-18-255-200.nip.io
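The `cf push` above relies on defaults; app settings can also be pinned in a `manifest.yml` next to the sources. A minimal sketch (the memory and instance values are illustrative, not taken from this deployment):

```shell
# Hypothetical manifest.yml for the app pushed above
cat > manifest.yml <<'EOF'
applications:
- name: fcdemo3
  memory: 256M
  instances: 1
EOF
grep -c 'fcdemo3' manifest.yml   # 1
```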

ubuntu@korifi:~/fcdemo3$ curl http://fcdemo3.apps.apps-172-18-255-200.nip.io

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css" integrity="sha256-zIG416V1ynj3Wgju/scU80KAEWOsO5rRLfVyRDuOv7Q=" crossorigin="anonymous" />
    <title>Démonstrateur Fournisseur de Service</title>
</head>

<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
    <div class="navbar-start">
        <div class="navbar-brand">
            <a class="navbar-item" href="/">
                <img src="/img/fc_logo_v2.png" alt="Démonstrateur Fournisseur de Service" height="28">
            </a>
        </div>
        <a href="/" class="navbar-item">
            Home
        </a>
    </div>
    <div class="navbar-end">
        <div class="navbar-item">

                <div class="buttons">
                    <a class="button is-light" href="/login">Se connecter</a>
                </div>

        </div>
    </div>
</nav>

<section class="hero is-info is-medium">
    <div class="hero-body">
        <div class="container">
            <h1 class="title">
                Bienvenue sur le démonstrateur de fournisseur de service
            </h1>
            <h2 class="subtitle">
                Cliquez sur "se connecter" pour vous connecter via <strong>FranceConnect</strong>
            </h2>
        </div>
    </div>
</section>

<section class="section is-small">
    <div class="container">
        <h1 class="title">Récupérer vos données via FranceConnect</h1>

        <p>Pour récupérer vos données via <strong>FranceConnect</strong> cliquez sur le bouton ci-dessous</p>
    </div>
</section>
<section class="section is-small">
    <div class="container has-text-centered">
        <!-- FC btn -->
        <a href="/data" class="button is-link">Récupérer mes données via FranceConnect</a>
    </div>
</section>
<footer class="footer custom-content">
    <div class="content has-text-centered">
        <p>
            <a href="https://partenaires.franceconnect.gouv.fr/fcp/fournisseur-service"
               target="_blank"
               alt="lien vers la documentation France Connect">
                <strong>Documentation FranceConnect Partenaires</strong>
            </a>
        </p>
    </div>
</footer>
<!-- This script brings the FranceConnect tools modal which enable "disconnect", "see connection history" and "see FC FAQ" features -->
<script src="https://fcp.integ01.dev-franceconnect.fr/js/franceconnect.js"></script>

</body>
</html>
ubuntu@korifi:~/fcdemo3$ curl http://172.18.255.200
<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css" integrity="sha256-zIG416V1ynj3Wgju/scU80KAEWOsO5rRLfVyRDuOv7Q=" crossorigin="anonymous" />
    <title>Démonstrateur Fournisseur de Service</title>
</head>

<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
    <div class="navbar-start">
        <div class="navbar-brand">
            <a class="navbar-item" href="/">
                <img src="/img/fc_logo_v2.png" alt="Démonstrateur Fournisseur de Service" height="28">
            </a>
        </div>
        <a href="/" class="navbar-item">
            Home
        </a>
    </div>
    <div class="navbar-end">
        <div class="navbar-item">

                <div class="buttons">
                    <a class="button is-light" href="/login">Se connecter</a>
                </div>

        </div>
    </div>
</nav>

<section class="hero is-info is-medium">
    <div class="hero-body">
        <div class="container">
            <h1 class="title">
                Bienvenue sur le démonstrateur de fournisseur de service
            </h1>
            <h2 class="subtitle">
                Cliquez sur "se connecter" pour vous connecter via <strong>FranceConnect</strong>
            </h2>
        </div>
    </div>
</section>

<section class="section is-small">
    <div class="container">
        <h1 class="title">Récupérer vos données via FranceConnect</h1>

        <p>Pour récupérer vos données via <strong>FranceConnect</strong> cliquez sur le bouton ci-dessous</p>
    </div>
</section>
<section class="section is-small">
    <div class="container has-text-centered">
        <!-- FC btn -->
        <a href="/data" class="button is-link">Récupérer mes données via FranceConnect</a>
    </div>
</section>
<footer class="footer custom-content">
    <div class="content has-text-centered">
        <p>
            <a href="https://partenaires.franceconnect.gouv.fr/fcp/fournisseur-service"
               target="_blank"
               alt="lien vers la documentation France Connect">
                <strong>Documentation FranceConnect Partenaires</strong>
            </a>
        </p>
    </div>
</footer>
<!-- This script brings the FranceConnect tools modal which enable "disconnect", "see connection history" and "see FC FAQ" features -->
<script src="https://fcp.integ01.dev-franceconnect.fr/js/franceconnect.js"></script>

</body>
</html>

With this experimental implementation of the Cloud Foundry V3 API, Korifi is architecturally very different from "Cloud Foundry for VMs": most of the core CF components have been replaced by Kubernetes-native equivalents.

Korifi does not yet support all of the CF V3 APIs or filters, but it keeps evolving…
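
The V3 API that Korifi reimplements is plain HTTPS: the `cf` CLI simply issues authenticated requests against paths such as `/v3/apps`. As a minimal sketch, here is how such a request can be built by hand (the endpoint and bearer token below are placeholders assumed for illustration):

```python
import urllib.request

def build_v3_request(api_endpoint: str, path: str, token: str) -> urllib.request.Request:
    """Build an authenticated request against a Cloud Foundry V3 API path."""
    return urllib.request.Request(
        url=f"{api_endpoint}/v3/{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Example: list apps, roughly what `cf apps` queries under the hood
req = build_v3_request("https://api.apps-172-18-255-200.nip.io", "apps", "<token>")
print(req.full_url)  # https://api.apps-172-18-255-200.nip.io/v3/apps
```

Sending `req` with `urllib.request.urlopen` (once a real token is in hand) returns the JSON list of apps, the same data rendered by `cf apps` above.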

To be continued!
