Daniel Puig Gerarde

A Comprehensive Guide to Building Kubernetes Operators with Kubebuilder

Kubernetes Operators are a powerful way to automate the management of complex applications on Kubernetes. In this blog post, we will provide a hands-on guide for Kubernetes developers who want to learn how to create and use Operators. We will cover the basics of Operators, including how to define custom resources, create controllers, and manage reconciliation loops. We will also walk through an example Operator for MySQL.

Prerequisites
go version v1.20.0+
docker version 17.03+
kubectl version v1.11.3+
Access to a Kubernetes v1.11.3+ cluster
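
You can quickly verify your local tool versions with:

$ go version
$ docker version
$ kubectl version --client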

Install Kubebuilder

$ curl -L -o kubebuilder "https://go.kubebuilder.io/dl/latest/$(go env GOOS)/$(go env GOARCH)"
$ chmod +x kubebuilder
$ mv kubebuilder /usr/local/bin/

$ kubebuilder version 
Version: main.version{KubeBuilderVersion:"3.11.1", KubernetesVendor:"1.27.1", GitCommit:"1dc8ed95f7cc55fef3151f749d3d541bec3423c9", BuildDate:"2023-07-03T13:10:56Z", GoOs:"linux", GoArch:"amd64"}


Init/bootstrap the project

$ mkdir -p ~/ops/mysql-operator && cd ~/ops/mysql-operator
$ kubebuilder init --domain dpuigerarde.com --repo github.com/dpuig/mysql-operator

The kubebuilder init command bootstraps a new Kubernetes Operator project. The --domain flag sets the domain that is appended to each API group you create in the project (the default is my.domain), and the --repo flag sets the Go module path.
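
Among other scaffolding, this generates a PROJECT file recording the project's metadata. With the flags above, it should look roughly like this (exact contents vary by Kubebuilder version):

domain: dpuigerarde.com
layout:
- go.kubebuilder.io/v4
projectName: mysql-operator
repo: github.com/dpuig/mysql-operator
version: "3"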

Create an API

$ kubebuilder create api --group apps --version v1alpha1 --kind MySQLCluster

The kubebuilder create api command scaffolds a new API (custom resource definition) in a Kubernetes Operator project. The --group flag specifies the API group, which is combined with the project domain to form the full group name (here, apps.dpuigerarde.com); --version and --kind set the API version and the Kind name.

If you answer y to both Create Resource [y/n] and Create Controller [y/n], this will create the files

api
└── v1alpha1
    ├── groupversion_info.go
    ├── mysqlcluster_types.go
    └── zz_generated.deepcopy.go

where the API types are defined.
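
For orientation, groupversion_info.go registers the group and version with the scheme. The generated file (comments abridged) looks roughly like this:

package v1alpha1

import (
    "k8s.io/apimachinery/pkg/runtime/schema"
    "sigs.k8s.io/controller-runtime/pkg/scheme"
)

var (
    // GroupVersion is the group version used to register these objects
    GroupVersion = schema.GroupVersion{Group: "apps.dpuigerarde.com", Version: "v1alpha1"}

    // SchemeBuilder is used to add Go types to the GroupVersionKind scheme
    SchemeBuilder = &scheme.Builder{GroupVersion: GroupVersion}

    // AddToScheme adds the types in this group-version to the given scheme
    AddToScheme = SchemeBuilder.AddToScheme
)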

Also the files

internal
└── controller
    ├── mysqlcluster_controller.go
    └── suite_test.go

where the reconciliation business logic for this Kind (CRD) is implemented.

Custom Resource Definition (CRD)

The MySQLClusterSpec struct defines the schema for the MySQLCluster resource's desired state. It should include the following fields:

deploymentName: The name of the Deployment that runs the MySQL database.
replicas: The number of MySQL pods.
version: The MySQL version to use.
password: The default root password.

In the generated project, open api/v1alpha1/mysqlcluster_types.go.

Edit the MySQLClusterSpec and MySQLClusterStatus structs:

type MySQLClusterSpec struct {
    // INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
    // Important: Run "make" to regenerate code after modifying this file

    // Foo is an example field of MySQLCluster. Edit mysqlcluster_types.go to remove/update
    // Foo string `json:"foo,omitempty"`

    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Format:=string

    // the name of the deployment
    DeploymentName string `json:"deploymentName"`

    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Minimum=0

    // the number of replicas
    Replicas *int32 `json:"replicas"`

    // the version of mysql to use
    Version string `json:"version"`

    // +kubebuilder:validation:Required
    // +kubebuilder:validation:Format:=string

    // the default root password
    Password string `json:"password"`
}

type MySQLClusterStatus struct {
    // INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
    // Important: Run "make" to regenerate code after modifying this file
    // this is equal to deployment.status.availableReplicas
    // +optional
    AvailableReplicas int32 `json:"availableReplicas"`
}
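
For completeness, the same file also contains the scaffolded wrapper types that tie the spec and status together and register the Kind with the scheme; you normally leave these untouched. They look roughly like this:

//+kubebuilder:object:root=true
//+kubebuilder:subresource:status

// MySQLCluster is the Schema for the mysqlclusters API
type MySQLCluster struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`

    Spec   MySQLClusterSpec   `json:"spec,omitempty"`
    Status MySQLClusterStatus `json:"status,omitempty"`
}

//+kubebuilder:object:root=true

// MySQLClusterList contains a list of MySQLCluster
type MySQLClusterList struct {
    metav1.TypeMeta `json:",inline"`
    metav1.ListMeta `json:"metadata,omitempty"`
    Items           []MySQLCluster `json:"items"`
}

func init() {
    SchemeBuilder.Register(&MySQLCluster{}, &MySQLClusterList{})
}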

Then run the following to regenerate the manifests from the updated types:

$ make manifests
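
make manifests regenerates the CRD manifest at config/crd/bases/apps.dpuigerarde.com_mysqlclusters.yaml from the Go types and their validation markers. Abridged, it should look roughly like this:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mysqlclusters.apps.dpuigerarde.com
spec:
  group: apps.dpuigerarde.com
  names:
    kind: MySQLCluster
    listKind: MySQLClusterList
    plural: mysqlclusters
    singular: mysqlcluster
  scope: Namespaced
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          # ...properties generated from the Go types and markers...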

Implement the Controller Logic

Edit the generated controller file, which is located at internal/controller/mysqlcluster_controller.go

Soon we will dedicate a blog post to the details of the API types and especially the logic in the controllers. For now, in general terms, this controller is in charge of creating a Deployment that launches the MySQL database.

package controller

import (
    "context"

    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/client-go/tools/record"
    ctrl "sigs.k8s.io/controller-runtime"
    "sigs.k8s.io/controller-runtime/pkg/client"
    "sigs.k8s.io/controller-runtime/pkg/log"

    "github.com/go-logr/logr"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    samplecontrollerv1alpha1 "github.com/dpuig/mysql-operator/api/v1alpha1"
)

var (
    deploymentOwnerKey = ".metadata.controller"
    apiGVStr           = samplecontrollerv1alpha1.GroupVersion.String()
)

// MySQLClusterReconciler reconciles a MySQLCluster object
type MySQLClusterReconciler struct {
    client.Client
    Log      logr.Logger
    Scheme   *runtime.Scheme
    Recorder record.EventRecorder
}

//+kubebuilder:rbac:groups=apps.dpuigerarde.com,resources=mysqlclusters,verbs=get;list;watch;create;update;patch;delete
//+kubebuilder:rbac:groups=apps.dpuigerarde.com,resources=mysqlclusters/status,verbs=get;update;patch
//+kubebuilder:rbac:groups=apps.dpuigerarde.com,resources=mysqlclusters/finalizers,verbs=update

// Reconcile is part of the main kubernetes reconciliation loop which aims to
// move the current state of the cluster closer to the desired state.
// TODO(user): Modify the Reconcile function to compare the state specified by
// the MySQLCluster object against the actual cluster state, and then
// perform operations to make the cluster state reflect the state specified by
// the user.
//
// For more details, check Reconcile and its Result here:
// - https://pkg.go.dev/sigs.k8s.io/controller-runtime@v0.15.0/pkg/reconcile
func (r *MySQLClusterReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    log := r.Log.WithValues("mysqlCluster", req.NamespacedName)

    var mysqlCluster samplecontrollerv1alpha1.MySQLCluster
    log.Info("fetching MySQLCluster Resource")
    if err := r.Get(ctx, req.NamespacedName, &mysqlCluster); err != nil {
        log.Error(err, "unable to fetch MySQLCluster")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    if err := r.cleanupOwnedResources(ctx, log, &mysqlCluster); err != nil {
        log.Error(err, "failed to clean up old Deployment resources for this MySQLCluster")
        return ctrl.Result{}, err
    }

    // get deploymentName from mysqlCluster.Spec
    deploymentName := mysqlCluster.Spec.DeploymentName

    // define deployment template using deploymentName
    deploy := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{
            Name:      deploymentName,
            Namespace: req.Namespace,
        },
    }

    // Create or Update deployment object
    if _, err := ctrl.CreateOrUpdate(ctx, r.Client, deploy, func() error {
        replicas := int32(1)
        if mysqlCluster.Spec.Replicas != nil {
            replicas = *mysqlCluster.Spec.Replicas
        }
        deploy.Spec.Replicas = &replicas

        labels := map[string]string{
            "app":        "mysql",
            "controller": req.Name,
        }

        // set labels to spec.selector for our deployment
        if deploy.Spec.Selector == nil {
            deploy.Spec.Selector = &metav1.LabelSelector{MatchLabels: labels}
        }

        // set labels to template.objectMeta for our deployment
        if deploy.Spec.Template.ObjectMeta.Labels == nil {
            deploy.Spec.Template.ObjectMeta.Labels = labels
        }

        // set a container for our deployment
        containers := []corev1.Container{
            {
                Name:  "db",
                Image: "mysql:" + mysqlCluster.Spec.Version,
                Env: []corev1.EnvVar{
                    {
                        Name:  "MYSQL_ROOT_PASSWORD",
                        Value: mysqlCluster.Spec.Password,
                    },
                },
                Command: []string{"mysqld", "--user=root"},
                Args:    []string{"--default-authentication-plugin=mysql_native_password"},
                Ports: []corev1.ContainerPort{
                    {
                        Name:          "mysql",
                        ContainerPort: 3306,
                    },
                },
                VolumeMounts: []corev1.VolumeMount{
                    {
                        Name:      "mysql-persistent-storage",
                        MountPath: "/var/lib/mysql",
                    },
                },
                SecurityContext: &corev1.SecurityContext{
                    RunAsUser:  func() *int64 { i := int64(1001); return &i }(),
                    RunAsGroup: func() *int64 { i := int64(1001); return &i }(),
                },
            },
        }

        // set containers to template.spec.containers for our deployment
        if deploy.Spec.Template.Spec.Containers == nil {
            deploy.Spec.Template.Spec.Containers = containers
        }

        deploy.Spec.Strategy.Type = "Recreate"
        deploy.Spec.Template.Spec.Volumes = []corev1.Volume{
            {
                Name: "mysql-persistent-storage",
                VolumeSource: corev1.VolumeSource{
                    PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
                        ClaimName: "mysql-pv-claim",
                    },
                },
            },
        }

        deploy.Spec.Template.Spec.SecurityContext = &corev1.PodSecurityContext{
            FSGroup: func() *int64 { i := int64(1001); return &i }(),
        }

        // set the owner so that garbage collection can kick in
        if err := ctrl.SetControllerReference(&mysqlCluster, deploy, r.Scheme); err != nil {
            log.Error(err, "unable to set ownerReference from mysqlCluster to Deployment")
            return err
        }

        return nil
    }); err != nil {

        // error handling of ctrl.CreateOrUpdate
        log.Error(err, "unable to ensure deployment is correct")
        return ctrl.Result{}, err

    }

    // get deployment object from the in-memory cache
    var deployment appsv1.Deployment
    var deploymentNamespacedName = client.ObjectKey{Namespace: req.Namespace, Name: mysqlCluster.Spec.DeploymentName}
    if err := r.Get(ctx, deploymentNamespacedName, &deployment); err != nil {
        log.Error(err, "unable to fetch Deployment")
        return ctrl.Result{}, client.IgnoreNotFound(err)
    }

    // set mysqlCluster.status.AvailableReplicas from deployment
    availableReplicas := deployment.Status.AvailableReplicas
    if availableReplicas == mysqlCluster.Status.AvailableReplicas {
        return ctrl.Result{}, nil
    }
    mysqlCluster.Status.AvailableReplicas = availableReplicas

    // update mysqlCluster.status
    if err := r.Status().Update(ctx, &mysqlCluster); err != nil {
        log.Error(err, "unable to update mysqlCluster status")
        return ctrl.Result{}, err
    }

    // create event for updated mysqlCluster.status
    r.Recorder.Eventf(&mysqlCluster, corev1.EventTypeNormal, "Updated", "Update mysqlCluster.status.AvailableReplicas: %d", mysqlCluster.Status.AvailableReplicas)

    return ctrl.Result{}, nil
}

// SetupWithManager sets up the controller with the Manager.
func (r *MySQLClusterReconciler) SetupWithManager(mgr ctrl.Manager) error {
    ctx := context.Background()
    // add deploymentOwnerKey index to deployment object which MySQLCluster resource owns
    if err := mgr.GetFieldIndexer().IndexField(ctx, &appsv1.Deployment{}, deploymentOwnerKey, func(rawObj client.Object) []string {
        // grab the deployment object, extract the owner...
        deployment := rawObj.(*appsv1.Deployment)
        owner := metav1.GetControllerOf(deployment)
        if owner == nil {
            return nil
        }
        // ...make sure it's a MySQLCluster...
        if owner.APIVersion != apiGVStr || owner.Kind != "MySQLCluster" {
            return nil
        }

        // ...and if so, return it
        return []string{owner.Name}
    }); err != nil {
        return err
    }

    // define the watch targets: the MySQLCluster resource and the Deployments it owns
    return ctrl.NewControllerManagedBy(mgr).
        For(&samplecontrollerv1alpha1.MySQLCluster{}).
        Owns(&appsv1.Deployment{}).
        Complete(r)
}

// cleanupOwnedResources will delete any existing Deployment resources that
// were created for the given mysqlCluster that no longer match the
// mysqlCluster.spec.deploymentName field.
func (r *MySQLClusterReconciler) cleanupOwnedResources(ctx context.Context, log logr.Logger, mysqlCluster *samplecontrollerv1alpha1.MySQLCluster) error {
    log.Info("finding existing Deployments for Foo resource")

    // List all deployment resources owned by this mysqlCluster
    var deployments appsv1.DeploymentList
    if err := r.List(ctx, &deployments, client.InNamespace(mysqlCluster.Namespace), client.MatchingFields(map[string]string{deploymentOwnerKey: mysqlCluster.Name})); err != nil {
        return err
    }

    // Delete deployment if the deployment name doesn't match mysqlCluster.spec.deploymentName
    for _, deployment := range deployments.Items {
        if deployment.Name == mysqlCluster.Spec.DeploymentName {
            // If this deployment's name matches the one on the MySQLCluster resource
            // then do not delete it.
            continue
        }

        // Delete old deployment object which doesn't match mysqlCluster.spec.deploymentName
        if err := r.Delete(ctx, &deployment); err != nil {
            log.Error(err, "failed to delete Deployment resource")
            return err
        }

        log.Info("delete deployment resource: " + deployment.Name)
        r.Recorder.Eventf(mysqlCluster, corev1.EventTypeNormal, "Deleted", "Deleted deployment %q", deployment.Name)
    }

    return nil
}

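One detail worth noting: the default scaffold in cmd/main.go sets only Client and Scheme on the reconciler, while the controller above also uses Log and Recorder. A minimal sketch of the updated reconciler setup in cmd/main.go, assuming the standard scaffolded setupLog variable:

if err = (&controller.MySQLClusterReconciler{
    Client:   mgr.GetClient(),
    Log:      ctrl.Log.WithName("controllers").WithName("MySQLCluster"),
    Scheme:   mgr.GetScheme(),
    Recorder: mgr.GetEventRecorderFor("mysqlcluster-controller"),
}).SetupWithManager(mgr); err != nil {
    setupLog.Error(err, "unable to create controller", "controller", "MySQLCluster")
    os.Exit(1)
}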

Project Structure

.
├── api
│   └── v1alpha1
│       ├── groupversion_info.go
│       ├── mysqlcluster_types.go
│       └── zz_generated.deepcopy.go
├── bin
│   ├── controller-gen
│   └── kustomize
├── cmd
│   └── main.go
├── config
│   ├── crd
│   │   ├── bases
│   │   │   └── apps.dpuigerarde.com_mysqlclusters.yaml
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_mysqlclusters.yaml
│   │       └── webhook_in_mysqlclusters.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   └── manager_config_patch.yaml
│   ├── manager
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── mysqlcluster_editor_role.yaml
│   │   ├── mysqlcluster_viewer_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── role.yaml
│   │   └── service_account.yaml
│   └── samples
│       ├── apps_v1alpha1_mysqlcluster.yaml
│       └── kustomization.yaml
├── Dockerfile
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── internal
│   └── controller
│       ├── mysqlcluster_controller.go
│       └── suite_test.go
├── Makefile
├── mysql-pv.yaml
├── PROJECT
└── README.md


Run Operator Locally (For Development)

For development purposes, you may wish to run your operator locally against a remote cluster. This allows you to iterate more quickly during the development process.

  • Set the kubeconfig context:
$ export KUBECONFIG=<path-to-your-kubeconfig-file>
  • Install the CRDs into the cluster:
$ make install  
$ kubectl get crds 

NAME                                 CREATED AT
mysqlclusters.apps.dpuigerarde.com   2023-08-28T02:22:43Z

For the purposes of this example, we will create two complementary resources, a PersistentVolume and a PersistentVolumeClaim, in the file mysql-pv.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi

Apply

$ kubectl apply -f mysql-pv.yaml

$ kubectl get pv,pvc
NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                    STORAGECLASS   REASON   AGE
persistentvolume/mysql-pv-volume   20Gi       RWO            Retain           Bound    default/mysql-pv-claim   manual                  103m

NAME                                   STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/mysql-pv-claim   Bound    mysql-pv-volume   20Gi       RWO            manual         103m
  • Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
$ make run

Deploy Custom Resources

Make sure to update config/samples/apps_v1alpha1_mysqlcluster.yaml with the actual specification you'd like to use for your MySQLCluster resource.

apiVersion: apps.dpuigerarde.com/v1alpha1
kind: MySQLCluster
metadata:
  labels:
    app.kubernetes.io/name: mysqlcluster
    app.kubernetes.io/instance: mysqlcluster-sample
    app.kubernetes.io/part-of: mysql-operator
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/created-by: mysql-operator
  name: mysqlcluster-sample
spec:
  deploymentName: mysqlcluster-sample-deploy
  replicas: 1
  version: "5.6"
  password: example
$ kubectl apply -f config/samples/apps_v1alpha1_mysqlcluster.yaml

mysqlcluster.apps.dpuigerarde.com/mysqlcluster-sample created
$ kubectl get mysqlclusters
NAME                  AGE
mysqlcluster-sample   10m

At this point, your operator should detect the custom resource and run the reconcile loop, creating the MySQL Deployment as specified.
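
To see what the controller is doing, you can inspect the resource and the events the controller records (assuming the Recorder is wired up in cmd/main.go as sketched earlier):

$ kubectl describe mysqlcluster mysqlcluster-sample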

However, this example has a problem: as shown below, the MySQL pod crashes. I hope I can count on your help to solve it, and I promise to update this post soon with the solution. (The "Can't create test file" and "One can only use the --user switch if running as root" lines in the logs below look like good starting points.)

$ kubectl get deploy

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
mysqlcluster-sample-deploy   0/1     1            0           11m
$ kubectl get pods   

NAME                                         READY   STATUS             RESTARTS      AGE
mysqlcluster-sample-deploy-79c78b6c5-62jh5   0/1     CrashLoopBackOff   7 (42s ago)   11m
$ kubectl logs mysqlcluster-sample-deploy-79c78b6c5-62jh5 

2023-08-28 16:26:24 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2023-08-28 16:26:24 0 [Warning] Can't create test file /var/lib/mysql/mysqlcluster-sample-deploy-79c78b6c5-62jh5.lower-test
2023-08-28 16:26:24 0 [Note] mysqld (mysqld 5.6.51) starting as process 1 ...
2023-08-28 16:26:24 1 [Warning] Can't create test file /var/lib/mysql/mysqlcluster-sample-deploy-79c78b6c5-62jh5.lower-test
2023-08-28 16:26:24 1 [Warning] Can't create test file /var/lib/mysql/mysqlcluster-sample-deploy-79c78b6c5-62jh5.lower-test
2023-08-28 16:26:24 1 [Warning] One can only use the --user switch if running as root

2023-08-28 16:26:24 1 [Note] Plugin 'FEDERATED' is disabled.
mysqld: Table 'mysql.plugin' doesn't exist
2023-08-28 16:26:24 1 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
2023-08-28 16:26:24 1 [Note] InnoDB: Using atomics to ref count buffer pool pages
2023-08-28 16:26:24 1 [Note] InnoDB: The InnoDB memory heap is disabled
2023-08-28 16:26:24 1 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2023-08-28 16:26:24 1 [Note] InnoDB: Memory barrier is not used
2023-08-28 16:26:24 1 [Note] InnoDB: Compressed tables use zlib 1.2.11
2023-08-28 16:26:24 1 [Note] InnoDB: Using Linux native AIO
2023-08-28 16:26:24 1 [Note] InnoDB: Not using CPU crc32 instructions
2023-08-28 16:26:24 1 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2023-08-28 16:26:24 1 [Note] InnoDB: Completed initialization of buffer pool
2023-08-28 16:26:24 1 [ERROR] InnoDB: ./ibdata1 can't be opened in read-write mode
2023-08-28 16:26:24 1 [ERROR] InnoDB: The system tablespace must be writable!
2023-08-28 16:26:24 1 [ERROR] Plugin 'InnoDB' init function returned error.
2023-08-28 16:26:24 1 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2023-08-28 16:26:24 1 [ERROR] Unknown/unsupported storage engine: InnoDB
2023-08-28 16:26:24 1 [ERROR] Aborting

2023-08-28 16:26:24 1 [Note] Binlog end
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'partition'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_METRICS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMPMEM'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_CMP'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_LOCKS'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'INNODB_TRX'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'BLACKHOLE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'ARCHIVE'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'MRG_MYISAM'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'MyISAM'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'MEMORY'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'CSV'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'sha256_password'
2023-08-28 16:26:24 1 [Note] Shutting down plugin 'mysql_old_password'
