Validating Your Kubernetes Environment

Using kubectl commands, validate that your environment is ready for Commvault backups.

Get the API Server URL

You need your kube-apiserver or control plane URL to add your cluster to Commvault.

  • Command to run:

    kubectl cluster-info
  • In the following example output, the URL is https://k8s-123-4.home.arpa:6443:

    Kubernetes control plane is running at https://k8s-123-4.home.arpa:6443

    CoreDNS is running at https://k8s-123-4.home.arpa:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Verify the Nodes Are Ready

  • Command to run:

    kubectl get nodes
  • Verify the following:

    • All nodes have a status of "Ready".

    • For high-availability deployments, multiple control plane nodes are listed.

    • The Kubernetes version is one that Commvault supports.
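
  • Example output from a hypothetical highly available cluster (the node names, ages, and versions are illustrative):

    NAME       STATUS   ROLES           AGE   VERSION
    k8s-cp-1   Ready    control-plane   76d   v1.23.4
    k8s-cp-2   Ready    control-plane   76d   v1.23.4
    k8s-cp-3   Ready    control-plane   76d   v1.23.4
    k8s-w-1    Ready    <none>          76d   v1.23.4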

Verify the CSI Drivers

  • Command to run:

    kubectl get csidrivers
  • Verify that all of the Container Storage Interface (CSI) drivers that your environment requires are listed.
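
  • Example output from a hypothetical cluster running the hostPath and Ceph RBD CSI drivers (the columns vary by Kubernetes version):

    NAME                         ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
    hostpath.csi.k8s.io          true             true             false            <unset>         false               Persistent   2m58s
    rook-ceph.rbd.csi.ceph.com   true             false            false            <unset>         false               Persistent   76d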

Verify the CSI Pods

  • Command to run:

    kubectl get pods -A | grep csi
  • Verify that all CSI pods are listed and are in the Running state.
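
  • Example output (the namespaces and pod names are illustrative):

    rook-ceph     csi-rbdplugin-2x7pl                          3/3     Running   0          76d
    rook-ceph     csi-rbdplugin-provisioner-6f5d88b7ff-9hq4x   6/6     Running   0          76d
    rook-ceph     csi-rbdplugin-provisioner-6f5d88b7ff-tl2vk   6/6     Running   0          76d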

Verify the Nodes Do Not Have Active Taints

  • Command to run:

    kubectl describe node node_name
  • Verify that the nodes with workloads that you want to protect do not have active taints that will prevent Commvault from scheduling temporary worker pods.
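
  • To list the taints for every node at once, you can use a jsonpath query (a convenience command; it is not required by Commvault):

    # Print each node name followed by its taints.
    # An empty second column means that the node has no taints.
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'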

Verify the StorageClasses Are CSI-Enabled

The PersistentVolumeClaims (PVCs) that you want to protect must be provisioned by a registered, CSI-enabled StorageClass. Use the following command to verify that the StorageClasses that provision the PVCs that you want to protect use the Container Storage Interface (CSI).

  • Command to run:

    kubectl get storageclasses
  • Verify the following:

    • At least one StorageClass is marked "(default)" next to its name.

    • The provisioners for the StorageClasses that you want to protect include ".csi." in their names.

  • Example output from a Vanilla Kubernetes cluster running the hostPath and Ceph RADOS block device (RBD) CSI drivers:

    Note: If the provisioner name does not contain ".csi.", then the volume plug-in/driver is not supported for protection by Commvault.

    NAME                        PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    csi-hostpath-sc             hostpath.csi.k8s.io             Delete          Immediate           true                   2m58s
    rook-ceph-block (default)   rook-ceph.rbd.csi.ceph.com      Delete          Immediate           true                   76d
    rook-ceph-delete-bucket     rook-ceph.ceph.rook.io/bucket   Delete          Immediate           false                  76d

    This cluster is also running a non-CSI based volume plug-in for provisioning object-based Rook-Ceph storage.

Verify the CSI Drivers Are Registered

  • Command to run:

    kubectl get csidrivers
  • Verify that the CSI drivers that are referenced in registered StorageClasses are registered.

Verify You Have a CSI Node That Can Handle Requests

  • Command to run:

    kubectl get csinodes
  • Verify that the environment includes at least one CSI node that can handle requests.
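
  • Example output (the node names are illustrative; the DRIVERS column shows the number of CSI drivers that are installed on each node):

    NAME       DRIVERS   AGE
    k8s-cp-1   1         76d
    k8s-w-1    2         76d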

Verify the Pods Are Not in an Error State

  • Command to run:

    kubectl get pods -A
  • Verify the following:

    • All pods are in the Running, Completed, or Terminated state.

    • No pods are in the Pending, Failed, CrashLoopBackOff, Evicted, or Unknown states.
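
  • To list only the pods that are not healthy, you can filter by pod phase. Note that CrashLoopBackOff pods report the Running phase, so also review the STATUS column of the full listing:

    # Show pods whose phase is neither Running nor Succeeded (Completed)
    kubectl get pods -A --field-selector=status.phase!=Running,status.phase!=Succeeded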

Verify You Have a VolumeSnapshotClass That Has a CSI Driver

  • Command to run:

    kubectl get volumesnapshotclass
  • Verify that the environment includes a VolumeSnapshotClass that has a CSI driver.
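
  • Example output, matching the VolumeSnapshotClass that is described in the next step (the AGE value is illustrative):

    NAME                      DRIVER                       DELETIONPOLICY   AGE
    csi-rbdplugin-snapclass   rook-ceph.rbd.csi.ceph.com   Delete           76d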

Verify the API Version Includes snapshot.storage.k8s.io

  • Command to run:

    kubectl describe volumesnapshotclass volumesnapshotclass_name
  • Verify that the API version contains snapshot.storage.k8s.io.

  • Example command:

    kubectl describe volumesnapshotclass csi-rbdplugin-snapclass
  • Example output:

    Name:             csi-rbdplugin-snapclass
    Namespace:
    Labels:           <none>
    Annotations:      <none>
    API Version:      snapshot.storage.k8s.io/v1
    Deletion Policy:  Delete
    Driver:           rook-ceph.rbd.csi.ceph.com
    Kind:             VolumeSnapshotClass
    Metadata:
      Creation Timestamp:  2022-03-14T23:05:36Z
      Generation:          1
      Managed Fields:
        API Version:  snapshot.storage.k8s.io/v1
        Fields Type:  FieldsV1
        fieldsV1:
          f:deletionPolicy:
          f:driver:
          f:parameters:
            .:
            f:clusterID:
            f:csi.storage.k8s.io/snapshotter-secret-name:
            f:csi.storage.k8s.io/snapshotter-secret-namespace:
        Manager:         kubectl-create
        Operation:       Update
        Time:            2022-03-14T23:05:36Z
      Resource Version:  67371
      UID:               153a1fac-783c-4b71-9d57-f0e161650100
    Parameters:
      Cluster ID:                                       rook-ceph
      csi.storage.k8s.io/snapshotter-secret-name:       rook-csi-rbd-provisioner
      csi.storage.k8s.io/snapshotter-secret-namespace:  rook-ceph
    Events:  <none>
  • Verify that the Driver value in the output matches a CSI driver that is registered in the environment.

Verify You Have a snapshot-controller Pod in the Running State

  • Command to run:

    kubectl get pods -A | grep -i snapshot
  • Verify that the environment includes at least one snapshot-controller pod that is in the Running state.

    Typically, the environment has at least two snapshot-controller pods running.
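
  • Example output (the namespace and the pod name suffixes depend on how the snapshot controller was deployed):

    kube-system   snapshot-controller-75fd799dc8-9xq4n   1/1     Running   0          76d
    kube-system   snapshot-controller-75fd799dc8-kd2bv   1/1     Running   0          76d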

Verify There Are No Orphan Objects Created by Commvault

  • Command to run:

    kubectl get pods,pvc,volumesnapshot -l cv-backup-admin= --all-namespaces
  • Verify that no objects are listed.
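
  • Example output when no orphaned objects exist:

    No resources found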

Verify the centos:8 Image Can Be Pulled

  • Command to run:

    docker pull centos:8
  • Verify that the centos:8 image can be pulled.

    To perform backups and other operations, Commvault pulls an image to create a temporary worker pod. For more information, see System Requirements for Kubernetes.
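
  • If your nodes use containerd or CRI-O instead of Docker, you can run a comparable check with crictl (assuming crictl is installed and configured on the node):

    # Pull the image through the container runtime interface
    crictl pull centos:8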