The Commvault software supports most Kubernetes configurations.
General Requirements
Verify that your environment meets the system requirements for Kubernetes.
Your Kubernetes environment must include the following:
- At least 1 access node that has the Virtual Server Agent (VSA) installed. For faster backups and restores, use more access nodes. The Commvault software performs automatic load balancing across the access nodes. The access node can communicate with multiple Kubernetes endpoints.
- The access node must have access to the kube-apiserver endpoint (for example, https://kube-apiserver:kube-apiserver_port_number). The default API port is 443.
- For authentication, a Kubernetes service account with cluster-wide privileges.
- For OpenShift, one of the following, for authentication:
  - A user account that is used to access OpenShift, with the storage-admin or cluster-admin role assigned.
  - A service account that is used to access OpenShift, with the cluster-admin role assigned. If you are using a service account and service token, record those items so that you can enter them when you add the hypervisor.
- To perform Kubernetes backup and restore operations, you must assign the required permissions to the user or the user group.
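One way to satisfy the service-account requirement is sketched below. This is an illustrative manifest, not a Commvault-mandated one: the names `cv-backup-sa` and `cv-backup-sa-binding` and the `kube-system` namespace are placeholder assumptions, and the binding grants the built-in cluster-admin ClusterRole to provide cluster-wide privileges.

```yaml
# Hypothetical names; adjust to your environment.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cv-backup-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cv-backup-sa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin       # built-in role with cluster-wide privileges
subjects:
- kind: ServiceAccount
  name: cv-backup-sa
  namespace: kube-system
```

After applying the manifest, record the service account name and its token so that you can enter them when you add the hypervisor.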
Kubernetes Distributions
The Commvault software supports all CNCF-certified Kubernetes distributions, Kubernetes versions 1.14.x through 1.21.x.
The following distributions are validated by Commvault:
- Red Hat OpenShift Container Platform (RHOCP) 4.2.x through 4.8.x
- Google Anthos 1.7, 1.6, 1.5, and 1.4 (for GKE, AKS, EKS, and Red Hat OKE managed clusters)
- Azure Kubernetes Service (AKS)
- Google Kubernetes Engine (GKE)
Architectures
Commvault supports protection of Linux x86-64 (amd64) based containerized applications running on Linux worker nodes.
Important
Commvault does not support protection of arm32v6, arm32v7, arm64v8, or Windows AMD64 images or environments.
Commvault does not support protection of arm32v5, ppc64le, s390x, mips64le, riscv64, or i386 images or environments maintained by the Docker community. See Architectures other than amd64? in the Docker library.
Volume Snapshot CRD Versions
The Commvault software supports the following VolumeSnapshot CRD API versions:
- snapshot.storage.k8s.io/v1
- snapshot.storage.k8s.io/v1beta1
- snapshot.storage.k8s.io/v1alpha1
To determine the API version of your VolumeSnapshotClass CRD, run the following command:
kubectl explain volumesnapshotclass | grep VERSION
Cloud Native Storage
The Commvault software protects any storage volumes that are presented with an in-tree plugin or out-of-tree Container Storage Interface (CSI) plugin.
Volumes must be provisioned and managed by a registered StorageClass, and the storage must support creating snapshots of individual PersistentVolumeClaims.
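As an illustration of the StorageClass requirement, here is a minimal sketch. The class name `csi-block` is a placeholder and the provisioner shown is the AWS EBS CSI driver; substitute the driver that is relevant to your storage provider.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-block              # hypothetical name
provisioner: ebs.csi.aws.com   # replace with your CSI driver
volumeBindingMode: WaitForFirstConsumer
```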
Supported In-Tree Storage Volumes
You can back up and restore the following volumes that are presented by in-tree storage plugins:
- awsElasticBlockStore
- azureDisk
- azureFile
- cephfs
- cinder
- gcePersistentDisk
- Portworx
- rbd
Supported Out-of-Tree CSI Storage Volumes
Containers must reside on storage that has a registered Container Storage Interface (CSI) v1.2, 1.1, 1.0, or 0.3 driver with snapshot support.
Note
You must install the CSI driver that is relevant to your storage provider, and configure a Kubernetes storage class to use the CSI driver.
For a list of supported CSI drivers, see Kubernetes production CSI drivers list in the Kubernetes documentation.
The following CSI drivers are validated by Commvault:
| CSI plugin | CSI driver | Snapshot verified |
|---|---|---|
| Hedvig (For Hedvig, you must shut down the application to perform an in-place volume-level restore.) | io.hedvig.csi (v1.0.3) | Yes |
| AWS Elastic Block Storage | ebs.csi.aws.com | Yes |
| Azure Disk | disk.csi.azure.com | Yes |
| Azure File | file.csi.azure.com | Yes |
| Ceph RBD | rbd.csi.ceph.com | Yes |
| GCE Persistent Disk | pd.csi.storage.gke.io | Yes |
| HPE | csi.hpe.com | Yes |
| NetApp | csi.trident.netapp.io | Yes |
| Portworx | pxd.openstorage.org | Yes |
| vSphere | csi.vsphere.vmware.com | Not available |
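Snapshot support for a CSI driver generally also requires a VolumeSnapshotClass that references that driver. The following is a minimal sketch, assuming the AWS EBS driver and the snapshot.storage.k8s.io/v1 API; the class name is a placeholder.

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-ebs-snapclass   # hypothetical name
driver: ebs.csi.aws.com     # must match the CSI driver used by the StorageClass
deletionPolicy: Delete
```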
Resource Limits for Commvault Temporary Pods
The Commvault software spawns temporary pods during backup and restore operations. Commvault deploys one pod per persistent volume for backup, within the namespace of the volume that is being protected.
The temporary pods request compute resources of cpu: 5m and memory: 16Mi, with a limit of cpu: 500m and memory: 128Mi.
YAML Snippet of Commvault Pods
resources:
  limits:
    cpu: 500m
    memory: 128Mi
  requests:
    cpu: 5m
    memory: 16Mi
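To make these quantities concrete, here is a small illustrative sketch (not part of the Commvault software) that converts Kubernetes CPU and memory quantity strings such as 500m and 128Mi into base units, following the standard Kubernetes quantity notation (m = millicores, Mi = mebibytes):

```python
def parse_cpu(quantity: str) -> float:
    """Convert a Kubernetes CPU quantity to cores (e.g. '500m' -> 0.5)."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_memory(quantity: str) -> int:
    """Convert a Kubernetes memory quantity to bytes (e.g. '128Mi' -> 134217728)."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)

# The defaults used by the temporary pods:
print(parse_cpu("5m"), parse_cpu("500m"))           # 0.005 0.5 (cores)
print(parse_memory("16Mi"), parse_memory("128Mi"))  # 16777216 134217728 (bytes)
```

In other words, each temporary pod asks for only 0.5% of one CPU core and 16 MiB of memory, and can never consume more than half a core and 128 MiB.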
Customizing Resource Limits of Commvault Pods
To customize the resource limits for temporary pods, you must add additional settings to the access node.
| Additional setting | Category | Type | Value |
|---|---|---|---|
| | VirtualServer | String | Maximum CPU for the temporary pod. |
| | VirtualServer | String | Minimum CPU for the temporary pod. |
| | VirtualServer | String | Maximum memory for the temporary pod. |
| | VirtualServer | String | Minimum memory for the temporary pod. |