Verify that your environment meets the system requirements for Kubernetes.
Access Nodes
Access nodes run backups and other operations.
You must have at least 1 access node for Kubernetes. In environments that require high availability for Kubernetes data management activities, having at least 2 access nodes is recommended. A single access node can protect multiple Kubernetes clusters. Commvault automatically load balances across available access nodes, and restarts data management activities that are interrupted when an access node becomes unavailable.
For information, see Configuration for Kubernetes Access Nodes.
Placement of Access Nodes
The access node requires a low-latency connection to the Kubernetes control plane (api-server). Commvault expects latency between the access node and the Kubernetes API server of less than 1 millisecond. If round trip time (RTT) latency between the access node and Kubernetes API server exceeds 1 millisecond, then backups and other operations might perform sub-optimally.
For optimal performance, place the access node within LAN or MAN proximity to the clusters that it protects.
Commvault Packages for Access Nodes
Kubernetes access nodes must have the Virtual Server package installed.
Hardware for Access Nodes
Component | Requirements
---|---
Disk space | 100 GB
vCPUs | 
Operating Systems for Access Nodes
The following operating systems and architectures are supported for Kubernetes access nodes:
Operating system | Supported releases | Supported access node architectures
---|---|---
Linux | Amazon Linux 2023 AMI | 
Linux | Oracle Linux 8.6 | 
Linux | Red Hat Enterprise Linux 9.x, 8.x, 7.x (all GA releases are supported; see Red Hat Enterprise Linux Release Dates on the Red Hat customer portal) | 
Linux | Ubuntu 22.04 LTS, 20.04 LTS | 
Windows | Microsoft Windows Server 2022 x64 Editions (Standard, Datacenter, and Core) | x86 64-bit
Windows | Microsoft Windows Server 2019 x64 Editions (Standard, Datacenter, and Core) | x86 64-bit
Windows | Microsoft Windows Server 2016 x64 Editions (Standard, Datacenter, and Core) | x86 64-bit
Helm Chart Protection
If Helm is installed on your Kubernetes access nodes, Commvault automatically discovers, protects, and restores Helm-based applications and metadata.
Commvault supports the following protection operations for Helm-based applications:
- Full backup
- Incremental backup
- In-place recovery to the original cluster and namespace
Download the most recent Helm binary for your Kubernetes distribution from helm / helm on GitHub.
Requirements are as follows:
- The Helm binary must be installed in the system PATH of the Kubernetes access nodes.
- The following labels are required on applications that are deployed by Helm chart:
  - app.kubernetes.io/instance
  - app.kubernetes.io/managed-by
- You can disable backups of Helm charts.
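The requirements above can be checked with a short pre-flight script on the access node. This is a hedged sketch, not a Commvault utility: the label query at the end is commented out and must be adjusted to your own namespace.

```shell
# Sketch of an access-node pre-flight check: confirm that the Helm binary
# is on the system PATH. Degrades gracefully when Helm is absent.
if command -v helm >/dev/null 2>&1; then
  HELM_STATUS="found: $(command -v helm)"
else
  HELM_STATUS="not found in PATH"
fi
echo "helm $HELM_STATUS"

# To list Helm-deployed pods and confirm the required labels are present
# (illustrative; uncomment and set the namespace for your environment):
# kubectl get pods -n <namespace> -l app.kubernetes.io/managed-by=Helm --show-labels
```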
For information about restrictions and known issues related to helm chart data management, see Restrictions and Known Issues for Kubernetes.
Architectures
Commvault supports protection of Linux x86-64 (amd64 and arm64) based containerized applications running on Linux worker nodes.
Important
Commvault does not support protection of arm32v6, arm32v7, Windows AMD64 images or environments.
Commvault does not support protection of arm32v5, ppc64le, s390x, mips64le, riscv64, or i386 images or environments maintained by the Docker community. See Architectures other than amd64? in the Docker library.
Kubernetes Service Account
To protect Kubernetes data, Commvault requires a restricted or cluster-wide Kubernetes service account and a service account token.
The service account must have either a custom ClusterRole or the cluster-admin role.
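A service account that meets this requirement can be created with a manifest along the following lines. This is a sketch, not Commvault's documented procedure: the names (commvault-sa, commvault-sa-binding, commvault-sa-token) are placeholders, and cluster-admin is shown for brevity where a narrower custom ClusterRole would also satisfy the requirement.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: commvault-sa            # placeholder name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: commvault-sa-binding    # placeholder name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin           # or a custom ClusterRole with restricted permissions
subjects:
- kind: ServiceAccount
  name: commvault-sa
  namespace: kube-system
---
# Long-lived token Secret for the service account (Kubernetes 1.24 and later
# no longer auto-create these).
apiVersion: v1
kind: Secret
metadata:
  name: commvault-sa-token      # placeholder name
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: commvault-sa
type: kubernetes.io/service-account-token
```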
Kubernetes Releases
Commvault provides "best effort" support for the most recent versions of supported applications within 30 days of the generally available release dates of the applications.
In addition to the specific releases documented in this section, Commvault supports protection of the following:
- All CNCF-certified Kubernetes distributions that expose the kube-apiserver
- Kubernetes releases that are in active maintenance at the time of the Commvault release into General Availability (GA)
Vanilla Kubernetes
1.32.x, 1.31.x, 1.30.x, 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x, 1.23.x, 1.22.x, 1.21.x, 1.20.x
For more information on the Vanilla Kubernetes releases, see the Release History page on the Kubernetes website.
Amazon EKS
Amazon EKS, Amazon EKS on AWS Outposts, Amazon EKS Anywhere, Amazon EKS Distro 1.31.x, 1.30.x, 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x, 1.23.x, 1.22.x
Google Anthos
Anthos 1.18.x, 1.17.x, 1.16.x, 1.15.x, 1.14.x, 1.13.x, 1.12.x
For more information on versions, see the GKE Enterprise version and upgrade support page on the Google Cloud website.
Google Kubernetes Engine (GKE)
GKE 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x
For more information on versions, see the Schedule for release channels page on the Google Cloud website.
Microsoft Azure Kubernetes Service (AKS)
AKS 1.30.x, 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x
For more information on versions, see AKS Kubernetes release calendar on the Microsoft website.
Oracle Container Engine for Kubernetes (OKE)
OKE 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x
For more information on OKE versions, see the Release Calendar section of the Supported Versions of Kubernetes page on the Oracle Cloud Infrastructure Documentation website.
Red Hat OpenShift Container Platform (RHOCP)
The following RHOCP, Azure Red Hat OpenShift (RHOCP on Azure), and Red Hat OpenShift Service on AWS (ROSA) active releases are supported:
RHOCP 4.17, RHOCP 4.16, RHOCP 4.15, RHOCP 4.14, RHOCP 4.13, RHOCP 4.12, RHOCP 4.11
For more information on RHOCP releases, see the Red Hat OpenShift Container Platform Life Cycle Policy page on the Red Hat website.
VMware Tanzu
- Tanzu Kubernetes Grid Integrated Edition (TKGi) 1.19.x, 1.18.x, 1.17.x, 1.16.x, 1.15.x
- Tanzu Kubernetes Grid (TKG) v2.1.0–v2.5.0, and 1.6.0 or later
- vSphere with Tanzu: vSphere 8.0.0 A running Kubernetes 1.24, 1.25, 1.26, 1.27, 1.28, or later
Cloud-Native Storage
CSI Storage
Commvault supports protection of PersistentVolumeClaims residing on production CSI drivers. See Kubernetes production CSI drivers list in the Kubernetes documentation.
Commvault requires the production CSI driver to support the following features:
- Dynamic provisioning (for restores)
- Expansion (for restores)
- Snapshot (for backups)
PersistentVolumes must be provisioned and managed by a registered StorageClass and a corresponding VolumeSnapshotClass.
For CSI storage of the NFS (Network File System) type, you must configure the StorageClass with the root enabled flag, so that Commvault can restore files with any uid or gid (collected during backups). For information, see Configuring a Root Access Storage Class for Kubernetes.
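The StorageClass and VolumeSnapshotClass pairing described above can be sketched as follows. This is an illustrative example, not Commvault-mandated configuration: the object names are hypothetical, and ebs.csi.aws.com is used only because it appears in the validated-drivers table.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi                 # hypothetical name
provisioner: ebs.csi.aws.com    # a validated CSI driver
allowVolumeExpansion: true      # expansion support, required for restores
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ebs-snapclass           # hypothetical name
driver: ebs.csi.aws.com         # must match the StorageClass provisioner
deletionPolicy: Delete
```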
The following CSI drivers are validated by Commvault:
CSI plug-in | CSI driver | Snapshot verified
---|---|---
Commvault File System | io.hedvig.csi | Yes
AWS Elastic Block Storage | ebs.csi.aws.com | Yes
Azure Blob | blob.csi.azure.com | Not available
Azure Disk | disk.csi.azure.com | Yes
Azure File | file.csi.azure.com | Yes
Ceph FS | cephfs.csi.ceph.com | Yes
Ceph RBD | rbd.csi.ceph.com | Yes
GCE Persistent Disk | pd.csi.storage.gke.io | Yes
HPE | csi.hpe.com | Yes
NetApp | csi.trident.netapp.io | Yes
Oracle Cloud Infrastructure Block Volume | blockvolume.csi.oraclecloud.com | Not supported by the driver
Portworx | pxd.portworx.com | Yes
vSphere | csi.vsphere.vmware.com | Yes (vSphere CSI driver v2.5 and more recent versions are supported for CSI snapshots)
Volume Snapshot CRD Versions
Multiple versions of the CSI external-snapshotter are available for download. Commvault supports all released versions of the external-snapshotter and all API versions of the volume snapshot custom resource.
To determine the API version of your VolumeSnapshotClass CRD, use the following command:
kubectl describe volumesnapshotclass <volume-snapshot-class-name> | grep -i version
Example output:
API Version: snapshot.storage.k8s.io/v1
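An equivalent check queries the CRD itself for every API version it serves. This is an illustrative sketch that requires kubectl configured against your cluster; it degrades gracefully when kubectl is unavailable.

```shell
# List the API versions served by the VolumeSnapshotClass CRD.
# Requires kubectl access to the cluster; prints a fallback message otherwise.
if command -v kubectl >/dev/null 2>&1; then
  VERSIONS=$(kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io \
    -o jsonpath='{.spec.versions[*].name}' 2>/dev/null)
  MSG="Served versions: ${VERSIONS:-none (cluster unreachable?)}"
else
  MSG="kubectl not found in PATH"
fi
echo "$MSG"
```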
Kubernetes Worker Node Architectures
Commvault supports the protection of containers that run on x86 64-bit (Intel and AMD) and arm64 processor architectures.
Commvault does not support the protection of IBM S/390 (s390x) containers.
Network and Firewall Requirements for On-Premises Access Nodes
Commvault access nodes require that the following network connectivity and firewall dependencies are met.
Kubernetes API Server Endpoint
Commvault access nodes must be able to reach the Kubernetes API server endpoint, either directly or via a Commvault network gateway.
Commvault performs backup and recovery control and data plane transfers via the kube-apiserver. Commvault recommends no more than 1 millisecond round-trip time (RTT) latency between the access node and the kube-apiserver endpoint.
To determine your Kubernetes API server endpoint, run the following command:
kubectl cluster-info
Example output:
Kubernetes control plane is running at https://aks-qa-cluster-001-dns-ed45cbd8.hcp.eastus.azmk8s.io:443
CoreDNS is running at https://k8s-123-4.local.domain:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
For information about setting up a network gateway, see Setting Up the Commvault Network Gateway.
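The latency recommendation above can be spot-checked from the access node. The following is a hedged sketch: the endpoint URL is a placeholder taken from the example output above, and only the TCP connect time is measured.

```shell
# Measure TCP connect latency from the access node to the kube-apiserver
# endpoint reported by `kubectl cluster-info`. The URL is a placeholder.
API_SERVER="${API_SERVER:-https://k8s-123-4.local.domain:6443}"
if command -v curl >/dev/null 2>&1; then
  # %{time_connect} reports the TCP handshake time in seconds; -k skips TLS
  # verification because only latency matters for this check.
  RTT=$(curl -ks -o /dev/null -w '%{time_connect}' --connect-timeout 5 \
    "$API_SERVER" 2>/dev/null)
  RESULT="connect time to $API_SERVER: ${RTT:-unreachable}s"
else
  RESULT="curl not found in PATH"
fi
echo "$RESULT"
```

Commvault recommends that the reported connect time stay below 0.001 seconds (1 millisecond RTT) for optimal performance.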
Docker Hub
To perform backups and other operations for Kubernetes, Commvault pulls a Docker image for a temporary worker pod that performs data movement. Commvault uses the following images:
Commvault release | Docker Hub image
---|---
Platform Release 2023 and more recent releases | 
Platform Release 2022E–Feature Release 24 | 
Feature Release 20 | 
You can configure your Kubernetes clusters to pull container images from Docker Hub. Alternatively, for an air-gapped cluster, you can specify a private container registry that contains the image.
Commvault is committed to the security of your data and ensures that the Docker image that the Commvault software uses is scanned with Clair before each release and that no critical security vulnerabilities exist in the image.
Commvault does not support the use of custom Docker images for the Commvault worker pod.
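For air-gapped clusters, the image can be mirrored into the private registry ahead of time. The following is a hedged sketch of the general pattern: REGISTRY and IMAGE are placeholders, not the actual Commvault image names.

```shell
# Sketch: mirror a public image into a private registry for an air-gapped
# cluster. REGISTRY and IMAGE are placeholders; substitute the image for
# your Commvault release and your own registry address.
REGISTRY="registry.example.com:5000"
IMAGE="example/worker:latest"
if command -v docker >/dev/null 2>&1; then
  if docker pull "$IMAGE" >/dev/null 2>&1 \
     && docker tag "$IMAGE" "$REGISTRY/$IMAGE" \
     && docker push "$REGISTRY/$IMAGE" >/dev/null 2>&1; then
    MIRROR_MSG="mirrored $IMAGE to $REGISTRY"
  else
    MIRROR_MSG="mirror failed (expected outside a configured environment)"
  fi
else
  MIRROR_MSG="docker not found; use equivalent registry tooling"
fi
echo "$MIRROR_MSG"
```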
vsphereVolume Snapshot Support
Commvault access nodes must be able to contact the vCenter SDK endpoint URL on port 443 to authenticate and orchestrate the creation and deletion of VMDK snapshots and the creation of VMDK volumes.
Istio Service Mesh Support
Commvault supports protection of Kubernetes applications in clusters that use the Istio.io service mesh. Commvault supports all currently supported Istio releases, for all Kubernetes releases that are supported by Commvault.
DISCLAIMER
Certain third-party software and service releases (together, "Releases") may not be supported by Commvault. You are solely responsible for ensuring Commvault’s products and services are compatible with any such Releases.