System Requirements for Kubernetes

Verify that your environment meets the system requirements for Kubernetes.

Access Nodes

Access nodes run backups and other operations.

You must have at least 1 access node for Kubernetes. In environments that require high availability for Kubernetes data management activities, having at least 2 access nodes is recommended. A single access node can protect multiple Kubernetes clusters. Commvault automatically load balances across available access nodes and restarts data management activities that are interrupted when an access node becomes unavailable.

For information, see Configuration for Kubernetes Access Nodes.

Access Node Placement

The access node requires a low-latency connection to the Kubernetes control plane (kube-apiserver). Commvault recommends round-trip time (RTT) latency of less than 1 millisecond between the access node and the Kubernetes API server; if latency exceeds 1 millisecond, backups and other operations might perform sub-optimally.

For optimal performance, place the access node within LAN or MAN proximity to the clusters that it protects.
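As a quick check, you can time a request to the API server's health endpoint from a candidate access node, assuming curl is available (the endpoint is a placeholder; substitute the address reported by kubectl cluster-info):

curl -k -s -o /dev/null -w "time_total: %{time_total}s\n" https://<api-server-endpoint>:6443/readyz

Note that the total time includes TLS negotiation, so it overstates raw RTT; values well under a few milliseconds indicate LAN-class proximity.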

Commvault Packages

Kubernetes access nodes must have the Virtual Server package installed.

Hardware

The access node must meet the following hardware requirements:

  • Disk space: 100 GB

  • vCPUs: 2, with 4 GB RAM

  • Architecture: x86 64-bit

Operating System

Commvault supports the Virtual Server Agent (VSA) for Kubernetes on the following operating systems and architectures for access nodes:

  • Linux: CentOS 8.x, 7.x (x86 64-bit). Commvault supports all actively supported CentOS releases. For more information, see the CentOS Manual on the CentOS website.

  • Linux: Red Hat Enterprise Linux 9.x, 8.x, 7.x (x86 64-bit). Commvault supports all 8.x and 7.x GA releases. For more information, see Red Hat Enterprise Linux Release Dates on the Red Hat customer portal.

  • Linux: Ubuntu 20.04 LTS (x86 64-bit)

  • Windows: Microsoft Windows Server 2022 x64 Editions (Standard, Datacenter, and Core) (x86 64-bit)

  • Windows: Microsoft Windows Server 2019 x64 Editions (Standard, Datacenter, and Core) (x86 64-bit)

  • Windows: Microsoft Windows Server 2016 x64 Editions (Standard, Datacenter, and Core) (x86 64-bit)

Helm Chart Protection

If Helm is installed on your Kubernetes access nodes, Commvault automatically discovers, protects, and restores Helm-based applications and metadata.

Commvault supports the following protection operations for Helm-based applications:

  • Full backup

  • Incremental backup

  • In-place recovery to the original cluster and namespace

Download the most recent Helm binary for your Kubernetes distribution from helm / helm on GitHub.

Requirements are as follows:

  • The Helm binary must be installed in the system PATH of the Kubernetes access nodes (see the verification commands after this list).

  • The following labels are required on applications that are deployed by Helm chart:

    • app.kubernetes.io/instance

    • app.kubernetes.io/managed-by
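To confirm these prerequisites on an access node, you can check that the binary resolves on the PATH and that deployed applications carry the expected labels (a quick sanity check that assumes Helm's standard labeling):

helm version
kubectl get pods --all-namespaces -l app.kubernetes.io/managed-by=Helm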

You can disable backups of Helm charts.

For information about restrictions and known issues related to Helm chart data management, see Restrictions and Known Issues for Kubernetes.

Architectures

Commvault supports protection of Linux x86-64 (amd64) based containerized applications running on Linux worker nodes.

Important

Commvault does not support protection of arm32v6, arm32v7, arm64v8, or Windows AMD64 images or environments.

Commvault does not support protection of arm32v5, ppc64le, s390x, mips64le, riscv64, or i386 images or environments maintained by the Docker community. See Architectures other than amd64? in the Docker library.

Kubernetes Service Account

To protect Kubernetes data, Commvault requires a restricted or cluster-wide Kubernetes service account and a service account token.

The service account must have either a custom ClusterRole or the cluster-admin role.
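For example, a cluster-wide service account and token can be created as follows (the account name and namespace are illustrative; kubectl create token requires Kubernetes 1.24 or later):

kubectl create serviceaccount cv-backup -n kube-system
kubectl create clusterrolebinding cv-backup-admin --clusterrole=cluster-admin --serviceaccount=kube-system:cv-backup
kubectl create token cv-backup -n kube-system --duration=8760h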

Kubernetes Releases

In addition to the specific releases documented in this section, Commvault supports protection of the following:

  • All CNCF-certified Kubernetes distributions that expose the kube-apiserver

  • Kubernetes releases that are in active maintenance at the time of the Commvault release into General Availability (GA)

Vanilla Kubernetes

1.31.x, 1.30.x, 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x, 1.23.x, 1.22.x, 1.21.x, 1.20.x

For more information on Vanilla Kubernetes releases, see the Release History page on the Kubernetes website.

Amazon EKS

Amazon EKS, Amazon EKS on AWS Outposts, Amazon EKS Anywhere, and Amazon EKS Distro: 1.31.x, 1.30.x, 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x, 1.23.x, 1.22.x, 1.21.x, 1.20.x

Google Anthos

Anthos 1.18.x, 1.17.x, 1.16.x, 1.15.x, 1.14.x, 1.13.x, 1.12.x, 1.11.x, 1.10.x, 1.9.x

Google Kubernetes Engine (GKE)

GKE 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x (Rapid channel), 1.23.x (Rapid channel), 1.22.x (Rapid channel, Regular channel, No channel)

Microsoft Azure Kubernetes Service (AKS)

AKS 1.30.x, 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x, 1.24.x, 1.23.x, 1.22.x

Oracle Container Engine for Kubernetes (OKE)

OKE 1.29.x, 1.28.x, 1.27.x, 1.26.x, 1.25.x

Red Hat OpenShift Container Platform (RHOCP)

The following RHOCP, Azure Red Hat OpenShift (RHOCP on Azure), and Red Hat OpenShift Service on AWS (ROSA) versions are supported:

RHOCP 4.17, RHOCP 4.16, RHOCP 4.15, RHOCP 4.14, RHOCP 4.13, RHOCP 4.12, RHOCP 4.11, RHOCP 4.10, RHOCP 4.9, RHOCP 4.8, RHOCP 4.7, RHOCP 4.6

For more information on RHOCP releases, see the Red Hat OpenShift Container Platform Life Cycle Policy page on the Red Hat website.

VMware Tanzu

  • Tanzu Kubernetes Grid Integrated Edition (TKGi) 1.19.x, 1.18.x, 1.17.x, 1.16.x, 1.15.x

  • Tanzu Kubernetes Grid (TKG) 2.1.0 through 2.5.0, and 1.6.0 or later

  • vSphere with Tanzu: vSphere 8.0.0A running Kubernetes 1.24, 1.25, 1.26, 1.27, 1.28, or later

Cloud-Native Storage

CSI Storage

Commvault supports protection of PersistentVolumeClaims residing on production CSI drivers. See the Kubernetes production CSI drivers list in the Kubernetes documentation.

Commvault requires the production CSI driver to support the following features:

  • Dynamic provisioning (for restores)

  • Expansion (for restores)

  • Snapshot (for backups)

PersistentVolumes must be provisioned and managed by a registered StorageClass and a corresponding VolumeSnapshotClass.
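You can verify that both objects are registered on the cluster before you run a backup (output varies by environment):

kubectl get storageclass
kubectl get volumesnapshotclass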

For CSI storage of the NFS (Network File System) type, you must configure the StorageClass with the root-enabled flag so that Commvault can restore files with their original uid and gid values (which are collected during backups). For information, see Configuring a Root Access Storage Class for Kubernetes.

The following CSI drivers are validated by Commvault, and all are verified for snapshot support:

  • Commvault Distributed Storage: io.hedvig.csi

  • AWS Elastic Block Storage: ebs.csi.aws.com

  • Azure Disk: disk.csi.azure.com

  • Azure File: file.csi.azure.com

  • Ceph FS: cephfs.csi.ceph.com

  • Ceph RBD: rbd.csi.ceph.com

  • GCE Persistent Disk: pd.csi.storage.gke.io

  • HPE: csi.hpe.com

  • NetApp: csi.trident.netapp.io

  • Portworx: pxd.openstorage.org

  • vSphere: csi.vsphere.vmware.com (vSphere CSI driver v2.5 and more recent versions are supported for CSI snapshots)

Volume Snapshot CRD Versions

Multiple versions of the CSI external-snapshotter are available for download. Commvault supports all released versions of the external-snapshotter and all API versions of the volume snapshot custom resource.

To determine the API version of your VolumeSnapshotClass CRD, use the following command:

kubectl describe volumesnapshotclass <volume-snapshot-class-name> | grep -i version

Example output:

API Version:     snapshot.storage.k8s.io/v1
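Alternatively, you can list every API version that the CRD serves directly (the CRD name below is the standard one registered by the external-snapshotter):

kubectl get crd volumesnapshotclasses.snapshot.storage.k8s.io -o jsonpath='{range .spec.versions[*]}{.name}{"\n"}{end}'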

In-Tree Storage

For snapshot-based protection, use CSI-based StorageClasses. For organizations that still use in-tree storage, volumes that are on the following StorageClass types can be protected and recovered:

  • awsElasticBlockStore

  • azureDisk

  • gcePersistentDisk

  • vSphereVolume

    • VMware Cloud Provider (vCP) must be installed and configured on your VMware cluster.

    • VMware Cloud Provider is supported for vSphere 6.7U3 and later.

      Support is being deprecated for vSphere releases prior to 7.0u2. For information, see commit details in kubernetes / kubernetes on GitHub.

    • The VMware Cloud Provider StorageClass must be registered on your Kubernetes cluster. For instructions, see vCP Provisioner in the Kubernetes documentation.

    • The StorageClass must have reclaimPolicy set to Delete (see the verification command after this list).

    • Commvault provides snapshot-based backup via direct integration with VMware vCenter.

    • The vsphereVolume plugin supports vSphere VMDK snapshot creation and volume creation for both VMFS and vSAN datastores.

      Note

      Commvault does not require that a VMware VSA hypervisor be created within Commvault.
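To confirm that a StorageClass uses the vCP provisioner and the required reclaim policy, you can query both fields with jsonpath (the StorageClass name is a placeholder):

kubectl get storageclass <storage-class-name> -o jsonpath='{.provisioner}{" "}{.reclaimPolicy}{"\n"}'

A correctly configured class reports kubernetes.io/vsphere-volume and Delete.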

Kubernetes Worker Node Architectures

Commvault supports the protection of containers that run on x86 64-bit processor architectures from Intel and AMD.

Commvault does not support the protection of the following:

  • Arm 64-bit containers

  • IBM S/390 containers

Network and Firewall Requirements for On-Premises Access Nodes

Commvault access nodes must meet the following network connectivity and firewall requirements.

Kubernetes API Server Endpoint

Commvault access nodes must be able to reach the Kubernetes API server endpoint, either directly or via a Commvault network gateway.

Commvault performs backup and recovery control and data plane transfers via the kube-apiserver. Commvault recommends no more than 1 millisecond round-trip time (RTT) latency between the access node and the kube-apiserver endpoint.

To determine your Kubernetes API server endpoint, run the following command:

kubectl cluster-info

Example output:

Kubernetes control plane is running at https://aks-qa-cluster-001-dns-ed45cbd8.hcp.eastus.azmk8s.io:443
CoreDNS is running at https://aks-qa-cluster-001-dns-ed45cbd8.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

For information about setting up a network gateway, see Setting Up the Commvault Network Gateway.

Docker Hub

To perform backups and other operations for Kubernetes, Commvault pulls a Docker image for a temporary worker pod that performs data movement. Commvault uses the following images:

  • Commvault 11.24 and more recent releases: centos:8

  • Commvault 11.20 through 11.23: debian:stretch-slim

You can configure your Kubernetes clusters to pull container images from the Docker Hub. Or, if you have an air-gapped cluster, you can specify a private container registry that contains the image.
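For an air-gapped cluster, you can mirror the required image into your private registry ahead of time (the registry hostname is a placeholder):

docker pull centos:8
docker tag centos:8 registry.example.com/library/centos:8
docker push registry.example.com/library/centos:8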

Commvault is committed to the security of your data. The Docker image that the Commvault software uses is scanned with Clair before each release to verify that no critical security vulnerabilities exist in the image.

Commvault does not support the use of custom Docker images for the Commvault worker pod.

vsphereVolume Snapshot Support

Commvault access nodes must be able to contact the vCenter SDK endpoint URL on port 443 to authenticate and orchestrate the creation and deletion of VMDK snapshots and the creation of VMDK volumes.

DISCLAIMER

Certain third-party software and service releases (together, "Releases") may not be supported by Commvault. You are solely responsible for ensuring Commvault’s products and services are compatible with any such Releases.
