System Requirements for Kubernetes

Verify that your environment meets the system requirements for Kubernetes.

Access Nodes

Access nodes are virtual machines, cloud instances, or physical servers that run backups and other operations.

You must have at least 1 access node for Kubernetes. In environments that require high availability for Kubernetes data management activities, having at least 2 access nodes is recommended. A single access node can protect multiple Kubernetes clusters. Commvault automatically load balances across available access nodes, and restarts data management activities that are interrupted when an access node becomes unavailable.

For information, see Configuration for Kubernetes Access Nodes.

Access Node Placement

The access node requires a low-latency connection to the Kubernetes control plane (api-server). Commvault expects latency between the access node and the Kubernetes API server of less than 1 millisecond. If round trip time (RTT) latency between the access node and Kubernetes API server exceeds 1 millisecond, then backups and other operations might perform sub-optimally.

For optimal performance, place the access node within LAN or MAN proximity to the clusters that it protects.
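As a quick sanity check, a measured round-trip time can be compared against the 1 millisecond guidance with a short shell snippet. The sample values and the curl command in the comment are illustrative, not part of the product:

```shell
# check_rtt: compare a measured API-server round-trip time (in seconds)
# against the 1 ms guidance. Obtain a real measurement with, for example:
#   curl -ks -o /dev/null -w '%{time_total}' "https://<api-server>:6443/readyz"
check_rtt() {
  awk -v t="$1" 'BEGIN { print ((t + 0 <= 0.001) ? "OK" : "WARN: exceeds 1 ms") }'
}

check_rtt 0.0004   # sample value within guidance
check_rtt 0.0150   # sample value that would degrade backup performance
```

The first call prints `OK` and the second prints `WARN: exceeds 1 ms`.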

Commvault Packages

Kubernetes access nodes must have the Virtual Server package installed.

Hardware

Component | Requirements
--------- | ------------
Disk space | 100 MB
vCPUs | 2 vCPUs with 4 GB RAM, x86 64-bit

Operating System

Commvault supports the Virtual Server Agent (VSA) for Kubernetes on the following operating systems and access node architectures:

Operating system | Supported releases | Supported access node architectures
---------------- | ------------------ | -----------------------------------
Linux | CentOS 8.x, 7.x. Commvault supports all actively supported CentOS releases. For more information, see CentOS Manual on the CentOS website. | x86 64-bit
Linux | Red Hat Enterprise Linux 8.x, 7.x. Commvault supports all 8.x and 7.x GA releases. For more information, see Red Hat Enterprise Linux Release Dates on the Red Hat customer portal. | x86 64-bit
Windows | Microsoft Windows Server 2019 x64 Editions (Standard, Datacenter, and Core) | x86 64-bit
Windows | Microsoft Windows Server 2016 x64 Editions (Standard, Datacenter, and Core) | x86 64-bit

Helm Chart Protection

If Helm is installed on your Kubernetes access nodes, Commvault automatically discovers, protects, and restores Helm-based applications and metadata.

Commvault supports the following protection operations for Helm-based applications:

  • Full backup

  • Incremental backup

  • In-place recovery to the original cluster and namespace

Download the most recent Helm binary for your Kubernetes distribution from helm / helm on GitHub.

Requirements are as follows:

  • The Helm binary must be installed in the system PATH of the Kubernetes access nodes.

  • The following labels are required on applications that are deployed by Helm charts:

    • app.kubernetes.io/instance

    • app.kubernetes.io/managed-by

You can disable backups of Helm charts.

For information about restrictions and known issues related to helm chart data management, see Restrictions and Known Issues for Kubernetes.
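The PATH and label requirements above can be spot-checked from a shell on the access node. The release name in the sample manifest below is hypothetical, and the live-cluster label query is shown only as a comment:

```shell
# 1. Verify that the Helm binary is on the system PATH.
if command -v helm >/dev/null 2>&1; then
  echo "helm found: $(command -v helm)"
else
  echo "helm not found in PATH"
fi

# 2. The labels Helm stamps on deployed resources. Against a live cluster,
#    Helm-managed applications could be listed with, for example:
#      kubectl get all -A -l app.kubernetes.io/managed-by=Helm
cat <<'EOF' | grep -cE 'app\.kubernetes\.io/(instance|managed-by)'
metadata:
  labels:
    app.kubernetes.io/instance: my-release   # hypothetical release name
    app.kubernetes.io/managed-by: Helm
EOF
```

The final grep counts the two required label lines in the sample metadata.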

Kubernetes Service Account

To protect Kubernetes data, Commvault requires a restricted or cluster-wide Kubernetes service account and a service account token.

The service account must have either a custom ClusterRole or the cluster-admin role.
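As an illustration, a cluster-wide service account bound to the cluster-admin role can be created from a manifest like the one below. The account name, binding name, and namespace are placeholders; review them, then pipe the printed manifest into `kubectl apply -f -` against your cluster:

```shell
# Print a minimal ServiceAccount plus a cluster-admin ClusterRoleBinding.
MANIFEST='apiVersion: v1
kind: ServiceAccount
metadata:
  name: cv-backup-sa            # hypothetical account name
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cv-backup-sa-admin      # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: cv-backup-sa
  namespace: kube-system'
printf '%s\n' "$MANIFEST"
```

A restricted account would instead bind a custom ClusterRole that grants only the permissions Commvault needs.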

Kubernetes Release Supportability

In addition to the specific releases documented in this section, Commvault supports protection of the following:

  • All CNCF-certified Kubernetes distributions that are listed in the Platforms category and that expose the kube-apiserver

  • Kubernetes releases that are in active maintenance at the time of the Commvault release into General Availability (GA)

Vanilla Kubernetes: Active Maintenance Releases

The following Vanilla Kubernetes releases, which are actively maintained by the Kubernetes Project, are supported:

Commvault release | Kubernetes 1.24 | Kubernetes 1.23 | Kubernetes 1.22 | Kubernetes 1.21
----------------- | --------------- | --------------- | --------------- | ---------------
Commvault Platform Release 2022E | Yes | Yes | Yes | --
Feature Release 11.26 | -- | Yes | Yes | Yes
Feature Release 11.25 | -- | -- | Yes | Yes
Feature Release 11.24 | -- | -- | -- | Yes
Feature Release 11.23 | -- | -- | -- | --
Feature Release 11.20 | -- | -- | -- | --

Vanilla Kubernetes: Non-Active Releases

The following Vanilla Kubernetes releases, which are not actively maintained by the Kubernetes Project, are supported:

Commvault release | Kubernetes 1.20 | Kubernetes 1.19 | Kubernetes 1.18 | Kubernetes 1.17 | Kubernetes 1.16 | Kubernetes 1.15 | Kubernetes 1.14
----------------- | --------------- | --------------- | --------------- | --------------- | --------------- | --------------- | ---------------
Commvault Platform Release 2022E | -- | -- | -- | -- | -- | -- | --
Feature Release 11.26 | Yes | -- | -- | -- | -- | -- | --
Feature Release 11.25 | Yes | Yes | -- | -- | -- | -- | --
Feature Release 11.24 | Yes | Yes | -- | -- | -- | -- | --
Feature Release 11.23 | -- | -- | Yes | Yes | -- | -- | --
Feature Release 11.20 | -- | -- | Yes | Yes | Yes | Yes | Yes

The following managed Kubernetes services and distributions are also supported:

Kubernetes distribution | Supported releases
----------------------- | ------------------
Amazon EKS | Amazon EKS, Amazon EKS on AWS Outposts, Amazon EKS Anywhere, Amazon EKS Distro 1.22.x, 1.21.x, 1.20.x
Google Anthos | Anthos 1.11.x, 1.10.x, 1.9.x
Google Kubernetes Engine (GKE) | GKE 1.24.x (Rapid channel), 1.23 (Rapid channel), 1.22 (Rapid channel, Regular channel, No channel)
Microsoft Azure Kubernetes Service (AKS) | AKS 1.24.x, 1.23.x, 1.22.x

Red Hat OpenShift Container Platform (RHOCP): Active Maintenance Releases

The following RHOCP releases, which are actively maintained by Red Hat, are supported:

Commvault release | RHOCP 4.10 | RHOCP 4.9 | RHOCP 4.8 | RHOCP 4.7 | RHOCP 4.6 EUS
----------------- | ---------- | --------- | --------- | --------- | -------------
Commvault Platform Release 2022E | Yes | Yes | Yes | Yes | Yes
Feature Release 11.26 | -- | Yes | Yes | Yes | --
Feature Release 11.25 | -- | -- | -- | Yes | --
Feature Release 11.24 | -- | -- | -- | Yes | --
Feature Release 11.23 | -- | -- | -- | -- | --
Feature Release 11.20 | -- | -- | -- | -- | --

Red Hat OpenShift Container Platform (RHOCP): Non-Active Releases

The following RHOCP releases, which are not actively maintained by Red Hat, are supported:

Commvault release | RHOCP 4.6 EUS | RHOCP 4.6 | RHOCP 4.5 | RHOCP 4.4 | RHOCP 4.3 | RHOCP 4.2 | RHOCP 4.1 | RHOCP 4.0 | RHOCP 3.x
----------------- | ------------- | --------- | --------- | --------- | --------- | --------- | --------- | --------- | ---------
Commvault Platform Release 2022E | -- | -- | -- | -- | -- | -- | -- | -- | --
Feature Release 11.26 | Yes | -- | -- | -- | -- | -- | -- | -- | Yes
Feature Release 11.25 | Yes | Yes | -- | -- | -- | -- | -- | -- | Yes
Feature Release 11.24 | Yes | Yes | Yes | -- | -- | -- | -- | -- | Yes
Feature Release 11.23 | Yes | Yes | Yes | -- | -- | -- | -- | -- | Yes
Feature Release 11.20 | -- | -- | -- | Yes | Yes | Yes | Yes | Yes | Yes

End-of-Life Kubernetes Releases

To protect end-of-life releases of Kubernetes, use a long-term support (LTS) release of Commvault. For information, see Platform Release Schedule and Lifecycles. The following versions are supported:

Commvault LTS release | Kubernetes release
--------------------- | ------------------
Commvault Platform Release 2022E | Kubernetes 1.21 (End of Life 2022-06-28)
Feature Release 11.24 | Kubernetes 1.20 (End of Life 2022-02-28), Kubernetes 1.19 (End of Life 2021-10-28)
Feature Release 11.20 | Kubernetes 1.18 (End of Life 2021-06-18), Kubernetes 1.17 (End of Life 2021-01-13), Kubernetes 1.16 (End of Life 2020-09-02), Kubernetes 1.15 (End of Life 2020-05-06), Kubernetes 1.14 (End of Life 2019-12-11)

Cloud-Native Storage

CSI Storage

Commvault supports protection of PersistentVolumeClaims residing on production CSI drivers. See Kubernetes production CSI drivers list in the Kubernetes documentation.

Commvault requires the production CSI driver to support the following features:

  • Dynamic provisioning (for restores)

  • Expansion (for restores)

  • Snapshot (for backups)

PersistentVolumes must be provisioned and managed by a registered StorageClass and a corresponding VolumeSnapshotClass.
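For illustration, a VolumeSnapshotClass matching a StorageClass backed by the AWS EBS CSI driver might look like the following. The class name is a placeholder, and the `driver` value must be changed to the CSI driver that provisions your StorageClass; pipe the printed manifest into `kubectl apply -f -`:

```shell
# Print a minimal VolumeSnapshotClass for an EBS-backed StorageClass.
SNAPCLASS='apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass           # hypothetical name
driver: ebs.csi.aws.com         # must match the provisioner of your StorageClass
deletionPolicy: Delete'
printf '%s\n' "$SNAPCLASS"
```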

Commvault has validated the following CSI drivers during development and testing.

CSI plug-in

CSI driver

Snapshot verified

Commvault Distributed Storage

io.hedvig.csi

Yes

AWS Elastic Block Storage

ebs.csi.aws.com

Yes

Azure Disk

disk.csi.azure.com

Yes

Azure File

file.csi.azure.com

Yes

Ceph RBD

rbd.csi.ceph.com

Yes

GCE Persistent Disk

pd.csi.storage.gke.io

Yes

HPE

csi.hpe.com

Yes

NetApp

csi.trident.netapp.io

Yes

Portworx

pxd.openstorage.org

Yes

Volume Snapshot CRD Versions

Multiple versions of the CSI external-snapshotter are available for download. Commvault supports all released versions of the external-snapshotter and all API versions of the volume snapshot custom resource.

To determine the API version of your VolumeSnapshotClass CRD, use the following command:

kubectl describe volumesnapshotclass <volume-snapshot-class-name> | grep -i version

Example output:

API Version:     snapshot.storage.k8s.io/v1
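The version token alone can be extracted from that output with sed; the echoed line below reproduces the example output:

```shell
# Strip everything up to the API group, leaving just the version (v1, v1beta1, ...).
echo 'API Version:     snapshot.storage.k8s.io/v1' |
  sed -n 's#.*snapshot\.storage\.k8s\.io/##p'
```

This prints `v1`.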

In-Tree Storage

For snapshot-based protection, use CSI-based StorageClasses. For organizations that still use in-tree storage, volumes that are on the following StorageClass types can be protected and recovered:

  • awsElasticBlockStore

  • azureDisk

  • gcePersistentDisk

  • vSphereVolume

    • VMware Cloud Provider (vCP) must be installed and configured on your VMware cluster.

    • VMware Cloud Provider is supported for vSphere 6.7U3 and later.

      Support is being deprecated for vSphere releases prior to 7.0u2. For information, see commit details in kubernetes / kubernetes on GitHub.

    • The VMware Cloud Provider StorageClass must be registered on your Kubernetes cluster. For instructions, see vCP Provisioner in the Kubernetes documentation.

    • The StorageClass must have ReclaimPolicy set to Delete.

    • Commvault provides snapshot-based backup via direct integration with VMware vCenter.

    • The vsphereVolume plugin supports vSphere VMDK snapshot creation and volume creation for both VMFS and VSAN datastores.

      Note: Commvault does not require a VMware VSA hypervisor to be created within Commvault.

Network and Firewall Requirements for On-Premises Access Nodes

Commvault access nodes require that the following network connectivity and firewall dependencies are met.

Kubernetes API Server Endpoint

Commvault access nodes must be able to reach the Kubernetes API server endpoint, either directly or via a Commvault network gateway.

Commvault performs backup and recovery control and data plane transfers via the kube-apiserver. Commvault requires no more than 1 millisecond round-trip time (RTT) latency between the access node and the kube-apiserver endpoint.

To determine your Kubernetes API server endpoint, run the following command:

kubectl cluster-info

Example output:

Kubernetes control plane is running at https://aks-qa-cluster-001-dns-ed45cbd8.hcp.eastus.azmk8s.io:443

CoreDNS is running at https://k8s-123-4.local.domain:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
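The API server endpoint can be isolated from that output with sed; the sample line below reproduces the control-plane line from the example output:

```shell
# Extract the https endpoint from a `kubectl cluster-info` line.
line='Kubernetes control plane is running at https://aks-qa-cluster-001-dns-ed45cbd8.hcp.eastus.azmk8s.io:443'
printf '%s\n' "$line" | sed -n 's#.*\(https://[^ ]*\).*#\1#p'
```

This prints `https://aks-qa-cluster-001-dns-ed45cbd8.hcp.eastus.azmk8s.io:443`, the endpoint that the access node must be able to reach.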

For information about setting up a network gateway, see Setting Up the Commvault Network Gateway.

Docker Hub

To perform backups and other operations for Kubernetes, Commvault pulls a Docker image for a temporary worker pod that performs data movement. Commvault uses the following images:

  • Commvault 11.24 and more recent releases: centos:8

  • Commvault 11.23–11.20: debian:stretch-slim

You can configure your Kubernetes clusters to pull container images from Docker Hub. Or, if you have an air-gapped cluster, you can specify a private container registry that contains the image.
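For an air-gapped cluster, the worker-pod image can be mirrored into the private registry ahead of time. The snippet below only prints the docker commands to run; the registry host is a placeholder for your own registry:

```shell
SRC=centos:8                          # worker image for Commvault 11.24 and later
REG=registry.example.internal:5000    # placeholder for your private registry
for cmd in "docker pull ${SRC}" \
           "docker tag ${SRC} ${REG}/${SRC}" \
           "docker push ${REG}/${SRC}"; do
  echo "$cmd"
done
```

For Commvault 11.20 through 11.23, substitute `debian:stretch-slim` for the source image.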

Commvault is committed to the security of your data. The Docker image that the Commvault software uses is scanned with Clair before each release to ensure that no critical security vulnerabilities exist in the image.

Commvault does not support the use of custom Docker images for the Commvault worker pod.

vsphereVolume Snapshot Support

Commvault access nodes must be able to contact the vCenter SDK endpoint URL on port 443 to authenticate and orchestrate the creation and deletion of VMDK snapshots and the creation of VMDK volumes.

DISCLAIMER

Certain third-party software and service releases (together, "Releases") may not be supported by Commvault. You are solely responsible for ensuring Commvault’s products and services are compatible with any such Releases.