Disk performance statistics for Commvault HyperScale X nodes in a storage pool can be obtained using the cv_disk_perf.py script.
The tool runs I/O directly on the physical disks using the mount points that are purposed for data and metadata on the storage nodes. The script uses an I/O size of 10 GB or 40% of the free space available on the disk, whichever is lower. If the free space available on a disk is less than 5% of the total physical disk size, the tool is not run on that disk. The tool runs random read and write I/O.
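The sizing rule can be illustrated with a short Python sketch. The thresholds (10 GB, 40% of free space, 5% minimum free space) come from the description above; the function itself is illustrative and is not part of cv_disk_perf.py, and the 10 GB value is treated as GiB here for simplicity.
# Illustrative sketch of the per-disk sizing rule described above; not part of cv_disk_perf.py.
import shutil

TEN_GB = 10 * 1024 ** 3  # 10 GB, treated as GiB here for simplicity

def io_size_for_mount(mount_point):
    """Return the I/O size in bytes for a mount point, or None to skip the disk."""
    usage = shutil.disk_usage(mount_point)
    if usage.free < 0.05 * usage.total:
        return None  # less than 5% free space: the tool skips this disk
    return min(TEN_GB, int(0.4 * usage.free))  # 10 GB or 40% of free space, whichever is lower

print(io_size_for_mount("/"))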
The script must be executed from one of the nodes in the storage pool. The script automatically identifies the other nodes in the storage pool and runs the tool on all the nodes and the intended disks simultaneously.
Before You Begin
To obtain accurate results, run the disk performance tools when no backup jobs are running to the storage pool.
Procedure
-
Log on to any one of the nodes in the storage pool and navigate to the following folder:
# cd /opt/commvault/MediaAgent
-
Run the following command:
python3 cv_disk_perf.py <option>
<option> can be one of the following:
-
CVDiskPerf
to run the test using the Commvault disk performance tool.
Example:
python3 cv_disk_perf.py CVDiskPerf
-
fio
to run the test using the Linux native fio command.
Example:
python3 cv_disk_perf.py fio
Caution
Do not run the CVDiskPerf or fio commands independently on the nodes while the script is running.
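Optionally, before starting the script you can confirm that no CVDiskPerf or fio processes are already running on the node. A minimal Python sketch, assuming the processes appear in the process table under the names CVDiskPerf and fio (this helper is not part of cv_disk_perf.py):
# Minimal sketch: warn if CVDiskPerf or fio processes are already running on this node.
import subprocess

def running_instances(names=("CVDiskPerf", "fio")):
    found = []
    for name in names:
        # pgrep -x exits with a non-zero status when no matching process exists
        result = subprocess.run(["pgrep", "-x", name], capture_output=True)
        if result.returncode == 0:
            found.append(name)
    return found

busy = running_instances()
if busy:
    print("Already running, wait before starting the script:", ", ".join(busy))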
Result
Note
The run time may vary from a few minutes to several hours, depending on the write size, load, and resources on the nodes.
-
The output is logged in the following folder:
/var/log/commvault/Log_Files/CVFSPerfMon/
-
The output file is named using the following format:
disk_performance_<timestamp>.log
For example:
/var/log/commvault/Log_Files/CVFSPerfMon/disk_performance_20221007081957.log
-
The SUMMARY section provides brief throughput details for each disk. (The rest of the log contains the parameters of the run and debug traces.)
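To review a run quickly, you can locate the most recent log file and print only its SUMMARY section. A minimal Python sketch, assuming the section begins at the first line containing the word SUMMARY and continues to the end of the file (the exact log layout is not specified here):
# Minimal sketch: print the SUMMARY section of the most recent disk performance log.
import glob
import os

logs = glob.glob("/var/log/commvault/Log_Files/CVFSPerfMon/disk_performance_*.log")
if not logs:
    raise SystemExit("No disk_performance logs found")

latest = max(logs, key=os.path.getmtime)  # most recently modified log
with open(latest) as fh:
    lines = fh.readlines()

# Assumes the SUMMARY section starts at the first line containing "SUMMARY"
start = next((i for i, line in enumerate(lines) if "SUMMARY" in line), 0)
print(f"--- {latest} ---")
print("".join(lines[start:]), end="")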