This topic describes the high-level steps that first-time users must follow to set up the archiving feature in the Command Center.
Before You Begin
- If your production environment is configured with antivirus software, see Antivirus Exclusions for Windows and Recommended Antivirus Exclusions for UNIX and Mac.
- Configure storage. For more information about configuring storage, see Storage.
 
Procedure
- Verify that your environment meets the system requirements.
- Verify that your share access nodes meet the following hardware requirements.
  A share access node is a computer that has access to the network shares and is used for live scan and archive operations.
  - CPU/RAM: 2 CPU cores and 16 GB RAM (or 2 vCPUs and 16 GB RAM)
  - OS or software disk: 200 GB usable disk, minimum 2 spindles at 15K RPM
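If you prefer to script this pre-check rather than verify it by hand, the following minimal sketch compares a node against the minimums listed above. It assumes a UNIX-like node and standard-library Python only; the disk path and thresholds are illustrative assumptions, not part of the product.

```python
# Sketch only: quick pre-check that a prospective share access node meets the
# documented minimums (2 CPU cores, 16 GB RAM, 200 GB usable disk).
import os
import shutil

MIN_CORES = 2
MIN_RAM_GB = 16
MIN_DISK_GB = 200

def check_share_access_node(disk_path: str = "/") -> None:
    cores = os.cpu_count() or 0

    # Physical RAM via POSIX sysconf; on Windows you would use a different API
    # (for example, psutil.virtual_memory()).
    ram_gb = 0.0
    if hasattr(os, "sysconf"):
        try:
            ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
        except (ValueError, OSError):
            pass

    disk_gb = shutil.disk_usage(disk_path).total / 1024**3

    print(f"CPU cores: {cores} (need >= {MIN_CORES})")
    print(f"RAM:       {ram_gb:.1f} GB (need >= {MIN_RAM_GB})")
    print(f"Disk:      {disk_gb:.1f} GB on {disk_path} (need >= {MIN_DISK_GB})")

if __name__ == "__main__":
    check_share_access_node()
```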
- Install the Commvault software packages on your share access node.
  - For an NFS share, mount the NFS share on a computer where the UNIX file system agent package is installed (a minimal mount check is sketched after this list).
  - For IBM Spectrum Scale (GPFS) and Lustre, install the File System package on your data access node.
  - For Hadoop (HDFS), install the Hadoop package on your data access node.
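As referenced in the NFS item above, the following sketch verifies that an NFS share is mounted and readable before the agent is pointed at it. This is a generic sanity check, not a Commvault tool, and the mount point /mnt/archive_share is a hypothetical example.

```python
# Sketch only: verify that an NFS share is mounted and readable on the computer
# where the UNIX file system agent package is installed.
import os
import sys

def verify_nfs_mount(mount_point: str) -> bool:
    if not os.path.ismount(mount_point):
        print(f"{mount_point} is not a mount point; mount the NFS share first.")
        return False
    try:
        os.listdir(mount_point)           # confirm the export is readable
    except OSError as exc:
        print(f"Cannot read {mount_point}: {exc}")
        return False
    print(f"{mount_point} is mounted and readable.")
    return True

if __name__ == "__main__":
    ok = verify_nfs_mount(sys.argv[1] if len(sys.argv) > 1 else "/mnt/archive_share")
    sys.exit(0 if ok else 1)
```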
 
- Add a new file server to the Command Center.
  You can also configure an existing file server in the Command Center.
- Create an archive set.
  Archive sets are logical groupings of subclients that contain data to be archived. You can create user-defined archive sets to manage specific data.
- Optional: Create a subclient for archiving.
  Archiving subclients are logical containers of data to be archived. When you create an archive set, a default subclient is automatically created. The default subclient contains the data, plan, filters, and exceptions that you added when you created the archive set.
  You can create additional user-defined subclients to manage specific data.
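To make the relationship between archive sets and subclients concrete, here is an illustrative Python model of the hierarchy described above. The class and field names are assumptions for illustration only; they do not reflect Commvault's actual API or schema.

```python
# Sketch only: an archive set groups subclients, and each subclient carries its
# own content paths, plan, filters, and exceptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Subclient:
    name: str
    content_paths: List[str]
    plan: str
    filters: List[str] = field(default_factory=list)      # patterns to exclude
    exceptions: List[str] = field(default_factory=list)   # exclusions to the filters

@dataclass
class ArchiveSet:
    name: str
    subclients: List[Subclient] = field(default_factory=list)

# The default subclient created with a new archive set, plus one user-defined
# subclient that targets specific data (names and paths are hypothetical).
archive_set = ArchiveSet(
    name="FinanceShares",
    subclients=[
        Subclient("default", ["/shares/finance"], plan="Archive-Plan"),
        Subclient("scanned-invoices", ["/shares/finance/invoices"],
                  plan="Archive-Plan", filters=["*.tmp"]),
    ],
)
```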
- Run an archive operation.
  When you run an archive operation, the files that meet the archiving rules are stubbed.
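If you want to see which files were stubbed after the job completes, one common signal on Windows shares is the OFFLINE file attribute. Whether your stubs carry this attribute depends on the agent and the file system, so treat the following sketch as a heuristic rather than an authoritative check; the UNC path is hypothetical.

```python
# Sketch only: list files under a share that carry the Windows OFFLINE attribute,
# which archived stubs often have. st_file_attributes exists on Windows only.
import os
import stat

def list_probable_stubs(root: str):
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue
            attrs = getattr(st, "st_file_attributes", 0)   # 0 on non-Windows systems
            if attrs & stat.FILE_ATTRIBUTE_OFFLINE:
                yield path

if __name__ == "__main__":
    for stub in list_probable_stubs(r"\\fileserver\archive_share"):   # hypothetical path
        print(stub)
```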
- Recall archived data as needed.
  You can recall your archived data from any computer other than a computer used as a data access node.
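A recall is typically driven by simply accessing the stub from a client machine. The sketch below reads the first byte of a stub to request its data back; the path is hypothetical, and the exact recall behavior depends on how your environment is configured.

```python
# Sketch only: touching a stub's data from a client machine (not a data access
# node) is usually enough to trigger a recall of the archived file.
def trigger_recall(stub_path: str) -> None:
    with open(stub_path, "rb") as handle:
        handle.read(1)          # reading the data drives the recall
    print(f"Recall requested for {stub_path}")

if __name__ == "__main__":
    trigger_recall(r"\\fileserver\archive_share\reports\q1.pdf")   # hypothetical path
```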