Hadoop Distributed File System (HDFS) supports tiered storage. You can define and apply different storage policies to store hot, warm, and cold Hadoop data on disk or archive storage, and then use the HDFS mover utility to migrate data across storage tiers according to the assigned policies. You can use the NFS ObjectStore for archive storage, so that cold Hadoop data is retained long term and served back seamlessly when required. For more information about NFS ObjectStore, see NFS ObjectStore.
- Create an NFS ObjectStore share.
For instructions, see Create an NFS ObjectStore Share.
- Mount the NFS ObjectStore share on the Hadoop DataNodes that you want to use for archive storage.
For more information on Archival Storage, refer to https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html.
- Stop the Hadoop DataNode services.
- Assign the NFS ObjectStore as archive storage for the Hadoop DataNodes.
- Start the Hadoop DataNode services.
- Set the storage policy to COLD on the paths that you want to migrate to the NFS ObjectStore.
- Use the HDFS mover utility to apply the storage policy.
The mover scans the specified paths and migrates any blocks that do not satisfy the COLD policy to the NFS ObjectStore archive storage.
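The procedure above can be sketched with standard HDFS commands. This is a minimal example, not a definitive runbook: the NFS server name (`nfs-server.example.com`), export (`/objstore`), mount point (`/mnt/objstore`), local data directory, and HDFS path (`/data/archive`) are all hypothetical placeholders, and the exact daemon start/stop commands vary by Hadoop version and distribution.

```shell
# Mount the NFS ObjectStore share on each DataNode used for archive storage.
# (Server, export, and mount point below are illustrative placeholders.)
sudo mount -t nfs nfs-server.example.com:/objstore /mnt/objstore

# Stop the DataNode service (command varies by Hadoop version/distribution).
$HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode

# Assign the mount as ARCHIVE storage by tagging it in dfs.datanode.data.dir
# in hdfs-site.xml, alongside the existing DISK directories, for example:
#   <property>
#     <name>dfs.datanode.data.dir</name>
#     <value>[DISK]file:///hadoop/hdfs/data,[ARCHIVE]file:///mnt/objstore</value>
#   </property>

# Start the DataNode service again.
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode

# Set the COLD storage policy on the paths to migrate, and verify it.
hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
hdfs storagepolicies -getStoragePolicy -path /data/archive

# Run the mover; it migrates blocks that violate their storage policy.
hdfs mover -p /data/archive
```

The `[ARCHIVE]` tag marks the directory's storage type, so the COLD policy (which places all replicas on ARCHIVE storage) directs the mover to relocate blocks onto the NFS ObjectStore mount.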