
Hadoop Tiering Using the NFS ObjectStore

Hadoop Distributed File System (HDFS) supports tiered storage. You can define and apply storage policies to store hot, warm, and cold Hadoop data on disk or archive storage, and then use the data mover utility to move data across storage tiers according to those policies. You can use the NFS ObjectStore as archive storage, so that cold Hadoop data is retained for the long term and served back seamlessly when required. For more information about the NFS ObjectStore, see NFS ObjectStore.

Procedure

  1. Create an NFS ObjectStore share.

    For instructions, see Create an NFS ObjectStore Share.

  2. Mount the NFS ObjectStore share on the Hadoop DataNodes that you want to use for archive storage.

    For more information about archival storage in HDFS, see https://hadoop.apache.org/docs/r2.8.0/hadoop-project-dist/hadoop-hdfs/ArchivalStorage.html.
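
    For example, assuming the share is exported as objectstore.example.com:/hadoop_archive (a hypothetical server name and export path), the mount on each DataNode might look like this:

      mkdir -p /mnt/nfs-objectstore
      mount -t nfs objectstore.example.com:/hadoop_archive /mnt/nfs-objectstore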

  3. Stop Hadoop DataNode services.
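
    For example, on each DataNode in a Hadoop 2.x cluster, you can stop the service with the bundled daemon script (your distribution may use a different service manager):

      $HADOOP_HOME/sbin/hadoop-daemon.sh stop datanode
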
  4. Assign the NFS ObjectStore as archive storage for the Hadoop DataNodes.
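
    To do this, tag the mounted share with the ARCHIVE storage type in the dfs.datanode.data.dir property in hdfs-site.xml on each DataNode. A sketch, assuming the example mount point above and an existing disk directory at /grid/hadoop/dn (both hypothetical paths):

      <property>
        <name>dfs.datanode.data.dir</name>
        <value>[DISK]file:///grid/hadoop/dn,[ARCHIVE]file:///mnt/nfs-objectstore/dn</value>
      </property>
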
  5. Start Hadoop DataNode services.
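
    For example, using the same daemon script:

      $HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
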
  6. Set the storage policy to COLD on the HDFS paths that you want to migrate to the NFS ObjectStore.
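
    For example, assuming /data/cold is the HDFS path to migrate (a hypothetical path), set and then verify the policy:

      hdfs storagepolicies -setStoragePolicy -path /data/cold -policy COLD
      hdfs storagepolicies -getStoragePolicy -path /data/cold
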
  7. Use the data mover utility to apply the storage policy and move the specified blocks.
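
    For example, against the same hypothetical path:

      hdfs mover -p /data/cold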

Result

The blocks specified by the storage policy are migrated to the NFS ObjectStore.
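
You can confirm the placement by listing the block locations for the migrated path (the same hypothetical /data/cold path is assumed) and checking that the replicas report the ARCHIVE storage type:

  hdfs fsck /data/cold -files -blocks -locations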