Prepare the Distributed Storage Servers

Prepare the Distributed Storage server nodes by installing the OS, assigning hostnames, and then enabling jumbo frames (if supported).

Procedure

On each Distributed Storage server node:

  1. Create a mirrored volume (RAID-1) on the two SSDs intended for the OS on each node. All HDDs must remain in pass-through or JBOD mode.

  2. Mount the downloaded ISO as virtual media and reboot the server.

  3. Once the server boots from the mounted ISO, select the following options:

    • Select the RAID-1 volume as the destination for installing the OS.

    • Perform a fresh install (do not preserve old partitions or data).

    Provide the root user password (hedvig by default) twice while the OS packages are installed. This step can take up to 30 minutes, after which the server must be rebooted.

  4. Once the OS is installed, set the hostname, stop the firewalld and NetworkManager services, and make the changes persistent using the following commands (an optional verification check follows the commands):

    hostnamectl set-hostname <private_hostname> --static
    systemctl stop firewalld
    systemctl stop NetworkManager
    chkconfig firewalld off
    chkconfig NetworkManager off
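
    As an optional check on a systemd-based OS such as the one used here, you can confirm the hostname and service state as follows:

    # Verify that the static hostname was applied
    hostnamectl status
    # Verify that both services are stopped and disabled at boot
    systemctl is-active firewalld NetworkManager
    systemctl is-enabled firewalld NetworkManager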
  5. Edit the /etc/sysconfig/selinux file and disable SELinux as follows:

    SELINUX=disabled
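
    This change takes effect only after the reboot in step 8. As an optional check after that reboot, the following command should report Disabled:

    getenforce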
  6. Edit /etc/hosts to include entries for each storage server in the cluster.

    The configuration shown in the following example reflects three storage servers and two proxies with their respective private IPs and hostnames.

    Note

    Assign hostnames to each storage server based on the private storage network. This can be managed through the /etc/hosts files on all storage servers, including the Deployment server. The client-facing data-protection IP on each Distributed Storage proxy may be assigned through DNS.

    (Figure: Setting Up the Hedvig Cluster Nodes (3): example /etc/hosts entries)
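
    A minimal sketch of such a file, using hypothetical private IPs and hostnames (replace them with the addresses and naming scheme of your own private storage network):

    # Private storage network - storage servers (hypothetical values)
    10.10.10.11   hsc-node1
    10.10.10.12   hsc-node2
    10.10.10.13   hsc-node3
    # Private storage network - storage proxies (hypothetical values)
    10.10.10.21   hsc-proxy1
    10.10.10.22   hsc-proxy2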

  7. Navigate to the /etc/sysconfig/network-scripts directory and configure bond1 for private storage traffic as shown in the following example:

    (Figure: Setting Up the Hedvig Cluster Nodes (2): example bond1 configuration)
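
    A minimal sketch of such a configuration, assuming two hypothetical slave interfaces (ens192 and ens224) and hypothetical addressing; the bonding mode, interface names, and IPs must match your environment:

    # /etc/sysconfig/network-scripts/ifcfg-bond1 (hypothetical values)
    DEVICE=bond1
    TYPE=Bond
    BONDING_MASTER=yes
    BONDING_OPTS="mode=active-backup miimon=100"
    BOOTPROTO=none
    ONBOOT=yes
    IPADDR=10.10.10.11
    NETMASK=255.255.255.0

    # /etc/sysconfig/network-scripts/ifcfg-ens192 (repeat for ens224)
    DEVICE=ens192
    TYPE=Ethernet
    BOOTPROTO=none
    ONBOOT=yes
    MASTER=bond1
    SLAVE=yes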

  8. Reboot the server for the above changes to take effect.

  9. If your network supports jumbo frames end-to-end, it is recommended that you enable jumbo frames on the host and on ESXi, following these guidelines.

    For more information about enabling jumbo frames on ESXi, refer to VMware documentation for jumbo frames.

    1. Log on to each storage cluster node or storage proxy as the root user.

      ssh root@<storage cluster node or storage proxy>
      password: hedvig
    2. Add the following entry to the /etc/sysconfig/network-scripts/ifcfg-ens160 file:

      MTU=9000

      Important

      Do not enter a different value for MTU (maximum transmission unit).

    3. Save the file and exit.

    4. Restart the network, then optionally confirm the new MTU as shown below.

      systemctl restart network
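
      As an optional check (assuming the interface name ens160 from the earlier step), the interface should now report mtu 9000:

      ip link show ens160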
    5. Verify that the jumbo frames are enabled end-to-end by running the following command on one of the hosts:

      ping -M do -s <MTU - 28> <remote host>

      The 28 bytes subtracted from the MTU account for the 20-byte IP header and the 8-byte ICMP header.

      For example:

      host1# ping -M do -s 8972 host2

      If jumbo frames are not enabled along the entire path, this ping fails (typically with a "message too long" error or no replies) rather than returning normal responses.
  10. If RHEL is used as the OS, enable the RHEL subscription to download software and updates using the following command (an optional status check follows):

    subscription-manager register --username=<USER_NAME> --password=<PASSWORD> --auto-attach
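
    As an optional check, the following command should report an overall status of Current once the subscription is attached:

    subscription-manager status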
