Client installation tips, HDP Stack and CDH Parcels

    Fusion is composed of two components: the server/daemon bits, which usually sit off to the side on their own system, and the client libs, which are necessary to tell your users/apps what "fusion://" is. The fusionFS type is defined automatically in your core-site.xml and deployed via Ambari or CDM, but the client libs still need to be installed on each node that needs to understand fusion:// (i.e. every node that has users/apps running).

    When running through the Fusion installation you will be presented with an HTTP link for either the Hortonworks stack installation of the client libs or the Cloudera parcel installation, or you can install manually via RPMs. Here are some tips on how I install these.

    Hortonworks stack -
    1. From the install web page, copy the stack link to your clipboard. This will be an HTTP address pointing directly to a Fusion REST endpoint that serves a tar.gz file.
    2. SSH into your Ambari server and cd /var/lib/ambari-server/resources/stacks/HDP/2.2/services
    3. wget "<stack link copied in step 1>"
    4. tar xzvf fusion-hdp-2.2.0-2.4_SNAPSHOT.stack.tar.gz # filename may differ; extract whichever tar.gz you just downloaded
    5. ambari-server restart
    6. Log into Ambari, choose Add Service from the Actions button on the dashboard, and add the WANdisco Fusion client service
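
Put together, steps 2-5 can be sketched as a small shell function for the Ambari host. The function name and the DRY_RUN toggle are my own convention (set DRY_RUN=1 to preview the commands); the URL argument is whatever stack link you copied from the install page:

```shell
# Sketch of steps 2-5, run on the Ambari server. Pass the stack link
# copied from the Fusion install page. DRY_RUN=1 prints the commands
# instead of executing them.
install_fusion_stack() {
  stack_url="$1"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
  run cd /var/lib/ambari-server/resources/stacks/HDP/2.2/services
  run wget "$stack_url"
  run tar xzvf fusion-hdp-*.stack.tar.gz   # actual filename varies by version
  run ambari-server restart
}
```

After the restart, finish with step 6 in the Ambari UI.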

    Cloudera Parcels -
    1. Copy the first link (the parcel) from the install web page where it's displayed
    2. SSH into your CDM server and cd /opt/cloudera/parcel-repo
    3. wget "<parcel link copied in step 1>"
    4. Copy the second link, which is the parcel's .sha file
    5. wget "<.sha link copied in step 4>"
    6. chown cloudera-scm:cloudera-scm FUSION*
    7. Wait a good minute, then log into CDM and check for new parcels under Parcels (make sure to click the link in the upper left to view all parcels, not just web ones)
    8. Deploy and Activate the Fusion client parcel
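
The parcel steps (2-6) follow the same shape. As with the stack sketch, the function and DRY_RUN switch are just my convention; the two arguments are the parcel and .sha links from the install page:

```shell
# Sketch of steps 2-6, run on the CDM server. Pass the parcel link and
# the matching .sha link from the install page. DRY_RUN=1 prints the
# commands instead of executing them.
install_fusion_parcel() {
  parcel_url="$1"
  sha_url="$2"
  run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$*"; else "$@"; fi; }
  run cd /opt/cloudera/parcel-repo
  run wget "$parcel_url"
  run wget "$sha_url"
  run chown cloudera-scm:cloudera-scm FUSION*   # parcel filenames start with FUSION
}
```

Then pick the parcel up in the CDM UI as in steps 7 and 8.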

    Manual RPM -
    1. Copy the RPM link from the installation web page
    2. SSH into any of your hadoop servers
    3. Run a for loop to first scp the client RPM to the necessary servers, then a second one to install it. Something like this (note: fusion-client.rpm will have a different name, with a version number in it):
      1. for i in 1 2 3 4 5; do scp fusion-client.rpm hadoop-server$i:~; done
      2. for i in 1 2 3 4 5; do ssh hadoop-server$i "yum -y install ./fusion-client.rpm"; done

    Make sure your core-site.xml has the Fusion server refs as well; otherwise it may need to be copied manually. We currently use the hadoop managers to handle this, but if you are doing a fully manual install you will need to copy the file by hand. I usually do that with a similar for loop.
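
That copy can look like the RPM loop above, wrapped in a function. The hostnames are examples, the conf path is the usual Hadoop one (adjust for your distro), and DRY_RUN is again my own toggle for previewing the commands:

```shell
# Push a core-site.xml containing the Fusion server refs to each node.
# Hostnames and the conf path are examples; adjust for your cluster.
# DRY_RUN=1 prints the scp commands instead of running them.
push_core_site() {
  conf="$1"; shift
  for host in "$@"; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo scp "$conf" "$host:$conf"
    else
      scp "$conf" "$host:$conf"
    fi
  done
}

# e.g. push_core_site /etc/hadoop/conf/core-site.xml hadoop-server1 hadoop-server2
```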