Upgrading an HDF Cluster

Prerequisites
To perform an HDF upgrade using Ambari, your cluster must meet the following prerequisites. They allow Ambari to confirm that the cluster is in a healthy operating state and can be successfully managed from Ambari.

Registering Your Target Version
Registering your target version makes Ambari aware of the Hortonworks stack to which you want to upgrade, provides the public repository location, and specifies your public or private repository delivery preference.

Backup and Upgrade Ambari Infra
The Ambari Infra Solr instance is used to index data for Ranger and Log Search. Ambari Infra in Ambari 2.6 uses Solr 5; in Ambari 2.7 it uses Solr 7. Because Solr 7 introduces on-disk format changes and collection-specific schema changes, indexed data must be backed up from Solr 5, migrated, and restored into Solr 7. The Ambari Infra Solr components must also be upgraded. Scripts are available for both tasks and are explained below.
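As a rough illustration of the backup step, Solr 5 exposes a per-core backup command through its replication handler. The sketch below only builds that request URL; the host name, port, core name, and backup location are placeholder assumptions for this example and must be replaced with the values from your own Ambari Infra deployment.

```python
# Sketch: build a Solr 5 replication-handler backup URL for one core.
# The host, port (8886 is a common Ambari Infra Solr default), core name,
# and backup location below are placeholders, not values from this guide.
from urllib.parse import urlencode

def solr5_backup_url(host: str, core: str, location: str, name: str,
                     port: int = 8886) -> str:
    """Return the URL that asks a Solr 5 core to snapshot itself to disk."""
    params = urlencode({"command": "backup",
                        "location": location,
                        "name": name})
    return f"http://{host}:{port}/solr/{core}/replication?{params}"

url = solr5_backup_url("infra-solr-host.example.com",
                       "ranger_audits_shard1_replica1",
                       "/var/backups/infra-solr",
                       "ranger_audits")
print(url)
```

In practice you would issue this request (for example with curl) against each core of each collection before running the migration scripts mentioned above.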

Verifying Symbolic Links for SAM and Schema Registry
After you register and install your target version, but before you proceed with an Express Upgrade, verify that the symbolic links to the SAM and Schema Registry configuration directories on each host are still valid. If the links are not valid, fix them before upgrading.
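A symlink check of this kind can be sketched as below. The two paths shown are only examples of where SAM (Streamline) and Schema Registry configuration directories commonly live; substitute the actual paths used on your hosts.

```python
# Sketch: report whether each configuration path is a symlink that
# resolves to an existing directory. The paths in CONF_PATHS are
# illustrative assumptions, not taken from this guide.
import os

CONF_PATHS = ("/etc/streamline/conf", "/etc/registry/conf")

def check_conf_link(path: str) -> tuple:
    """Return (is_symlink, target_is_directory) for a config path."""
    is_link = os.path.islink(path)
    target_ok = os.path.isdir(os.path.realpath(path))
    return is_link, target_ok

for conf in CONF_PATHS:
    is_link, ok = check_conf_link(conf)
    status = "OK" if (is_link and ok) else "BROKEN"
    print(f"{conf}: {status}")
```

Running such a check on every host before the Express Upgrade makes dangling links visible early, when they are still cheap to fix.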

Upgrade HDF
Upgrading HDF installs your target software version onto each node in your cluster. Note that Express Upgrade is the only upgrade option available for HDF 3.2.

Update Ranger Passwords
Ranger password validation has been updated for HDF 3.2.0. To conform to the new password policies, the following Ranger passwords must be updated so that they contain at least 8 characters, including at least one letter and one number. These passwords cannot contain the following special characters: " ' \ `
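The stated policy can be expressed as a small check, which is handy for validating candidate passwords before typing them into Ranger. This is a sketch of the rules as described above, not code shipped with Ranger.

```python
# Sketch of the policy described in this section: a password needs at
# least 8 characters, at least one letter, at least one digit, and
# must not contain any of: " ' \ `
import re

FORBIDDEN = set('"\'\\`')

def ranger_password_ok(pw: str) -> bool:
    return (
        len(pw) >= 8
        and re.search(r"[A-Za-z]", pw) is not None
        and re.search(r"[0-9]", pw) is not None
        and not (set(pw) & FORBIDDEN)
    )

print(ranger_password_ok("admin123"))     # True: 8 chars, letter, digit
print(ranger_password_ok('pass"word1'))   # False: contains a double quote
print(ranger_password_ok("short1"))       # False: fewer than 8 characters
```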