Issues Fixed in Cloudera Manager 5.3.2

The Review Changes page sometimes hangs

The Review Changes page could hang because it did not handle the "File missing"
scenario.

High volume of TGT events against AD server with "bad token" messages

A fix has been made to how management services handle Kerberos credential caching,
reducing the number of Kerberos Ticket Granting Ticket (TGT) requests from the
cluster to a KDC. Previously, this appeared as a high volume of "Bad Token" messages
in KDC logs and caused unnecessary re-authentication by management services.

Accumulo missing kinit when running with Kerberos

Cloudera Manager is unable to run Accumulo when the hostname command does not
return the FQDN of hosts.

HiveServer2 leaks threads when using impersonation

For CDH 5.3 and higher, Cloudera Manager will configure HiveServer2 to use the HDFS cache
even when impersonation is on. Earlier CDH versions had bugs with the cache when
impersonation was in use, so it remains disabled for them.

Deploying client configurations fails if there are dead hosts present in the cluster

If there are hosts in the cluster where the Cloudera Manager agent heartbeat is not working, then deploying client configurations doesn't work. Starting with Cloudera Manager 5.3.2, such hosts are ignored while deploying client configurations. When the issues with the host are fixed, Cloudera Manager will show those hosts as having stale client configurations, at which point you can redeploy them.

Health test monitors free space available on the wrong filesystem

The Cloudera Manager Health Test to monitor free space available for the Cloudera Manager
Agent's process directory monitors space on the wrong filesystem. It should monitor the
tmpfs that the Cloudera Manager Agent creates, but instead monitors the
Cloudera Manager Agent working directory.

Starting ZooKeeper Servers from Service or Instance page fails

Stopped ZooKeeper servers could not be started from the Service or Instance page; they could be started only from the server's Role page, using the Start action for the role.

Flume Metrics page doesn't render agent metrics

Starting in Cloudera Manager 5.3, some or all Flume component data was missing from the
Flume Metrics Details page.

Broken link to help pages on Chart Builder page

The help icon (question mark) on the Chart Builder page returns a 404 error.

Running the wizard to import MapReduce configurations to YARN will now populate yarn.nodemanager.resource.cpu-vcores and yarn.nodemanager.resource.memory-mb correctly based on equivalent MapReduce configuration.

Issues Fixed in Cloudera Manager 5.3.1

Deploy client configuration no longer fails after 60 seconds

When configuring a gateway role on a host that already contains a role of the
same type—for example, an HDFS gateway on a DataNode—the deploy client
configuration command no longer fails after 60 seconds.

service cloudera-scm-server force_start now works

When using Isilon, Cloudera Manager now sets mapred_submit_replication correctly

When EMC Isilon storage is used, there is no DataNode, so you cannot set
mapred_submit_replication to a number smaller than or
equal to the number of DataNodes in the network. Cloudera Manager now
does the following when setting
mapred_submit_replication:

If using HDFS, sets to a minimum of 1 and issues a warning when greater than the
number of DataNodes

If using Isilon, sets to 1 and does not check against the number of DataNodes
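The two rules above can be sketched as follows. This is an illustrative sketch only; the function and parameter names are ours, not Cloudera Manager internals.

```python
# Sketch of the mapred_submit_replication rules described above.
def submit_replication(configured: int, num_datanodes: int, isilon: bool):
    """Return (effective_value, warn) for mapred_submit_replication."""
    if isilon:
        # Isilon: always 1, and no DataNode count to validate against.
        return 1, False
    value = max(1, configured)           # HDFS: enforce a minimum of 1
    return value, value > num_datanodes  # warn when above the DataNode count
```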

The Cloudera Manager Agent now sets the file descriptor ulimit correctly on Ubuntu

During upgrade, bootstrapping the standby NameNode step no longer fails with standby NameNode connection refused when connecting to active NameNode

Deploy krb5.conf now also deploys it on hosts with Cloudera Management Service roles

Cloudera Manager allows upgrades to unknown CDH maintenance releases

Cloudera Manager 5.3.0 supports any CDH release less than or equal to 5.3, even
if the release did not exist when Cloudera Manager 5.3.0 was released.
For packages, you cannot currently use the upgrade wizard to upgrade
to such a release. This release adds a custom CDH field for the
package case, where you can type in a version that did not exist at
the time of the Cloudera Manager release.

impalad memory limit units error in EnableLlamaRMCommand

The EnableLlamaRMCommand sets the value of the impalad memory limit to equal the
NM container memory value. But the latter is in MB, and the former is
in bytes. Previously, the command did not perform the conversion; this
has been fixed.
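The unit mismatch in a sketch: the NodeManager container memory is expressed in MB while the impalad memory limit is in bytes, so a conversion like the following is required. Names here are illustrative, not the EnableLlamaRMCommand's actual code.

```python
# MB-to-bytes conversion the fixed command now performs (illustrative).
MB = 1024 * 1024

def impalad_mem_limit_bytes(nm_container_memory_mb: int) -> int:
    # Without this conversion, a limit meant as e.g. 2048 MB would be
    # interpreted as 2048 bytes -- an unusably small value.
    return nm_container_memory_mb * MB
```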

Running MapReduce v2 jobs are now visible using the Application Master view

In the Application view, selecting Application Master for a MRv2 job
previously resulted in no action.

The Cloudera Manager Server log previously showed several foreign key constraint
exceptions that were associated with deleted services. This has been
fixed.

HiveServer2 keystore and LDAP group mapping passwords are no longer exposed in client configuration files

The HiveServer2 keystore password and LDAP group mapping passwords were emitted
into the client configuration files. This exposed the passwords in
plain text in a world-readable file. This has been fixed.

The high availability wizard now sets the HDFS dependency on ZooKeeper

Workaround: Before enabling high availability, do the following:

Create and start a ZooKeeper service if one does not exist.

Go to the HDFS service.

Click the Configuration tab.

In the Service-Wide category, set the
ZooKeeper Service property to the ZooKeeper
service.

Click Save Changes to commit the changes.

BDR no longer assumes superuser is common if clusters have the same realm

If the source and destination clusters are in the same Kerberos realm, Cloudera
Manager assumed that the superuser of the destination cluster was also the
superuser on the source cluster. However, HDFS can be configured so
that this is not the case.

Issues Fixed in Cloudera Manager 5.3.0

Fixed MapReduce Usage by User reports when using an Oracle database backend

Setting the default umask in HDFS fails in new configuration layout

Setting the default umask to 002 in the HDFS configuration section of the new
configuration layout displays an error: "Could not parse:
Default Umask : Could not parse parameter 'dfs_umaskmode'. Was
expecting an octal value with a leading 0. Input: 2",
preventing the change from being submitted.

The Enable Integrated Resource Management command for Impala (available from the
Actions pull-down menu on the Impala service page) sets the
Impala Daemon Memory Limit to an unusably small value. This can cause
Impala queries to fail.

Workaround 1: Upgrade to Cloudera Manager 5.3.

Workaround 2:

Run the Enable Integrated Resource Management wizard up to the Restart
Cluster step. Do not click Restart Now.

Click on the leave this wizard link to exit the wizard without restarting
the cluster.

Go to the Impala service page and click Configuration. Type
impala daemon memory limit into the search box.

Set the value of the Impala Daemon Memory Limit property back to the value it
had before you ran the wizard.

Restart the cluster.

Rolling restart and upgrade of Oozie fails if there is a single Oozie server

Rolling restart and upgrade of Oozie fails if there is only a single Oozie
server. Cloudera Manager will show the error message "There is already
a pending command on this role."

Workaround: If you have a single Oozie server, do a normal restart.

Allow "Started but crashed" processes to be restarted by a Start command

In Cloudera Manager 5.3, it is now possible to restart a crashed process with
the Start command and not just the Restart command.

Add dependency from Agent to Daemons package to yum

In Cloudera Manager 5.3, an explicit dependency has been added from the Agent
package to the Daemons package so that upgrading Cloudera Manager
5.2.0 or later to Cloudera Manager 5.3 causes the agent to be upgraded
as well. Previously, the Cloudera Manager installer always installed
both packages, but this is now enforced at the package dependency
level as well.

Issues Fixed in Cloudera Manager 5.2.1

“POODLE” vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack takes
advantage of a cryptographic flaw in the obsolete SSLv3 protocol,
after first forcing the use of that protocol. The only solution is to
disable SSLv3 entirely. This requires changes across a wide variety of
components of CDH and Cloudera Manager in 5.2.0 and all earlier
versions. Cloudera Manager 5.2.1 provides these changes for Cloudera
Manager 5.2.0 deployments. All Cloudera Manager 5.2.0 users should
upgrade to 5.2.1 as soon as possible. For more information, see the
Cloudera Security Bulletin.

Can use the log4j advanced configuration snippet to override the default audit logging configuration even if not using Navigator

In Cloudera Manager 5.2.0 only, it was not possible to use the log4j advanced
configuration snippet to override the default audit logging
configuration when Navigator was not being used.

HTTP queries against the Reports Manager and Event Server Thrift servers
previously caused them to crash with an out-of-memory exception.

Replication commands now use the correct JAVA_HOME if an override has been provided for it

ZooKeeper connection leaks from HBase clients in Service Monitor have been fixed

When a parcel is activated, user home directories are now created with umask 022 instead of using the "useradd" default 077
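The practical effect of the umask change can be sketched as follows (0o777 is the nominal base mode for a directory before the umask is applied):

```python
# How the umask determines the mode of a newly created home directory.
def dir_mode(umask: int) -> int:
    return 0o777 & ~umask

# umask 022 yields a world-readable/traversable directory (0755);
# the useradd default 077 yields an owner-only directory (0700).
```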

Issues Fixed in Cloudera Manager 5.2.0

Alternatives database points to client configurations of deleted service

Previously, if you created a service, deployed its client configurations, and
then deleted that service, the client configurations remained in the
alternatives database, possibly with a high priority, until cleaned up
manually. Now, for a given alternatives path (for example,
/etc/hadoop/conf), if there exist both "live" client
configurations (those that would be pushed out by deploying client
configurations for active services) and "orphaned" client
configurations (those whose service has been deleted), the orphaned
ones are removed from the alternatives database. In other words, to
trigger cleanup of client configurations associated with a deleted
service, you must create a service to replace it.

The YARN property ApplicationMaster Max Retries has no effect in CDH 5

The issue arises because yarn.resourcemanager.am.max-retries
was replaced with
yarn.resourcemanager.am.max-attempts.

Workaround:

Add the following to ResourceManager Advanced Configuration Snippet for
yarn-site.xml, replacing MAX_ATTEMPTS with
the desired maximum number of attempts:
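The snippet itself is not reproduced in these notes; a minimal version, using the replacement property named above and leaving MAX_ATTEMPTS as the placeholder to replace, would look like:

```xml
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>MAX_ATTEMPTS</value>
</property>
```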


Hive replication issue with TLS enabled

Hive replication will fail when the source Cloudera Manager instance has TLS
enabled, even though the required certificates have been added to the
target Cloudera Manager's trust store.

Workaround: Add the required Certificate Authority or self-signed
certificates to the default Java trust store, which is typically a
copy of the cacerts file named jssecacerts in the
$JAVA_HOME/jre/lib/security/ path of your installed
JDK. Use keytool to import your private CA certificates into the
jssecacerts file.

The Spark Upload Jar command fails in a secure cluster

The Spark Upload Jar command fails in a secure cluster.

Workaround: To run Spark on YARN, manually upload the Spark assembly jar
to HDFS /user/spark/share/lib. The Spark assembly jar
is located on the local filesystem, typically in
/usr/lib/spark/assembly/lib or
/opt/cloudera/parcels/CDH/lib/spark/assembly/lib.

Clients of the JobHistory server administrative interface, such as the
mapred hsadmin tool, may fail to connect to the
server when run on hosts other than the one where the JobHistory
server is running.

Workaround: Add the following to both the MapReduce Client
Advanced Configuration Snippet for mapred-site.xml and the
Cluster-wide Advanced Configuration Snippet for
core-site.xml, replacing
JOBHISTORY_SERVER_HOST with the hostname of your
JobHistory server:
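The snippet is not preserved in these notes; a minimal sketch, assuming the standard Hadoop property name mapreduce.jobhistory.admin.address and its default port 10033, would be:

```xml
<property>
  <name>mapreduce.jobhistory.admin.address</name>
  <value>JOBHISTORY_SERVER_HOST:10033</value>
</property>
```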

Issues Fixed in Cloudera Manager 5.1.3

Improved speed and heap usage when deleting hosts on cluster with long history

Speed and heap usage have been improved when deleting hosts on clusters that
have been running for a long time.

When there are multiple clusters, each cluster's topology files and validation for legal topology is limited to hosts in that cluster

When there are multiple clusters, each cluster's topology files and validation
for legal topology is limited to hosts in that cluster. Most commands
will now fail up front if the cluster's topology is invalid.

The size of the statement cache has been reduced for Oracle databases

For users of Oracle databases, the size of the statement cache has been reduced
to help with memory consumption.

Improvements to memory usage of "cluster diagnostics collection" for large clusters.

Memory usage of "cluster diagnostics collection" has been improved for large
clusters.

Issues Fixed in Cloudera Manager 5.1.2

If a NodeManager that is used as ApplicationMaster is decommissioned, YARN jobs will hang

Jobs can hang on NodeManager decommission due to a race condition when
continuous scheduling is enabled.

Could not find a healthy host with CDH 5 on it to create HiveServer2 error during upgrade

When upgrading from CDH 4 to CDH 5, if no parcel is active then the error
message "Could not find a healthy host with CDH5 on it to create
HiveServer2" displays. This can happen when transitioning from
packages to parcels, or if you explicitly deactivate the CDH 4 parcel
(which is not necessary) before upgrade.

Note: Due to a bug in Java 7u45
(http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8014618),
SSL connections between the Cloudera Manager Server and Cloudera
Manager Agents and between the Cloudera Management Service and CDH
processes break intermittently. If you do not have SSL enabled on your
cluster, there is no impact.

The YARN property ApplicationMaster Max Retries has no effect in CDH 5

The issue arises because yarn.resourcemanager.am.max-retries
was replaced with
yarn.resourcemanager.am.max-attempts.

Workaround:

Add the following to ResourceManager Advanced Configuration Snippet for
yarn-site.xml, replacing MAX_ATTEMPTS with
the desired maximum number of attempts:
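The snippet itself is not reproduced in these notes; a minimal version, using the replacement property named above and leaving MAX_ATTEMPTS as the placeholder to replace, would look like:

```xml
<property>
  <name>yarn.resourcemanager.am.max-attempts</name>
  <value>MAX_ATTEMPTS</value>
</property>
```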

(BDR) Replications can be affected by other replications or commands running at the same time

Replications can be affected by other replications or commands running at the
same time, causing replications to fail unexpectedly or even be
silently skipped sometimes. When this occurs, a StaleObjectException
is logged to the Cloudera Manager logs. This is known to occur even
with as few as four replications starting at the same time.

If you have manually installed Oracle's official JDK 7 or 8 RPM on a host (or
hosts), and you check the Install Java Unlimited Strength Encryption
Policy Files checkbox in the Add Cluster or Add Host wizard when
installing Cloudera Manager on that host, or when upgrading
Cloudera Manager to 5.1, Cloudera Manager installs JDK 6 policy files,
which prevent any Java programs from running against that JDK.
In addition, Cloudera Manager/CDH chooses that particular Java as the
default to run against, so Cloudera Manager/CDH fails to start,
throwing the following message in logs: Caused by:
java.lang.SecurityException: The jurisdiction policy files are not
signed by a trusted signer!.

Workaround: Do not select the Install Java Unlimited Strength
Encryption Policy Files checkbox during the aforementioned
wizards. Instead download and install them manually, following the
instructions on Oracle's website.

Issues Fixed in Cloudera Manager 5.1.0

Important: Cloudera Manager 5.1.0 is no longer available for download from the Cloudera
website or from archive.cloudera.com due to the JCE policy file issue
described in the Issues Fixed in Cloudera Manager 5.1.1
section of the Release Notes. The download URL at
archive.cloudera.com for Cloudera Manager 5.1.0 now
forwards to Cloudera Manager 5.1.1 for the RPM-based distributions for
Linux RHEL and SLES.

Changes to property for yarn.nodemanager.remote-app-log-dir are not included in the JobHistory Server yarn-site.xml file

When "Remote App Log Directory" is changed in YARN configuration, the property
yarn.nodemanager.remote-app-log-dir is not included
in the JobHistory Server's yarn-site.xml file.

Secure CDH 4.1 clusters can't have Hue and Impala share the same Hive

In a secure CDH 4.1 cluster, Hue and Impala cannot share the same Hive instance.
If "Bypass Hive Metastore Server" is disabled on the Hive service,
then Hue will not be able to talk to Hive. Conversely, if "Bypass Hive
Metastore Server" is enabled on the Hive service, then Impala will have a
validation error.

Severity: High

Workaround: Upgrade to CDH 4.2.

The command history has an option to select the number of commands, but doesn't always return the number you request

Workaround: None.

Hue doesn't support YARN ResourceManager High Availability

Workaround: Configure the Hue Server to point to the active
ResourceManager:

Go to the Hue service.

Click the Configuration tab.

Click Hue Server Default Group > Advanced.

In the Hue Server Advanced Configuration Snippet for
hue_safety_valve_server.ini field, add the following:
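The snippet itself is not preserved in these notes. A minimal sketch, assuming the standard hue.ini section layout for YARN clusters (verify the key names against your Hue version) and with ACTIVE_RM_HOST standing in for the active ResourceManager's hostname:

```ini
[hadoop]
  [[yarn_clusters]]
    [[[default]]]
      resourcemanager_host=ACTIVE_RM_HOST
      resourcemanager_api_url=http://ACTIVE_RM_HOST:8088
      proxy_api_url=http://ACTIVE_RM_HOST:8088
```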

Hive CLI does not work in CDH 4 when "Bypass Hive Metastore Server" is enabled

Workaround: One approach causes the "Hive Auxiliary JARs
Directory" setting to stop working, but enables basic Hive commands.
Add the following to "Gateway Client Environment Advanced
Configuration Snippet for hive-env.sh," then
re-deploy the Hive client configuration:

The downloaded client configuration for YARN includes the
topology.py script. The location of this script is
given by the net.topology.script.file.name property
in core-site.xml. But the
core-site.xml file downloaded with the client
configuration has an incorrect absolute path to
/etc/hadoop/... for topology.py.
This can cause clients that run against this configuration to fail
(including Spark clients run in yarn-client mode, as well as YARN
clients).

Workaround: Edit core-site.xml to change the value of
the net.topology.script.file.name property to the
path where the downloaded copy of topology.py is
located. This property must be set to an absolute path.
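For illustration, the corrected property would look like the following, with the value pointing at wherever your downloaded copy of topology.py actually resides (the path below is a placeholder):

```xml
<property>
  <name>net.topology.script.file.name</name>
  <value>/absolute/path/to/downloaded/topology.py</value>
</property>
```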

search_bind_authentication for Hue is not included in .ini file

When search_bind_authentication is set to
false, Cloudera Manager does not include it in
hue.ini.

Workaround: Add the following to the Hue Service Advanced
Configuration Snippet (Safety Valve) for
hue_safety_valve.ini:
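The snippet is not preserved in these notes; assuming the standard hue.ini LDAP section layout, it would be:

```ini
[desktop]
  [[ldap]]
    search_bind_authentication=false
```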

An erroneous "Failed parameter validation" warning is displayed on the HBase
configuration page on CDH 4.1 in Cloudera Manager 5.0.0

Severity: Low

Workaround: Use CDH 4.2 or higher, or ignore the warning.

Host recommissioning and decommissioning should occur independently

In large clusters, when problems appear with a host or role, administrators may
choose to decommission the host or role to fix it and then
recommission the host or role to put it back in production.
Decommissioning, especially host decommissioning, is slow, hence the
importance of parallelization, so that host recommissioning can be
initiated before decommissioning is done.

Issues Fixed in Cloudera Manager 5.0.2

Cloudera Manager Impala Query Monitoring does not work with Impala 1.3.1

Impala 1.3.1 contains changes to the runtime profile format that break the
Cloudera Manager Query Monitoring feature. This leads to exceptions in
the Cloudera Manager Service Monitor logs, and Impala queries no
longer appear in the Cloudera Manager UI or API. The issue affects
Cloudera Manager 5.0 and 4.6 - 4.8.2.

Workaround: None. The issue will be fixed in Cloudera Manager 4.8.3 and
Cloudera Manager 5.0.1. To avoid the Service Monitor exceptions, turn
off the Cloudera Manager Query Monitoring feature by going to
Impala Daemon > Monitoring and setting the Query
Monitoring Period to 0 seconds. Note that the Impala Daemons must be
restarted when changing this setting, and the setting must be restored
once the fix is deployed to turn the query monitoring feature back on.
Impala queries will then appear again in Cloudera Manager’s Impala
query monitoring feature.

Issues Fixed in Cloudera Manager 5.0.1

If installing CDH 4 packages, the Impala 1.3.0 option does not work because Impala 1.3 is not yet released for CDH 4.

If installing CDH 4 packages, the Impala 1.3.0 option listed in the install
wizard does not work because Impala 1.3.0 is not yet released for CDH
4.

Workaround: Install using parcels (where the unreleased version of Impala
does not appear), or select a different version of Impala when
installing with packages.

When updating dynamic resource pools, Cloudera Manager updates roles but may fail to update role information displayed in the UI

When updating dynamic resource pools, Cloudera Manager automatically refreshes
the affected roles, but they sometimes get marked incorrectly as
running with outdated configurations and requiring a refresh.

Upgrade of secure cluster requires installation of JCE policy files

When upgrading a secure cluster using Cloudera Manager, the upgrade initially
fails because the JDK does not have the Java Cryptography Extension (JCE)
unlimited strength policy files. This is because Cloudera Manager
installs a copy of the Java 7 JDK during the upgrade, which does not
include the unlimited strength policy files.

Workaround: Install the unlimited strength JCE policy files immediately
after completing the Cloudera Manager Upgrade Wizard and before taking
any other action in Cloudera Manager.

The Details page for MapReduce jobs displays the wrong id for YARN-based replications

The Details link for MapReduce jobs is wrong for
YARN-based replications.

Workaround: Find the job id in the link and then go to the
YARN Applications page and look for the job
there.

During an upgrade from CDH 4 to CDH 5, if the HDFS File Block Storage Locations
Timeout was previously set to a custom value, it will now be set to 10
seconds or the custom value, whichever is higher. This is required for
Impala to start in CDH 5, and any value under 10 seconds is now a
validation error. This configuration is only emitted for Impala and no
services should be adversely impacted.

Workaround: None.

HDFS NFS gateway works only on RHEL and similar systems

Because of a bug in native versions of portmap/rpcbind, the
HDFS NFS gateway does not work out of the box on SLES, Ubuntu, or
Debian systems if you install CDH from the command-line, using
packages. It does work on
supported
versions of RHEL-compatible systems on which
rpcbind-0.2.0-10.el6 or later is installed, and it
does work if you use Cloudera Manager to install CDH, or if you start
the gateway as root.

You can use the gateway by running rpcbind in insecure mode,
using the -i option, but keep in mind that
this allows anyone from a remote host to bind to the portmap.

Sensitive configuration values exposed in Cloudera Manager

Certain configuration values that are stored in Cloudera Manager are considered
sensitive, such as database passwords. These configuration values
should be inaccessible to non-administrator users, and this is
enforced in the Cloudera Manager Administration Console. However,
these configuration values are not redacted when they are read through
the API, possibly making them accessible to users who should not have
such access.

Gateway role configurations not respected when deploying client configurations

Gateway configurations set for gateway role groups other than the default one or
at the role level were not being respected.

Cloudera Security now
indicates that before enabling Kerberos authentication you should
first enable at least Level 1 encryption.

HDFS NFS gateway does not work on all Cloudera-supported platforms

The NFS gateway cannot be started on some Cloudera-supported platforms.

Workaround: None. Fixed in Cloudera Manager 5.0.1.

Replace YARN_HOME with HADOOP_YARN_HOME during upgrade

If yarn.application.classpath was set to a non-default value on
a CDH 4 cluster, and that cluster is upgraded to CDH 5, the classpath
is not updated to reflect that $YARN_HOME was
replaced with $HADOOP_YARN_HOME. This will cause YARN
jobs to fail.

Workaround: Reset yarn.application.classpath to the
default, then re-apply your classpath customizations if needed.
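For reference, a CDH 5-style classpath value uses $HADOOP_YARN_HOME rather than $YARN_HOME. The value below is close to the YARN default but should be treated as a sketch; verify it against your cluster before applying customizations:

```xml
<property>
  <name>yarn.application.classpath</name>
  <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*</value>
</property>
```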

Insufficient password hashing in Cloudera Manager

In versions of Cloudera Manager earlier than 4.8.3 and earlier than 5.0.1, user
passwords are only hashed once. Passwords should be hashed multiple
times to increase the cost of dictionary based attacks, where an
attacker tries many candidate passwords to find a match. The issue
only affects user accounts that are stored in the Cloudera Manager
database. User accounts that are managed externally (for example, with
LDAP or Active Directory) are not affected.
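The multi-round hashing idea can be sketched with PBKDF2. This is a generic illustration of the technique, not Cloudera Manager's actual scheme:

```python
# Iterated password hashing: each candidate guess in a dictionary attack
# must pay the full iteration cost, not a single hash invocation.
import hashlib

def hash_password(password: str, salt: bytes, iterations: int = 100_000) -> bytes:
    # PBKDF2 applies HMAC-SHA256 `iterations` times over password+salt.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
```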

In addition, because of this issue, Cloudera Manager 4.8.3 cannot be upgraded to
Cloudera Manager 5.0.0. Cloudera Manager 4.8.3 must be upgraded to
5.0.1 or later.

Workaround: Upgrade to Cloudera Manager 5.0.1.

Upgrade to Cloudera Manager 5.0.0 from SLES older than Service Pack 3 with PostgreSQL older than 8.4 fails

Upgrading to Cloudera Manager 5.0.0 from SUSE Linux Enterprise Server (SLES)
older than Service Pack 3 will fail if the embedded PostgreSQL
database is in use and the installed version of PostgreSQL is less
than 8.4.

Workaround: Either migrate away from the embedded PostgreSQL database
(use MySQL or Oracle) or upgrade PostgreSQL to 8.4 or greater.

MR1 to MR2 import fails on a secure cluster

When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to
find container-executor.cfg.

After an upgrade from CDH 4 to CDH 5, Oozie does not pick up the new workflow
extension schemas automatically. You must update
oozie.service.SchemaService.wf.ext.schemas manually
and add the schemas added in CDH 5:
shell-action-0.3.xsd,
sqoop-action-0.4.xsd,
distcp-action-0.2.xsd,
oozie-sla-0.1.xsd, and
oozie-sla-0.2.xsd. Note: No existing jobs
are affected by this bug; only new workflows that require the new
schemas are.

Workaround: Add the new workflow extension schemas to Oozie manually by
editing oozie.service.SchemaService.wf.ext.schemas.

Issues Fixed in Cloudera Manager 5.0.0

The Sqoop Upgrade command in Cloudera Manager may report success even when the upgrade fails

Workaround: Do one of the following:

Click the Sqoop service and then the Instances tab.

Click the Sqoop server role then the Commands tab.

Click the stdout link and scan for the Sqoop Upgrade command.

In the All Recent Commands page, select the stdout link for latest Sqoop
Upgrade command.

Verify that the upgrade did not fail.

Cannot restore a snapshot of a deleted HBase table

If you take a snapshot of an HBase table, and then delete that table in HBase,
you will not be able to restore the snapshot.

Severity: Medium

Workaround: Use the "Restore As" command to recreate the table in HBase.

When enabling HDFS Automatic Failover, you need to first stop any dependent
HBase services. The Automatic Failover configuration workflow restarts
both NameNodes, which could cause HBase to become unavailable.

Severity: Medium

New schema extensions have been introduced for Oozie in CDH 4.1

In CDH 4.1, Oozie introduced new versions for Hive, Sqoop and workflow schema.
To use them, you must add the new schema extensions to the Oozie
SchemaService Workflow Extension Schemas configuration property in
Cloudera Manager.

Severity: Low

Workaround: In Cloudera Manager, do the following:

Go to the CDH 4 Oozie service page.

Go to the Configuration tab, View and Edit.

Search for "Oozie Schema". This should show the Oozie SchemaService Workflow
Extension Schemas property.

Add the following to the Oozie SchemaService Workflow Extension Schemas
property:

shell-action-0.2.xsd
hive-action-0.3.xsd
sqoop-action-0.3.xsd

Save these changes.

YARN Resource Scheduler uses FairScheduler rather than FIFO

Cloudera Manager 5.0.0 sets the default YARN Resource Scheduler to
FairScheduler. If a cluster was previously running YARN with the FIFO
scheduler, it will be changed to FairScheduler the next time YARN
restarts. FairScheduler is only supported with CDH 4.2.1 and later;
older clusters may hit failures and need to manually change the
scheduler to FIFO or CapacityScheduler.

Severity: Medium

Workaround: For clusters running CDH 4 prior to CDH 4.2.1:

Go to the YARN service Configuration page.

Search for "scheduler.class"

Click in the Value field and select the scheduler you want to use.

Save your changes and restart YARN to update your configurations.

Resource Pools Summary is incorrect if time range is too large.

The Resource Pools Summary does not show correct information if the Time Range
selector is set to show 6 hours or more.

Severity: Medium

Workaround: None.

When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg

Workaround: Restart YARN after the import steps finish. This causes the
file to be created under the YARN configuration path, and the jobs now
work.

When upgrading to Cloudera Manager 5.0.0, the "Dynamic Resource Pools" page is not accessible

When upgrading to Cloudera Manager 5.0.0, users will not be able to directly
access the "Dynamic Resource Pools" page. Instead, they will be
presented with a dialog saying that the Fair Scheduler XML Advanced
Configuration Snippet is set.

Workaround:

Go to the YARN service.

Click the Configuration tab.

Copy the value of the Fair Scheduler XML Advanced Configuration Snippet
into a file.

Clear the value of Fair Scheduler XML Advanced Configuration Snippet.

Recreate the desired Fair Scheduler allocations in the Dynamic Resource
Pools page, using the saved file for reference.

New Cloudera Enterprise licensing is not reflected in the wizard and license page

Workaround: None.

The AWS Cloud wizard fails to install Spark due to missing roles

Workaround: Do one of the following:

Use the Installation wizard.

Open a new window, click the Spark service, click on the Instances tab,
click Add, add all required roles to Spark. Once the roles
are successfully added, click the Retry button in the
Installation wizard.

Spark on YARN requires manual configuration

Spark on YARN requires the following manual configuration to work correctly:
modify the YARN Application Classpath by adding /etc/hadoop/conf,
making it the very first entry.

Workaround: Add /etc/hadoop/conf as the first entry in
the YARN Application classpath.

Monitoring works with Solr and Sentry only after configuration updates

Cloudera Manager monitoring does not work out of the box with Solr and Sentry on
Cloudera Manager 5. The Solr service is in Bad health, and all Solr
Servers have a failing "Solr Server API Liveness" health check.

Severity: Medium

Workaround: Complete the configuration steps below:

Create "HTTP" user and group on all machines in the cluster (with
useradd 'HTTP' on RHEL-type systems).

The instructions that follow this step assume there is no existing Solr Sentry
policy file in use. In that case, first create the policy file under
/tmp and then copy it to the appropriate
location in HDFS that the Solr Servers check. If a Solr Sentry
policy file is already in use, modify it to add the following
[groups] and [roles] entries for 'HTTP'. Create a file
(for example, /tmp/cm-authz-solr-sentry-policy.ini)
with the following contents:

[groups]
HTTP = HTTP
[roles]
HTTP = collection=admin->action=query

Copy this file to the location for the "Sentry Global Policy File" for Solr. The
associated config name for this location is
sentry.solr.provider.resource, and you can see the
current value by navigating to the Sentry
sub-category in the Service Wide
configuration editing workflow in the Cloudera Manager UI. The
default value for this entry is
/user/solr/sentry/sentry-provider.ini. This refers
to a path in HDFS.

Check whether the parent directories already exist in HDFS:

sudo -u hdfs hadoop fs -ls /user

You may need to create the appropriate parent directories if they are not
present. For example:

sudo -u hdfs hadoop fs -mkdir /user/solr/sentry

After ensuring the parent directory is present, copy the file created in step 2
to this location, as follows:

sudo -u hdfs hadoop fs -put /tmp/cm-authz-solr-sentry-policy.ini /user/solr/sentry/sentry-provider.ini

Restart the Solr service. If both Kerberos and Sentry are being enabled for
Solr, the Cloudera Management Service roles also need to be restarted.
The Solr Server liveness health checks should clear once the Service
Monitor (SMON) has had a chance to contact the servers and
retrieve metrics.
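Before copying the policy file into HDFS, it can save a restart cycle to sanity-check its structure. The check below is a sketch using Python's standard INI parser, not anything Cloudera Manager provides; it only verifies that the [groups] and [roles] entries for 'HTTP' are present in the expected shape:

```python
import configparser

def check_solr_sentry_policy(text):
    """Return True if the policy text maps the HTTP group to an HTTP role
    that grants an action on the admin collection."""
    policy = configparser.ConfigParser()
    policy.read_string(text)
    return (
        policy.has_option("groups", "HTTP")
        and policy.has_option("roles", "HTTP")
        # Ignore whitespace so minor formatting differences still pass.
        and "collection=admin" in policy.get("roles", "HTTP").replace(" ", "")
    )

policy_text = """\
[groups]
HTTP = HTTP
[roles]
HTTP = collection=admin->action=query
"""
print(check_solr_sentry_policy(policy_text))
```

A failing check here usually means a section header typo or a missing entry, which would otherwise surface only as failing liveness checks after the restart.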

Out-of-memory errors may occur when using the Reports Manager

Out-of-memory errors may occur when using the Cloudera Manager Reports Manager.

Workaround: Set the value of the "Java Heap Size of Reports Manager"
property to at least the size of the HDFS filesystem image
(fsimage) and restart the Reports Manager.
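To pick a concrete number, you can size the heap from the fsimage file itself. The sketch below assumes you can locate the current fsimage under the NameNode's data directory (the path in the comment is a hypothetical example) and applies a 1.5x headroom factor, which is a rule of thumb here, not an official formula:

```python
import os

def suggested_heap_mb(fsimage_path, headroom=1.5):
    """Suggest a Reports Manager heap at least as large as the fsimage,
    rounded up to whole megabytes. The 1.5x headroom is an assumption."""
    size_bytes = os.path.getsize(fsimage_path)
    return -(-int(size_bytes * headroom) // (1024 * 1024))  # ceiling division

# Example (hypothetical path):
# suggested_heap_mb("/dfs/nn/current/fsimage_0000000000000012345")
```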

Applying license key using Internet Explorer 9 and Safari fails

Cloudera Manager is designed to work with IE 9 and above and with Safari.
However, the file upload widget used to upload a license does not
currently work in IE 9 or Safari, so an enterprise license cannot
be installed from those browsers.

Workaround: Use another supported browser.

Issues Fixed in Cloudera Manager 5.0.0 Beta 2

The Sqoop Upgrade command in Cloudera Manager may report success even when the upgrade fails

Workaround: Verify that the upgrade did not fail by checking the command's
stdout in one of the following ways:

Click the Sqoop service, then the Instances tab. Click the Sqoop server
role, then the Commands tab, and open the stdout link for the Sqoop
Upgrade command.

Alternatively, on the All Recent Commands page, open the stdout link for
the latest Sqoop Upgrade command.
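When reading the stdout of the Sqoop Upgrade command, the thing to look for is any failure marker that the command's overall status did not surface. A small helper like the following can scan a saved copy of the log; the marker strings are illustrative assumptions and should be adjusted to what your Sqoop version actually logs:

```python
def upgrade_log_indicates_failure(stdout_text):
    """Scan the Sqoop Upgrade command's stdout for failure markers.
    The marker list is an assumption, not an exhaustive set."""
    markers = ("exception", "error", "upgrade failed")
    text = stdout_text.lower()
    return any(marker in text for marker in markers)

print(upgrade_log_indicates_failure("Sqoop upgrade completed successfully"))
print(upgrade_log_indicates_failure("ERROR: schema migration aborted"))
```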

The HDFS Canary Test is disabled for secured CDH 5 services.

Due to a bug in Hadoop's handling of multiple RPC clients with distinct
configurations within a single process when Kerberos security is enabled,
Cloudera Manager disables the HDFS canary test when security is
enabled, to prevent interference with Cloudera Manager's
MapReduce monitoring functionality.

Severity: Medium

Workaround: None

Not all monitoring configurations are migrated from MR1 to MR2.

When MapReduce v1 configurations are imported for use by YARN (MR2), not all of
the monitoring configuration values are currently migrated. Users may
need to reconfigure custom values for properties such as thresholds.

Severity: Medium

Workaround: Manually reconfigure any missing property values.

"Access Denied" may appear for some features after adding a license or starting a trial.

After starting a 60-day trial or installing a license for Enterprise Edition,
you may see an "access denied" message when attempting to access
certain Enterprise Edition-only features such as the Reports Manager.
You need to log out of the Admin Console and log back in to access
these features.

Severity: Low

Workaround: Log out of the Admin Console and log in again.

Hue must set impersonation on when using Impala with impersonation.

When using Impala with impersonation, the impersonation_enabled
flag must be present and configured in the hue.ini
file. If impersonation is enabled in Impala (that is, Impala is using
Sentry), this flag must be set to true. If Impala is not using
impersonation, it should be set to false (the default).

Workaround: Set an advanced configuration snippet value for
hue.ini as follows:

Go to the Hue Service Configuration Advanced Configuration Snippet for
hue_safety_valve.ini under the Hue service Configuration
settings, Service-Wide > Advanced category.

Add the following, then uncomment the setting and set the value True or False as
appropriate: