Cloudera Enterprise

This section lists security bulletins for vulnerabilities that potentially affect the entire Cloudera Enterprise product suite. Bulletins specific to a single component, such as
Cloudera Manager, Impala, or Spark, can be found in the sections that follow.

Cloudera Manager transmits certain diagnostic data (or "bundles") to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical
issues for customers.

Cloudera support discovered that potentially sensitive data may be included in diagnostic bundles and transmitted to Cloudera. Such sensitive data is not used by Cloudera for any
purpose.

Cloudera has modified Cloudera Manager so that known sensitive data is redacted from the bundles before transmission to Cloudera. Work is in progress in Cloudera CDH components to remove
logging and output of known potentially sensitive properties and configurations.

Cloudera strives to establish and follow best practices for the protection of customer information. Cloudera continually reviews and improves security practices, infrastructure, and
data-handling policies.

Apache Commons Collections Deserialization Vulnerability

Cloudera has learned of a potential security vulnerability in a third-party library called the Apache Commons Collections. This library is used in products distributed and supported by Cloudera (“Cloudera Products”), including core Apache Hadoop. The Apache Commons Collections
library is also in widespread use beyond the Hadoop ecosystem. At this time, no specific attack vector for this vulnerability has been identified as present in Cloudera Products.

In an abundance of caution, we are currently in the process of incorporating a version of the Apache Commons Collections library with a fix into the Cloudera Products. In most cases,
this will require coordination with the projects in the Apache community. One example of this is tracked by HADOOP-12577.

The Apache Commons Collections potential security vulnerability is titled “Arbitrary remote code execution with InvokerTransformer” and is tracked by COLLECTIONS-580. MITRE has not issued a CVE for this issue, but the related CVE-2015-4852 has been filed for the vulnerability, and CERT has issued Vulnerability Note VU#576313.
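
Until a fixed library is in place, applications that deserialize untrusted input can reduce exposure with a deserialization filter. The following is a minimal sketch, not the Cloudera fix, assuming Java 9 or later (where java.io.ObjectInputFilter is standard); the class and method names are illustrative:

import java.io.InputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;

public class SafeDeserialization {
    // Rejects the gadget-bearing Commons Collections packages before any
    // object is instantiated, and caps object-graph depth as a backstop.
    public static Object readObjectSafely(InputStream in) throws Exception {
        ObjectInputStream ois = new ObjectInputStream(in);
        ois.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                "!org.apache.commons.collections.**;"
                + "!org.apache.commons.collections4.**;"
                + "maxdepth=20"));
        return ois.readObject();
    }
}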

Impact: This potential vulnerability might enable an attacker to run arbitrary code from a remote machine without requiring authentication.

Immediate action required: Upgrade to the latest suitable version containing this fix when it is available.

Addressed in release/refresh/patch: Beginning with CDH 5.5.1, 5.4.9, and 5.3.9, Cloudera Manager 5.5.1, 5.4.9, and 5.3.9, Cloudera Navigator 2.4.1, 2.3.9
and 2.2.9, and Director 1.5.2, the new Apache Commons Collections library version is included in all Cloudera products.

Heartbleed Vulnerability in OpenSSL

The Heartbleed vulnerability is a serious vulnerability in OpenSSL as described at http://heartbleed.com/ (OpenSSL TLS heartbeat read overrun, CVE-2014-0160). Cloudera products do not ship with OpenSSL, but some components
use this library. Customers using OpenSSL with Cloudera products need to update their OpenSSL library to one that doesn’t contain the vulnerability.

Products affected:

All versions of OpenSSL 1.0.1 prior to 1.0.1g

Components affected:

Hadoop Pipes uses OpenSSL.

Impala's RPC implementation, if SSL encryption is enabled (by setting --ssl_server_certificate). This applies to any of the three Impala daemon processes: impalad, catalogd, and statestored.

Impala's debug web server pages, if HTTPS is enabled (by setting --webserver_certificate_file). This applies to any of the three Impala daemon processes: impalad, catalogd, and statestored.

Ensure that the OpenSSL library provided by your Linux distribution does not have the vulnerability, and update it if necessary.

“POODLE” Vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack, announced by Bodo Möller, Thai Duong, and Krzysztof Kotowicz at Google, forces the use of the obsolete SSLv3
protocol and then exploits a cryptographic flaw in SSLv3. The result is that an attacker on the same network as the victim can potentially decrypt parts of an otherwise encrypted channel.

SSLv3 has been obsolete, and known to have vulnerabilities, for many years, but its retirement has been slow because of backward-compatibility concerns. It has since been
superseded by TLSv1, TLSv1.1, and TLSv1.2. Under normal circumstances, the strongest protocol version that both sides support is negotiated at the start of the connection. However, an
attacker can introduce errors into this negotiation and force a fallback to the weakest protocol version: SSLv3.

The only solution to the POODLE attack is to completely disable SSLv3. This requires changes across a wide variety of components of CDH, and in Cloudera Manager.
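
For custom Java clients of CDH services, the same remedy can be applied directly by restricting the protocols a socket will negotiate. A minimal sketch, with host and port as placeholders:

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class NoSslv3Client {
    public static SSLSocket connect(String host, int port) throws Exception {
        SSLSocket socket = (SSLSocket) SSLSocketFactory.getDefault()
                .createSocket(host, port);
        // Offer only TLS; an attacker can no longer force a fallback to SSLv3.
        socket.setEnabledProtocols(new String[] {"TLSv1", "TLSv1.1", "TLSv1.2"});
        socket.startHandshake();
        return socket;
    }
}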

No security exposure due to CVE-2017-3162 for Cloudera Hadoop clusters

Information only. No action required. Out of an abundance of caution, CVE-2017-3162 was filed by the Apache Hadoop community to document that the servlet HDFS clients
use to browse the HDFS namespace (in the CDH 5.x code base) accepts the NameNode as a query parameter without validating it.

This benign exposure was discovered independently by Cloudera (as well as by other members of the Hadoop community) during routine static source code analysis. It is considered
benign because there are no known attack vectors for this vulnerability.

Addressed in release/refresh/patch: The vulnerability described by CVE-2017-3162 was previously caught and patched in the CDH code base back to 5.2.x. Cloudera Hadoop
clusters running those releases are therefore safe from this vulnerability.

Apache YARN NodeManager Password Exposure

The YARN NodeManager in Apache Hadoop may leak the password for its credential store. This credential store is created by Cloudera Manager and contains sensitive information used by the
NodeManager. Any container launched by that NodeManager can gain access to the password that protects the credential store.

Examples of sensitive information inside the credential store include a keystore password and an LDAP bind user password.

The credential store is also protected by Unix file permissions. When managed by Cloudera Manager, the credential store is readable only by the yarn user and the hadoop group. As a
result, the scope of this leak is mitigated, making this a Low severity issue.
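
For context, Hadoop services and client code are expected to resolve such secrets through the CredentialProvider API rather than by reading key files directly. A sketch, assuming the Configuration.getPassword() call available in Hadoop 2.6 and later and the standard ssl-server.xml property name:

import org.apache.hadoop.conf.Configuration;

public class CredentialLookup {
    public static char[] keystorePassword() throws Exception {
        Configuration conf = new Configuration();
        // Resolves the value from any configured CredentialProvider
        // (hadoop.security.credential.provider.path) before falling back
        // to the clear-text configuration property.
        return conf.getPassword("ssl.server.keystore.password");
    }
}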

Impact: A remote user who can authenticate with the HDFS NameNode can possibly run arbitrary commands with the same privileges as the HDFS service.

This vulnerability is critical because it is easy to exploit and compromises system-wide security. As a result, a remote user can potentially run any arbitrary command as the hdfs user.
This bypasses all Hadoop security. There is no mitigation for this vulnerability.

Encrypted MapReduce spill data on the local file system is vulnerable to unauthorized disclosure

MapReduce spills intermediate data to the local disk. The encryption key used to encrypt this spill data is stored in clear text on the local filesystem along with the encrypted data
itself. A malicious user with access to the file with these credentials can load the tokens from the file, read the key, and then decrypt the spill data.

Immediate action required: Upgrade to a release containing the fix if you use spill data encryption. This security fix causes MapReduce ApplicationMaster
failures to not be tolerated when spill data is encrypted; post-upgrade, individual MapReduce jobs might fail if the ApplicationMaster goes down.

When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes
the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).

Global read permissions must be removed on the NodeManager’s security-related files.
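
The remediation amounts to tightening the mode on those files so that only the service user can read them, the equivalent of chmod 600. An illustrative sketch using java.nio; the process directory shown is an example, not a literal path:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.PosixFilePermissions;

public class TightenPermissions {
    public static void main(String[] args) throws Exception {
        // Example path only: owner read/write, no group or world access.
        Path keytab = Paths.get("/var/run/cloudera-scm-agent/process/example/yarn.keytab");
        Files.setPosixFilePermissions(keytab, PosixFilePermissions.fromString("rw-------"));
    }
}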

Products affected: Cloudera Manager

Releases affected: All releases of Cloudera Manager 4.0 and higher.

Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.

Date/time of detection: March 8, 2015

Severity (Low/Medium/High): High

Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster,
and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.

CVE: CVE-2015-2263

Immediate action required:

If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera
Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.

Delete all “yarn” and “HTTP” principals from KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.

Regenerate SSL keystores that you are using with the YARN service, using a new password.

ETA for resolution: Patches are available immediately with the release of this TSB.

Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.

For further updates on this issue see the corresponding Knowledge article:

Apache Hadoop Distributed Cache Vulnerability

The Distributed Cache Vulnerability allows a malicious cluster user to expose private files owned by the user running the YARN NodeManager process. The malicious user can
create a public tar archive containing a symbolic link to a local file on the host running the YARN NodeManager process.

If you are running Cloudera Manager and CDH 5.2.0, upgrade to Cloudera Manager and CDH 5.2.1

If you are running Cloudera Manager and CDH 5.1.0 through 5.1.3, upgrade to Cloudera Manager and CDH 5.1.4

If you are running Cloudera Manager and CDH 5.0.0 through 5.0.4, upgrade to Cloudera Manager and CDH 5.0.5

Some DataNode Admin Commands Do Not Check If Caller Is An HDFS Admin

Three HDFS admin commands—refreshNamenodes, deleteBlockPool, and shutdownDatanode—lack proper
privilege checks in Apache Hadoop 0.23.x prior to 0.23.11 and 2.x prior to 2.4.1, allowing arbitrary users to make DataNodes unnecessarily refresh their federated NameNode configs, delete inactive
block pools, or shut down. The shutdownDatanode command was first introduced in 2.4.0 and refreshNamenodes and deleteBlockPool were added in 0.23.0. The deleteBlockPool command does not actually remove any underlying data from affected DataNodes, so there is
no data loss possibility due to this vulnerability, although cluster operations can be severely disrupted.
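
These operations are reachable through the standard dfsadmin tooling, which is why the missing privilege check matters. A sketch of driving the shutdownDatanode subcommand programmatically, with the DataNode hostname and IPC port as placeholders:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.tools.DFSAdmin;
import org.apache.hadoop.util.ToolRunner;

public class ShutdownDatanodeExample {
    public static void main(String[] args) throws Exception {
        // Before the fix, this succeeded for any authenticated user;
        // after it, the caller must be an HDFS administrator.
        int rc = ToolRunner.run(new Configuration(), new DFSAdmin(),
                new String[] {"-shutdownDatanode", "dn1.example.com:50020"});
        System.exit(rc);
    }
}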

Products affected:

Hadoop HDFS

Releases affected:

CDH 5.0.0 and CDH 5.0.1

Users affected:

All users running an HDFS cluster configured with Kerberos security

Date/time of detection:

April 30, 2014

Severity: Medium

Impact: Through HDFS admin command-line tools, non-admin users can shut down DataNodes or force them to perform unnecessary operations.

CVE: CVE-2014-0229

Immediate action required: Upgrade to CDH 5.0.2 or higher.

JobHistory Server Does Not Enforce ACLs When Web Authentication is Enabled

The JobHistory Server does not enforce job ACLs when web authentication is enabled. This means that any user can see details of all jobs. This only affects users who are using MRv2/YARN
with HTTP authentication enabled.

Products affected:

Hadoop

Releases affected:

All versions of CDH 4.5.x up to 4.5.0

All versions of CDH 4.4.x up to 4.4.0

All versions of CDH 4.3.x up to 4.3.1

All versions of CDH 4.2.x up to 4.2.2

All versions of CDH 4.1.x up to 4.1.5

All versions of CDH 4.0.x

CDH 5.0.0 Beta 1

Users affected:

Users of YARN who have web authentication enabled.

Date/time of detection: October 14, 2013

Severity: Low
Note: YARN is an experimental feature in CDH 4; it is no longer experimental in CDH 5.

Impact: Low

CVE: CVE-2013-6446

Immediate action required:

None, if you are not using MRv2/YARN with HTTP authentication.

If you are using MRv2/YARN with HTTP authentication, upgrade to CDH 4.6.0 or CDH 5.0.0 Beta 2 or contact Cloudera for a patch.

ETA for resolution: Fixed in CDH 5.0.0 Beta 2 released on 2/10/2014 and CDH 4.6.0 released on 2/27/2014.

Addressed in release/refresh/patch: CDH 4.6.0 and CDH 5.0.0 Beta 2.

Verification:

This vulnerability affects the JobHistory Server Web Services; it does not affect the JobHistory Server Web UI.

Important:

The vulnerability is exposed only when the JobHistory Server HTTP endpoint is configured with an authentication filter (such as Hadoop's built-in AuthenticationFilter or a custom filter)
that populates HttpServletRequest.getRemoteUser(), which is then propagated to the JobHistory Server. This configuration is independent of whether the Hadoop cluster is
configured with Kerberos security.

To verify that the vulnerability has been fixed, request the details of a job owned by another user through the JobHistory Server web services:

If the vulnerability has been fixed, you should get an HTTP UNAUTHORIZED response; if the vulnerability has not been fixed, you should get XML output
with basic information about the job.
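
One way to run that check programmatically is to request another user's job through the JobHistory REST API and inspect the status code. A sketch, assuming the default web services port (19888) and a hypothetical job ID; the request must be made as an authenticated user not authorized by the job's ACLs, and SPNEGO negotiation is omitted for brevity:

import java.net.HttpURLConnection;
import java.net.URL;

public class JhsAclCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL("http://jhs.example.com:19888"
                + "/ws/v1/history/mapreduce/jobs/job_1400000000000_0001");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        int code = conn.getResponseCode();
        // 401 Unauthorized: ACLs are enforced (fixed);
        // 200 with a job summary body: vulnerable.
        System.out.println(code == HttpURLConnection.HTTP_UNAUTHORIZED
                ? "fixed" : "check response: " + code);
        conn.disconnect();
    }
}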

Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability

The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can
unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication
credentials are passed over RPC.

Products affected:

Hadoop

HBase

Releases affected:

All versions of CDH 4.3.x prior to 4.3.1

All versions of CDH 4.2.x prior to 4.2.2

All versions of CDH 4.1.x prior to 4.1.5

All versions of CDH 4.0.x

Users affected:

Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.

Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.

Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.

Date/time of detection: June 10th, 2013

Severity: Severe

Impact:

RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region
Servers may be intercepted by any user who can submit jobs to Hadoop.

Impact: Malicious clients may gain write access to data for which they have read-only permission, or gain read access to any data blocks whose IDs they can
determine.

Mechanism: When Hadoop security features are enabled, clients authenticate to DataNodes using BlockTokens issued by the NameNode to the client. The
DataNodes are able to verify the validity of a BlockToken, and will reject BlockTokens that were not issued by the NameNode. The DataNode determines whether or not it should check for BlockTokens
when it registers with the NameNode.

Due to a bug in the DataNode/NameNode registration process, a DataNode which registers more than once for the same block pool will conclude that it thereafter no longer needs to check
for BlockTokens sent by clients. That is, the client will continue to send BlockTokens as part of its communication with DataNodes, but the DataNodes will not check the validity of the tokens. A
DataNode will register more than once for the same block pool whenever the NameNode restarts, or when HA is enabled.

Immediate action required:

Understand the vulnerability introduced by restarting the NameNode, or enabling HA.

Upgrade to CDH 4.0.1 as soon as it becomes available.

Resolution: July 6, 2012

Addressed in release/refresh/patch: CDH 4.0.1. This release addresses the vulnerability identified by CVE-2012-3376.

Verification: On the NameNode run one of the following:

yum list hadoop-hdfs-namenode on RPM-based systems

dpkg -l | grep hadoop-hdfs-namenode on Debian-based systems

zypper info hadoop-hdfs-namenode for SLES11

On all DataNodes run one of the following:

yum list hadoop-hdfs-datanode on RPM-based systems

dpkg -l | grep hadoop-hdfs-datanode on Debian-based systems

zypper info hadoop-hdfs-datanode for SLES11

The reported version should be >= 2.0.0+91-1.cdh4.0.1

Several Authentication Token Types Use Secret Key of Insufficient Length

Products Affected: HDFS, MapReduce, YARN, Hive, HBase

Releases Affected: CDH 4.0.x, and all CDH3 versions from CDH3 Beta 3 through CDH3u5 refresh 1, if you use MapReduce, HDFS, HBase, or YARN.

Impact: Malicious users may crack the secret keys used to sign security tokens, granting access to modify data stored in HDFS, HBase, or Hive without
authorization. HDFS Transport Encryption may also be brute-forced.

Mechanism: This vulnerability impacts a piece of security infrastructure in Hadoop Common, which affects the security of authentication tokens used by HDFS,
MapReduce, YARN, HBase, and Hive.

Several components in Hadoop issue authentication tokens to clients in order to authenticate and authorize later access to a secured resource. These tokens consist of an identifier and a
signature generated using the well-known HMAC scheme. The HMAC algorithm is based on a secret key shared between multiple server-side components.

For example, the HDFS NameNode issues block access tokens, which authorize a client to access a particular block with either read or write access. These tokens are then verified using a
rotating secret key, which is shared between the NameNode and DataNodes. Similarly, MapReduce issues job-specific tokens, which allow reducer tasks to retrieve map output. HBase similarly issues
authentication tokens to MapReduce tasks, allowing those tasks to access HBase data. Hive uses the same token scheme to authenticate access from MapReduce tasks to the Hive metastore.

The HMAC scheme relies on a shared secret key unknown to the client. In currently released versions of Hadoop, this key was created with an insufficient length (20 bits), which allows an
attacker to obtain the secret key by brute force. This may allow an attacker to perform several actions without authorization, including accessing other users' data.
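
To see why 20 bits is insufficient: the keyspace holds only 2^20 = 1,048,576 candidate keys, so a single captured (identifier, signature) pair lets an attacker recover the key by exhaustive search. An illustrative sketch; Hadoop's actual token wire format and key encoding differ:

import java.util.Arrays;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class HmacBruteForceSketch {
    // Tries every 20-bit key against one observed token; 2^20 HMAC
    // computations complete in seconds on commodity hardware.
    static byte[] recoverKey(byte[] identifier, byte[] signature) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA1");
        for (int k = 0; k < (1 << 20); k++) {
            byte[] candidate = {(byte) (k >>> 16), (byte) (k >>> 8), (byte) k};
            mac.init(new SecretKeySpec(candidate, "HmacSHA1"));
            if (Arrays.equals(mac.doFinal(identifier), signature)) {
                return candidate;   // key found: attacker can forge tokens
            }
        }
        return null;
    }
}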

Immediate action required: If Security is enabled, upgrade to the latest CDH release.

ETA for resolution: As of 10/12/2012, this is patched in CDH4.1.0 and CDH3u5 refresh 2. Both are available now.

Impact: Vulnerability allows an authenticated malicious user to impersonate any other user on the cluster.

Immediate action required: Upgrade the hadoop-0.20-sbin package to version 0.20.2+923.197 or
higher on all TaskTrackers to address the vulnerability. Upgrading hadoop-0.20-sbin causes an upgrade of several related (but unchanged) hadoop packages. If you are using Cloudera
Manager version 3.7.3 or below, you must also upgrade to Cloudera Manager 3.7.4 or higher before you can successfully run jobs with Kerberos enabled after upgrading the hadoop-0.20-sbin package.

Resolution: 3/21/2012

Addressed in release/refresh/patch: hadoop-0.20-sbin package, version 0.20.2+923.197. This release addresses the vulnerability
identified by CVE-2012-1574.

Remediation verification: On all TaskTrackers run one of the following:

yum list hadoop-0.20-sbin on RPM-based systems

dpkg -l | grep hadoop-0.20-sbin on Debian-based systems

zypper info hadoop-0.20-sbin for SLES11

The reported version should be >= 0.20.2+923.197.

If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.

Apache HBase

This section lists the security bulletins that have been released for Apache HBase.

HBase Metadata in ZooKeeper Can Lack Proper Authorization Controls

In certain circumstances, HBase does not properly set up access control in ZooKeeper. As a result, any user can modify this metadata and perform attacks, including denial of service, or
cause data loss in a replica cluster. Clusters configured using Cloudera Manager are not vulnerable.

Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability

The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can
unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication
credentials are passed over RPC.

Products affected:

Hadoop

HBase

Releases affected:

All versions of CDH 4.3.x prior to 4.3.1

All versions of CDH 4.2.x prior to 4.2.2

All versions of CDH 4.1.x prior to 4.1.5

All versions of CDH 4.0.x

Users affected:

Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.

Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.

Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.

Date/time of detection: June 10th, 2013

Severity: Severe

Impact:

RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region
Servers may be intercepted by any user who can submit jobs to Hadoop.

Apache Hive

"Apache Hive (JDBC + HiveServer2) implements SSL for plain TCP and HTTP connections (it supports both transport modes). While validating the server's certificate during
the connection setup, the client doesn't seem to be verifying the common name attribute of the certificate. In this way, if a JDBC client sends an SSL request to server abc.example.com, and the
server responds with a valid certificate (certified by CA) but issued to xyz.example.com, the client will accept that as a valid certificate and the SSL handshake will go through."

This means that it would be possible to set up a man-in-the-middle attack to intercept all SSL-protected JDBC communication.

CDH Hive users have the option of deploying either the Apache Hive JDBC driver or the Cloudera Hive JDBC driver that is distributed by Cloudera for use with their JDBC applications.
Traditionally, Cloudera has strongly recommended use of the Cloudera Hive JDBC driver, and offers limited support for the Apache Hive JDBC driver. The JDBC jars in the
CLASSPATH environment variable can be examined to determine which JDBC driver is in use. If hive-jdbc-1.1.0-cdh<CDH_VERSION>.jar is included in the CLASSPATH, the Apache JDBC driver is being used. If HiveJDBC4.jar or HiveJDBC41.jar is in the CLASSPATH, the Cloudera Hive JDBC driver is being used.
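
For reference, a connection made through the Apache Hive JDBC driver looks like the following sketch, with host, port, and truststore values as placeholders. After the fix, the driver rejects server certificates whose common name does not match the requested host:

import java.sql.Connection;
import java.sql.DriverManager;

public class HiveSslConnect {
    public static Connection open() throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");  // Apache driver
        // ssl=true plus the truststore parameters enable TLS on the connection.
        return DriverManager.getConnection(
                "jdbc:hive2://hs2.example.com:10000/default;ssl=true;"
                        + "sslTrustStore=/etc/security/truststore.jks;"
                        + "trustStorePassword=changeit",
                "username", "password");
    }
}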

JDBC drivers can also be used in an embedded mode. For example, when connecting to HiveServer2 by way of tools such as Beeline, the JDBC Client is invoked internally over the Thrift API.
The JDBC driver in use by Beeline can also be determined by examining the driver version information printed after the connection is established.

For Apache JDBC drivers with SSL enabled: You can switch to the Cloudera Hive JDBC driver. Note that the Cloudera Hive JDBC driver only displays query
results and skips displaying informational messages, such as those logged by MapReduce jobs invoked as part of executing the JDBC command.

For non-Beeline clients (including third-party tools or applications): If Apache Hive JDBC drivers are being used, switch to Cloudera JDBC drivers (and use externally signed CA certs
as always recommended for production use).

For Beeline (or Beeline-based clients, such as Oozie): Update Beeline’s embedded Apache JDBC driver to the Cloudera JDBC driver as shown above. Alternatively, if these JDBC-based clients are
invoked within a CDH cluster, upgrade the cluster to a release where the issue has been addressed.

Addressed in release/refresh/patch: CDH 5.11.1 and later

For the latest update on this issue, see the corresponding Knowledge article:

Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry

Sentry does not block the execution of Hive built-in functions “reflect”, “reflect2”, and “java_method” by default in some CDH versions. These functions allow the execution of arbitrary
user code, which is a security issue.

Hive may allow a user to authenticate without entering a password, depending on the order in which classes are loaded.

Specifically, Hive's SaslPlainServerFactory checks passwords, but the same class provided in Hadoop does not. Therefore, if the Hadoop class is loaded first, users can authenticate with
HiveServer2 without specifying the password.

Products affected: Hive

Releases affected:

CDH 5.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5

CDH 5.1, 5.1.2, 5.1.3, 5.1.4

CDH 5.2, 5.2.1, 5.2.3, 5.2.4

CDH 5.3, 5.3.1, 5.3.2

CDH 5.4.1, 5.4.2, 5.4.3

Note: CDH 5.4.0 is not affected by this issue.

Users affected: All users using Hive with LDAP authentication.

Date/time of detection: March 11, 2015

Severity (Low/Medium/High): High

Impact: A malicious user may be able to authenticate with HiveServer2 without specifying a password.

Impact: An attacker can leverage this issue to harvest valid user accounts and attempt to use the accounts in brute-force attacks.

CVE: CVE-2016-4947

Immediate action required: Upgrade to any of the following releases, which
resolve this issue.

Addressed in release/refresh/patch:

CDH 5.8.3 and higher

CDH 5.9.1 and higher

CDH 5.10.0 and higher

Hue

Hue Document Privilege Escalation

A user with read-only access to a document in Hue can grant themselves write access to that document and change that document’s privileges for other users. If the document is a Hive,
Impala, or Oozie job, the user can inject arbitrary code that runs with the permissions of the next user who runs the job.

Apache Impala

Impala Statestore exposes plaintext data with SSL/TLS enabled

During a security analysis, Cloudera found that despite TLS being enabled for “internal” Impala ports, the Statestore Thrift port did not actually use TLS. This gap would allow an
adversary with network access to eavesdrop on, and potentially modify, the packets going to and coming from that port.

A malicious server which impersonates an Impala service (either Impala daemon, Catalog Server or Statestore) can cause a client (Impala daemon or Statestore) to skip its authentication
checks when Kerberos is enabled. That malicious server may then intercept sensitive data intended for the Impala service.

Products affected: Impala

Releases affected:

CDH 5.7 and lower

CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3, 5.8.4

CDH 5.9.0, 5.9.1

CDH 5.10.0

Users affected: Deployments that use Kerberos, but not TLS, for authentication between Impala daemons. Deployments that use TLS to secure communication
between services are not affected by this issue.

Read Access to Impala Views in queries with WHERE-clause Subqueries

Impala bypasses Sentry authorization for views if the query or the view itself contains a subquery in any WHERE clause. This gives read access to the views to any user that would
otherwise have insufficient privileges.

The underlying base tables of views are unaffected. Queries that do not have subqueries in the WHERE clause are unaffected (unless the view itself contains such a subquery).

Other operations, like accessing the view definition or altering the view, are unaffected.

Products affected: Impala

Releases affected:

CDH 5.2.0 and higher

CDH 5.3.0 and higher

CDH 5.4.0 and higher

CDH 5.5.0 and higher

CDH 5.6.0, 5.6.1

CDH 5.7.0, 5.7.1, 5.7.2

CDH 5.8.0

Users affected: Users who run Impala + Sentry and use views

Date/time of detection: July 26, 2016

Severity (Low/Medium/High): High

Impact: Users can bypass Sentry authorization for Impala views.

CVE: CVE-2016-6605

Immediate action required: Upgrade to a CDH version containing the fix.

Impala issued REVOKE ALL ON SERVER does not revoke all privileges

For Impala users that use Sentry for authorization, issuing a REVOKE ALL ON SERVER FROM <ROLE> statement does not remove all server-level privileges from the <ROLE>.
Specifically, Sentry fails to revoke privileges that were issued to <ROLE> through a GRANT ALL ON SERVER TO <ROLE> statement. All other privileges are revoked, but <ROLE> still has
ALL privileges at SERVER scope after the REVOKE ALL ON SERVER statement has been executed. The privileges are shown in the output of a SHOW GRANT statement.

Products affected: Impala, Sentry

Releases affected:

CDH 5.5.0, CDH 5.5.1, CDH 5.5.2, CDH 5.5.4

CDH 5.6.0, CDH 5.6.1

CDH 5.7.0

Users affected: Customers who use Sentry authorization in Impala

Date/time of detection: April 25, 2016

Severity (Low/Medium/High): Medium

Impact: Inability to revoke ALL SERVER privileges from a specific role using Impala if they have been granted through a GRANT ALL SERVER statement.

CVE: CVE-2016-4572

Immediate action required: If the affected role has ALL privileges on SERVER, you can remove these privileges by dropping and re-creating the role.
Alternatively, upgrade to 5.7.1, or 5.8.0 or higher.

In an Impala deployment secured with Kerberos, a malicious authenticated user can create a program that bypasses Impala and Sentry authorization mechanisms to issue internal API calls
directly. That user can then query tables to which they should not have access, or alter table metadata.

Products affected: Impala

Releases affected: All versions of CDH 5, except for those indicated in the ‘Addressed in release/refresh/patch’ section below.

Cloudera Manager

Sensitive data of processes managed by Cloudera Manager is not secured by file permissions

Impact: Sensitive data (such as passwords) might be exposed to users with direct access to cluster hosts due to overly-permissive local file system
permissions for certain files created by Cloudera Manager.

The password is also visible in the Cloudera Manager Admin Console in the configuration files for the Spark History Server process.

Local Script Injection Vulnerability In Cloudera Manager

There is a script injection vulnerability in Cloudera Manager’s help search box. The user of Cloudera Manager can enter a script but there is no way for an attacker to inject a script
externally. Furthermore, the script entered into the search box has to actually return valid search results for the script to execute.

Cross Site Scripting (XSS) Vulnerability in Cloudera Manager

Several pages in the Cloudera Manager UI are vulnerable to an XSS attack.

Products affected: Cloudera Manager

Releases affected: All versions of Cloudera Manager 5 except for those indicated in the ‘Addressed in release/refresh/patch’ section below.

Users affected: All customers who use Cloudera Manager.

Date/time of detection: May 19, 2016

Detected by: Solucom Advisory

Severity (Low/Medium/High): High

Impact: An XSS vulnerability can be used by an attacker to perform malicious actions. One probable form of attack is to steal the credentials for a victim's
Cloudera Manager account.

CVE: CVE-2016-4948

Immediate action required: Upgrade Cloudera Manager to version 5.7.2 or higher, or to any 5.8.x release.

Addressed in release/refresh/patch: Cloudera Manager 5.7.2 and higher, and 5.8.x.

Sensitive Data Exposed in Plain-Text Readable Files

Cloudera Manager Agent stores configuration information in various configuration files that are world-readable. Some of this configuration information may involve sensitive user data,
including credentials values used for authentication with other services. These files are located in /var/run/cloudera-scm-agent/supervisor/include on every host.
Cloudera Manager passes information such as credentials to Hadoop processes it manages via environment variables, which are written in configuration files in this directory.

Additionally, the response from Cloudera Manager Server to heartbeat messages sent by the Cloudera Manager Agent is stored in a world-readable file (/var/lib/cloudera-scm-agent/response.avro) on every host. This file may contain sensitive data.

These files and directories have been restricted to being readable only by the user running Cloudera Manager Agent, which by default is root.

Sensitive Information in Cloudera Manager Diagnostic Support Bundles

Cloudera Manager is designed to transmit certain diagnostic data (or "bundles") to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and
address technical issues for our customers. Cloudera internally discovered a potential vulnerability in this feature, which could cause any sensitive data stored as "advanced configuration snippets
(ACS)" (formerly called "safety valves") to be included in diagnostic bundles and transmitted to Cloudera. Notwithstanding any possible transmission, such sensitive data is not used by Cloudera for
any purpose.

Cloudera has taken the following actions:

Modified Cloudera Manager so that it no longer transmits advanced configuration snippets containing the sensitive data.

Cloudera strives to follow and also help establish best practices for the protection of customer information. In this effort, we continually review and improve our security practices,
infrastructure, and data handling policies.

When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes
the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).

Global read permissions must be removed on the NodeManager’s security-related files.

Products affected: Cloudera Manager

Releases affected: All releases of Cloudera Manager 4.0 and higher.

Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.

Date/time of detection: March 8, 2015

Severity (Low/Medium/High): High

Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster,
and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.

CVE: CVE-2015-2263

Immediate action required:

If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera
Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.

Delete all “yarn” and “HTTP” principals from KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.

Regenerate SSL keystores that you are using with the YARN service, using a new password.

ETA for resolution: Patches are available immediately with the release of this TSB.

Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.

For further updates on this issue see the corresponding Knowledge article:

Cloudera Manager exposes sensitive data

In the Cloudera Manager 5.2 release, the LDAP bind password was erroneously marked such that it would be written to the world-readable files in /etc/hadoop, in addition to the more
private files in /var/run. Thus, any user on any host of a Cloudera Manager managed cluster could read the LDAP bind password.

The fix to this issue removes the LDAP bind password from the files in /etc/hadoop; it is only written to configuration files in /var/run. Those files are owned by and only readable by
the appropriate service.

Cloudera Manager writes configuration parameters to several locations. Each service gets every parameter that it requires in a directory in /var/run, and the files in those directories
are not world-readable. Clients (for example, the “hdfs” command) obtain their configuration parameters from files in /etc/hadoop. The files in /etc/hadoop are world-readable. Cloudera Manager keeps
track of where each configuration parameter is to be written so as to expose each parameter only in the location where it is required.

Sensitive configuration values exposed in Cloudera Manager

Certain configuration values that are stored in Cloudera Manager are considered "sensitive", such as database passwords. These configuration values are expected to be inaccessible to
non-admin users, and this is enforced in the Cloudera Manager Admin Console. However, these configuration values are not redacted when reading them through the API, possibly making them accessible to
users who should not have such access.

For CM 3.7.x (Enterprise Edition), edit the configuration "Minimum user ID for job submission" to a number higher than any UIDs on the system. 65535 is the largest value that Cloudera
Manager will accept, and is typically sufficient. Restart the MapReduce service. To find the current maximum UID on your system, run

For CM 3.7.x Free Edition, remove the file /usr/lib/hadoop-0.20/sbin/Linux-amd64-64/task-controller. This file is part of the hadoop-0.20-sbin package and is re-installed by upgrades.

For SCM 3.5, if the cluster has been run in both secure and non-secure configurations, remove /etc/hadoop/conf/taskcontroller.cfg from all TaskTrackers.
Repeat this in the future if you reconfigure the cluster from a Kerberized to a non-Kerberized configuration.

Resolution: Mar 27, 2012

Addressed in release/refresh/patch: Cloudera Manager 3.7.5

Verification: Verify that, in non-secure clusters, /etc/hadoop/conf/taskcontroller.cfg is unconfigured on all TaskTrackers.
(A file with only lines starting with # is unconfigured.)

If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.

Cloudera Navigator

Cloudera Navigator Vulnerable to the POODLE Attack

Cloudera Navigator 2.2.0 through 2.2.3, 2.3.0, and 2.3.1 include SSL/TLS support; however, SSLv3 protocol support, which is vulnerable to the POODLE (CVE-2014-3566) attack, was erroneously not removed.

This vulnerability affects only those installations of Cloudera Navigator that are configured to use SSL/TLS.

Products affected: Cloudera Navigator

Releases affected:

Cloudera Navigator    Corresponding Cloudera Manager

2.2.0                 5.3.0
2.2.1                 5.3.1
2.2.2                 5.3.2
2.2.3                 5.3.3
2.3.0                 5.4.0
2.3.1                 5.4.1

Users affected: All web users and API clients of Cloudera Navigator when SSL/TLS is enabled.

Please note: if you are upgrading from Cloudera Navigator 2.2.x to 2.3.3 or higher (that is, upgrading from Cloudera Manager 5.3.x to 5.4.3 or higher) and
are impacted by this issue, you must remove the Advanced Configuration Snippet (safety valve) SSL settings and reconfigure SSL using the new configuration, as specified at:

Cloudera Navigator Key Trustee

Key Trustee Server Passive Not Storing Keys Synchronously

Under normal operations, the active Key Trustee Server rejects key creation requests when a passive server is down. However, due to the lack of synchronous replication on the active server,
the following scenario can occur, causing key loss:

The passive server goes down.

An encryption zone is created, generating a new key that is stored only on the active server.

Cloudera Search

The solrconfig.xml.secure sample configuration provided with CDH, if used to create solrconfig.xml, does not enforce Sentry authorization on the request URI /update/json/docs
because it is missing a necessary attribute.

Products affected: Solr (if Sentry enabled)

Releases affected:

CDH 5.8 and lower

CDH 5.9.2 and lower

CDH 5.10.1 and lower

CDH 5.11.1 and lower

Users affected: Those who are using Sentry authorization with Cloudera Search and who have used the provided sample configuration and have not specified the
below attributes in their solrconfig.xml file.

After updating the configuration in ZooKeeper, the collections must be reloaded.

Addressed in release/refresh/patch: The following releases will contain the fixed sample configuration file:

CDH 5.9.3 and higher

CDH 5.10.2 and higher

CDH 5.11.2 and higher

CDH 5.12.0 and higher

Upgrading will only correct the sample configuration file. The fix mentioned above will still need to be applied on the affected cluster.

Apache Solr ReplicationHandler Path Traversal Attack

When using the Index Replication feature, Solr nodes can pull index files from a master/leader node using an HTTP API that accepts a file name. However, Solr did not validate the file
name, so it was possible to craft a special request involving path traversal, exposing any file readable by the Solr server process. Solr servers using Kerberos authentication are at less
risk, since only authenticated users can gain direct HTTP access.

Solr RealTimeGet queries with the id or ids parameters are not checked by Sentry document-level security in versions prior
to CDH 5.7.0. The id or ids parameters must be exact matches for document ids (wild-carding is not supported), and the document ids are not
otherwise visible to users who are denied access by document-level security. However, a user with internal knowledge of the document id structure, or who is able to guess document ids, can
access unauthorized documents. This issue is documented in SENTRY-989.

Apache Sentry

The solrconfig.xml.secure sample configuration provided with CDH, if used to create solrconfig.xml, does not enforce Sentry authorization on the request URI /update/json/docs
because it is missing a necessary attribute.

Products affected: Solr (if Sentry enabled)

Releases affected:

CDH 5.8 and lower

CDH 5.9.2 and lower

CDH 5.10.1 and lower

CDH 5.11.1 and lower

Users affected: Those who are using Sentry authorization with Cloudera Search and who have used the provided sample configuration and have not specified the
below attributes in their solrconfig.xml file.

After updating the configuration in ZooKeeper, the collections must be reloaded.

Addressed in release/refresh/patch: The following releases will contain the fixed sample configuration file:

CDH 5.9.3 and higher

CDH 5.10.2 and higher

CDH 5.11.2 and higher

CDH 5.12.0 and higher

Upgrading will only correct the sample configuration file. The fix mentioned above will still need to be applied on the affected cluster.

Impala issued REVOKE ALL ON SERVER does not revoke all privileges

For Impala users that use Sentry for authorization, issuing a REVOKE ALL ON SERVER FROM <ROLE> statement does not remove all server-level privileges from the <ROLE>.
Specifically, Sentry fails to revoke privileges that were issued to <ROLE> through a GRANT ALL ON SERVER TO <ROLE> statement. All other privileges are revoked, but <ROLE> still has
ALL privileges at SERVER scope after the REVOKE ALL ON SERVER statement has been executed. The privileges are shown in the output of a SHOW GRANT statement.

Products affected: Impala, Sentry

Releases affected:

CDH 5.5.0, CDH 5.5.1, CDH 5.5.2, CDH 5.5.4

CDH 5.6.0, CDH 5.6.1

CDH 5.7.0

Users affected: Customers who use Sentry authorization in Impala

Date/time of detection: April 25, 2016

Severity (Low/Medium/High): Medium

Impact: Inability to revoke ALL SERVER privileges from a specific role using Impala if they have been granted through a GRANT ALL SERVER statement.

CVE: CVE-2016-4572

Immediate action required: If the affected role has ALL privileges on SERVER, you can remove these privileges by dropping and re-creating the role.
Alternatively, upgrade to 5.7.1, or 5.8.0 or higher.

Addressed in release/refresh/patch: CDH 5.7.1, CDH 5.8.0 and higher.

Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry

Sentry does not block the execution of Hive built-in functions “reflect”, “reflect2”, and “java_method” by default in some CDH versions. These functions allow the execution of arbitrary
user code, which is a security issue.

Apache Spark

Unsafe deserialization in Apache Spark launcher API

In Apache Spark 1.6.0 until 2.1.1, the launcher API performs unsafe deserialization of data received by its socket. This makes applications launched programmatically using the
SparkLauncher#startApplication() API potentially vulnerable to arbitrary code execution by an attacker with access to any user account on the local machine. It does not affect apps run by
spark-submit or spark-shell. The attacker would be able to execute code as the user that ran the Spark application. Users are encouraged to update to Spark version 2.2.0 or later.
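
For clarity, the affected API is the programmatic launcher shown in this sketch; the application path, class name, and master are placeholders. Applications started this way on Spark versions before 2.2.0 open the vulnerable local socket:

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class LauncherExample {
    public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
                .setAppResource("/opt/jobs/my-app.jar")
                .setMainClass("com.example.MyApp")
                .setMaster("yarn")
                // startApplication() is the call that listens on the
                // local socket affected by this issue.
                .startApplication();
        System.out.println("state: " + handle.getState());
    }
}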

Cloudera Distribution of Apache Spark 2

Unsafe deserialization in Apache Spark launcher API

In Apache Spark 1.6.0 until 2.1.1, the launcher API performs unsafe deserialization of data received by its socket. This makes applications launched programmatically using the
SparkLauncher#startApplication() API potentially vulnerable to arbitrary code execution by an attacker with access to any user account on the local machine. It does not affect apps run by
spark-submit or spark-shell. The attacker would be able to execute code as the user that ran the Spark application. Users are encouraged to update to Spark version 2.2.0 or later.

Apache ZooKeeper

Impact: The ZooKeeper C client shells cli_st and cli_mt have a buffer overflow vulnerability
associated with parsing of the input command when using the cmd:<cmd> batch-mode syntax. If the command string exceeds 1024 characters, a buffer overflow occurs.
There is no known compromise that takes advantage of this vulnerability, and if security is enabled, the attacker is limited by client-level security constraints.

CVE: CVE-2016-5017

Immediate action required: Use the fully featured/supported Java CLI rather than the C CLI. This can be accomplished by executing the zookeeper-client command on hosts running the ZooKeeper server role.