I am not sure about the CM warning, but in principle you should only run an odd number of ZooKeeper instances, e.g. 3, 5, or even 7.

The RetryInvocationHandler warning should be unrelated to the ZooKeeper issue, though. Instead, it probably means that the first namenode listed in the configuration is the standby NN. If you manually fail over, I don't think you would see the warning again.

You might also want to enable command-line debug logs with the following command:

export HADOOP_ROOT_LOGGER=DEBUG,console
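For example, in a shell on the client host (assuming a standard CDH install where the hdfs command is on the PATH), the debug output will show which namenode the client contacts first:

```shell
# Enable DEBUG logging for Hadoop CLI commands in this shell session only
export HADOOP_ROOT_LOGGER=DEBUG,console

# Any subsequent command now prints DEBUG output, including which
# namenode the client tries first and any RetryInvocationHandler retries
hdfs dfs -ls /
```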

Yes, you are correct about having an odd number of ZK nodes. CM also warned me about this, which is part of the reason I wanted to reduce the ensemble to 5 ZK nodes.

Regarding the namenode in standby, that is a very relevant topic for me, because I just finished working through a difficult issue where one of my namenodes failed and I needed to switch to the standby node. You can see the forum post here. Everything was working well after this, but I suppose it is possible that I did something wrong.

I have attached a screenshot of CM showing the active and standby namenodes. Are you suggesting that I should do a manual failover?
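If a manual failover does turn out to be the next step, one way to do it from the command line is sketched below (the namenode IDs nn1/nn2 are placeholders; the real IDs come from dfs.ha.namenodes.&lt;nameservice&gt; in hdfs-site.xml, and on a CM-managed cluster the HDFS service's own failover action in the UI is usually the easier route):

```shell
# Request a failover so nn2 becomes active (nn1/nn2 are illustrative IDs).
# Note: if automatic failover (ZKFC) is enabled, haadmin will refuse a
# plain failover and suggest the CM UI / forced options instead.
hdfs haadmin -failover nn1 nn2

# Verify the resulting HA state of each namenode
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```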

10.0.0.157 is definitely gone. The namenode role was removed from that host in the course of resolving this issue. In its place, I added a new host to the cluster and made it the namenode (10.0.0.246).

Here is the screenshot again. Let me know if you still cannot see it. It is visible from this link as well.

Since it is starting to look like a problem with the configuration files on the host, I am wondering if 'Deploy Client Configuration' could be useful. (That is the option in CM under the 'Actions' dropdown.)

Also in that dropdown is the option 'View Client Configuration URLs'. Selecting HDFS from the resulting menu showed this hdfs-site.xml (below). It contains the correct IP addresses for the current namenodes.
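For reference, the HA-related properties worth double-checking in that file look roughly like the fragment below. The nameservice and namenode IDs here are placeholders, not values taken from this cluster; the actual names come from CM:

```xml
<!-- Illustrative hdfs-site.xml HA fragment; names and addresses are placeholders -->
<property>
  <name>dfs.nameservices</name>
  <value>nameservice1</value>
</property>
<property>
  <name>dfs.ha.namenodes.nameservice1</name>
  <value>namenode1,namenode2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode1</name>
  <value>active-nn-host:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.nameservice1.namenode2</name>
  <value>standby-nn-host:8020</value>
</property>
```

If any rpc-address entry still points at a decommissioned host, that would explain the client-side retries.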

@epowell, yes, you are correct. Client configurations are managed separately from the configurations the servers use when CDH is managed by Cloudera Manager. Run Deploy Client Configuration for your cluster to make sure the files under /etc/hadoop/conf contain the latest configuration items. Once that is done, you should be able to run commands just fine.
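After deploying, a quick way to confirm the client actually picked up the new settings (assuming the standard CDH client config location under /etc/hadoop/conf):

```shell
# Check that the deployed client config references the current namenode hosts
grep -A1 "rpc-address" /etc/hadoop/conf/hdfs-site.xml

# Ask the Hadoop client which namenodes it resolves from that config
hdfs getconf -namenodes
```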