I assume this issue is the same as this one: https://community.hortonworks.com/questions/248245/cloudbreak-stuck-in-creating-aws-hdf-cluster.html I suggest handling it in that question rather than continuing the conversation in multiple places.

And have you checked that every port in the security group is configured correctly? Check this doc to make sure: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.1/security/content/cb_default-cluster-security-groups.html

Anyway, you need to wait until cluster creation finishes (or fails with an error). Cluster creation time can vary; it depends on the speed of communication between Cloudbreak and the cluster, on the instance details (CPU, memory), etc.

Hi @kirk sullivan! Which version of Cloudbreak are you using? For now you can try adding the missing policy (iam:CreateServiceLinkedRole) to the role (CredentialRole/hadoop-provisioning) used for cluster provisioning. If you provide the version of Cloudbreak, I can check the source code and the documentation and dig deeper into this issue.

Hi @Kartheek Kopparapu! Has the cluster provisioning succeeded? It takes time for the nodes to start, and during that time Cloudbreak polls the availability of each node, which can cause temporary error messages.

Hi @navdeep agarwal Regarding the error you mentioned, it seems you are trying to use the same database for both clusters. I think Ambari needs a separate, empty database on the existing database instance (it appears to be an AWS RDS instance) for the new cluster. Have you tried using a different database for your HDP 3.1 cluster?

Hi @Tianyi Chen! Could you check which images were not accessible from the generated docker-compose.yml? Docker Hub shows only the most recently created/updated images on the tags page (https://github.com/docker/hub-feedback/issues/1416), but when I run docker pull hortonworks/cloudbreak:2.7.2 Docker pulls it successfully, so the image exists.

Introduction
Cloudbreak uses SaltStack to manage the nodes of the cluster, install packages, change configuration files, and execute recipes.
Motivation
Cloudbreak gives users the ability to manage their virtual machines. When a user SSHes into a machine and updates or installs packages with the system package manager, SaltStack can fail with version-incompatibility issues. This scenario led us to create a solution that ensures SaltStack behaves correctly regardless of what users do with the system package manager.
SaltStack in virtualenv
Since version 2.8, Cloudbreak provides official images for every cloud provider that contain SaltStack in a separate virtual environment. This prevents the version incompatibilities mentioned above. Virtual machines created from the official images set up a separate environment for SaltStack and install it using that environment's Python package manager.
Virtualenv
Virtualenv is a widely used tool for creating isolated Python environments. When you create an environment, virtualenv copies the system Python binaries and libraries and adds additional binaries. You can also specify a different Python version.
You have to activate the environment to use its Python:
source /path/to/environment/bin/activate
With activation, virtualenv adds the binary directory of your environment to the PATH system variable.
After your work is finished, you have to deactivate the environment.
deactivate
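The create/activate/deactivate cycle above can be sketched as a complete session. This example uses Python's built-in venv module, which behaves like virtualenv for this purpose; the path /tmp/demo_env is just an illustrative choice, not something the Cloudbreak images use:

```shell
# Create an isolated Python environment (venv is the stdlib counterpart of virtualenv).
python3 -m venv /tmp/demo_env

# Activate it: this prepends /tmp/demo_env/bin to PATH for the current session.
. /tmp/demo_env/bin/activate

# "python" now resolves to the environment's own interpreter.
command -v python

# Leave the environment; PATH is restored.
deactivate
```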
Using SaltStack
SaltStack can be used on the virtual machines as in previous versions; the only difference is that you need to activate the environment before executing any Salt commands.
Without activation you will get an error message like "salt: command not found".
You need to switch to the root user to activate the environment and execute Salt commands in the same session.
Official images from the Cloudbreak team define a helper that activates the environment without requiring knowledge of the environment's location:
source activate_salt_env
Without this helper, you can usually find the environment directory under /opt/salt_{salt_version} and activate it from there:
source /opt/salt_{salt_version}/bin/activate
After activation you are able to execute Salt commands:
salt '*' test.ping
After you have finished your work with SaltStack, you have to deactivate the environment. This is an important step: as long as the environment is active, the current session uses its Python, and forgetting to deactivate it can lead to Python issues in that session.
deactivate
SaltStack version
You can check the SaltStack version using the Python package manager of the separate environment.
After activation, you can list the installed packages:
pip list
If you want to upgrade SaltStack, you can also use the Python package manager of the separate environment:
pip install salt=={desired_salt_version} --upgrade
Please be aware that during an upgrade of SaltStack, the version of SaltStack (and its dependencies, such as ZeroMQ) has to match on every instance of the cluster.
Also be careful about modifying the directory of the environment: every Salt-related system service relies on that directory, so those services have to be updated if the directory changes.
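As a quick consistency check after an upgrade, you can ask every minion for its Salt version and count the distinct answers; a healthy cluster prints 1. This is a sketch assuming access to the Salt master and the txt outputter, which prints one "minion: version" pair per line:

```shell
# Ask every minion for its Salt version, keep only the version column,
# and count the distinct values; more than one means a version mismatch.
salt '*' test.version --out=txt | awk -F': ' '{print $2}' | sort -u | wc -l
```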

Hi @Jakub Igla, We're investigating this issue. Could you share which Ambari blueprint you are using? Cloudbreak doesn't manage these IDs; they appear to be generated randomly within a range. I suggest trying cluster creation again; sometimes the hive user gets an ID higher than 1000. Regards, Adam