3. RE: Zenoss deployment tool

I'm afraid I'm going to need some help as my use of Ansible has been fairly simple until now and this is more complex than that. Mainly, I've used it from Vagrant and that handles the inventory and SSH keys for me. I confess that I had hoped to plug what you have done into Vagrant, but the vagrant-google plug-in doesn't yet let me create more than one disk, which is annoying.

So I create the GCE instance using Terraform, which works perfectly, thank you.

Then I'm not sure how to proceed. Do I add the public IP address to the hosts file, or the instance name?

If I add the public IP address, SSH authentication fails.

If I use the instance name, name resolution fails.

Using gcloud compute ssh zenoss I can connect. But Ansible is apparently not using the gcloud SSH wrapper, and I can't figure out which SSH keys I need to be using or how to set that up.

And then a question, because I'm curious: I'm not sure why the playbook creates serviced.repo the way it does, since there is an Ansible yum_repository module that can handle that. Have you found yum_repository to have issues?

My apologies for asking such basic questions, you have done a lot of work and I can see that it will work if I can get past these issues.

4. RE: Zenoss deployment tool

>> but the vagrant-google plug-in doesn't yet let me create more than one disk, which is annoying.

As the documentation says, what's actually required is a spare partition, not an entire disk. So if Vagrant is able to create a spare partition, you will be able to configure it, for instance lvm_dev=/dev/sda4.

>> Then I'm not sure how to proceed.

About SSH keys: I suppose you have (RSA) SSH keys :) You have to upload your public key ~/.ssh/id_rsa.pub to GCE. This can be done in the gcloud web console:
1. Navigate to "Compute Engine"
2. Click "Metadata" in the list below
3. Upload your key on the "SSH Keys" tab

Your SSH public key has to be properly formatted: it must have your username after the key string.
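
As a concrete illustration of that format (the key material and the username "craig" below are made up), the comment field after the key string is what GCE treats as the login username, and a quick grep can sanity-check a line before pasting it into the Metadata tab:

```shell
# Hypothetical GCE-style public key line: key type, base64 key material,
# then the login username as the trailing comment field.
key='ssh-rsa AAAAB3NzaC1yc2EexampleKeyMaterial craig'

# Check: "ssh-rsa", a base64 blob, then a plausible username at the end.
if printf '%s\n' "$key" | grep -Eq '^ssh-rsa [A-Za-z0-9+/=]+ [a-z_][a-z0-9_-]*$'; then
  echo "key format OK"
else
  echo "key format BAD"
fi
```

A line that is missing the trailing username falls through to the BAD branch.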

Then you can create the instance and test the login: ssh <your username as in ssh key>@<IP>

If that works fine, you are all set ;). Destroy the created instance and:
1. Check out the latest version of ansible-serviced-zenoss from GitHub
2. In the playbook directory, run: ./test <your username>
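
To answer the earlier hosts-file question directly: Ansible can target the public IP, but the inventory has to tell it which user and private key to use. A sketch of an INI inventory entry, where the IP, user, and key path are placeholders:

```ini
# hosts — hypothetical entry for the GCE instance
zenoss ansible_host=203.0.113.10 ansible_user=craig ansible_ssh_private_key_file=~/.ssh/id_rsa
```

ansible_host, ansible_user, and ansible_ssh_private_key_file are standard Ansible connection variables, so name resolution and key selection no longer depend on gcloud.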

>> there is an Ansible yum_repository module that can handle that. Have you found yum_repository to have issues?

No, I just wanted a convenient way to provide arbitrary repository configuration: it makes it possible to easily configure any number of repositories, with any options, in one single place. If I used yum_repository, I would need more complex variable structures and would then have to work out how to pass them to the yum_repository module.
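
For comparison, a single repository done with the module would look roughly like this; the description and baseurl here are placeholders, not the real serviced repo settings:

```yaml
- name: Configure the serviced repository
  yum_repository:
    name: serviced
    description: Control Center repository (placeholder)
    baseurl: https://example.com/serviced/
    gpgcheck: yes
```

Templating one .repo file instead keeps N repositories with arbitrary options in a single variable, at the cost of bypassing the module.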

5. RE: Zenoss deployment tool

Oh! A spare partition, not a whole disk! Well, that's obvious now that you say it, and you even said that in the readme. Sorry, I took what your Terraform script did as a hard and fast requirement and didn't look further. That's embarrassing.

That explanation of the SSH keys should be what I need to make this work, and I just looked at the new test script; that makes it easier to see how you intended it to work.

I will run it as you have it for a start and figure out how to use it from Vagrant later. I need a dev Zenoss environment that I can use to work on some ZenPacks.

6. RE: Zenoss deployment tool

The problem? I'm not a Linux admin, and I had carefully named my RSA key files so that I knew what they were for, but didn't realise that SSH literally looks for an id_rsa file if you don't use the -i option to specify one. I had to create a .ssh/config and tell it to look at my file too.
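
For anyone else who hits this, a minimal ~/.ssh/config entry is enough to make both plain ssh and Ansible pick up a non-default key; the host alias, IP, user, and key name here are all placeholders:

```
Host zenoss-gce
    HostName 203.0.113.10
    User craig
    IdentityFile ~/.ssh/my-gce-key
```

Host, HostName, User, and IdentityFile are standard OpenSSH client options, so no -i flag is needed afterwards.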

And of course having non-standard key names breaks the "distribute ssh files and keys" task. Way to make the one mistake that dooms everything, Craig!

And now that I have it working, I can unpick it and make it work with Vagrant if I want to.

Thank you very much for your help, and even more for creating the playbooks.

9. RE: Zenoss deployment tool

Yes, well, I hadn't realised that Control Center used the OS users. Embarrassing.

Once I found that, thanks to another post in the forums, I created a user for that purpose, but I've switched to using the serviced account.

Then the Zenoss first start screens wouldn't display, which fixed itself after another rebuild.

I'm rebuilding it right now to make sure I have specified the password hash for the serviced account correctly, but before that I had, for the first time, a fully working Zenoss 6.2 environment built from scratch in Google Compute in about 30 minutes.

Your scripts work extremely well.

Thank you.

Do you have a donation page or similar where I can buy you a beer or something?
