I’m doing some work to make launching multiple systems more friendly to
human observation. My specific goal right now is to force more things to
go through the logger, and to only log warning or error (and not debug and
info) to the console. Info and above goes to a file for later perusal if
needed.

If I have Log.info defined, why is chef spamming stdout instead of using
the logger? Wouldn’t it make more sense to have one of the following two
options:

Be able to disable live_stream from the resource

Have live_stream asynchronously call the logger to print log lines

The pain point here is that most of the commands that I run from chef using
the execute resource are not verbose (tar, cp, etc.). However, the apt LWRP
uses execute for apt-get update, etc., which is incredibly verbose and which
therefore drowns out all useful info in a parallel launch. Any suggestions
on how to reduce this are appreciated. The option I think I’m going to have
to go with at the moment is to start monkey-patching from within client.rb,
but I’d like it if there were a clearer way to deal with this.

Thanks,

-Peter

I’m attempting to create a clone of the running chef server. Since it’s running in AWS, I created an EBS/AMI and re-launched it. Everything almost works, but when I attempt to do “knife cookbook upload mycookbook -V” I get the following error:

I suspect that “chef-client” is going to fail as well. As near as my rudimentary google search could ascertain, this appears related to rabbitmq’s DB. What’s the safe way to change chef hosts without regenerating all of my validation.pem and client.pem files? I have the original instance still running, so I can copy any files from the original if need be.

Dan is probably right about Rabbit. IIRC its configuration is tied to
its hostname so if you bring it up again with a new hostname, it will
ignore the config. Re-applying the configuration should sort it out.

The idea is that when you’re watching the output (stdout is a tty and Chef isn’t daemonized) and the log level is info or debug[0], the live stream is helpful to see what’s going on with long running commands. You can disable it for everything by piping output through cat.
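
The “pipe it through cat” trick works because the live-stream gate checks whether stdout is a terminal, and a pipe never is. A minimal Ruby illustration of that tty test (the mechanism being described, not Chef’s actual code):

```ruby
# A pipe endpoint is not a terminal, so a tty check on it returns false.
# This is why `chef-client | cat` suppresses the live stream: chef's stdout
# becomes the write end of a pipe instead of the terminal.
reader, writer = IO.pipe
pipe_is_tty = writer.tty?   # false for a pipe
writer.close
reader.close
puts pipe_is_tty
```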

But I don’t want to ignore everything. My assumption is that if you have a
logger, and logging levels, etc. you want to use them. What I want to
happen is that I can log info to a file, and warning to the console if I’m
working interactively (e.g. a bootstrap). What happens in this case is
that instead of using the logger, chef spits to stdout, defeating the
purpose of configuring logging.
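
The split being described here - info and above to a file, warn and above to the console - is straightforward to express with two plain Ruby Loggers and a small fan-out. This is an illustrative sketch of the desired behavior, not Chef’s configuration API; StringIO stands in for the real file and terminal so the sketch is self-contained:

```ruby
require 'logger'
require 'stringio'

# Two destinations with different thresholds: the "file" gets info and up,
# the "console" only warn and up.
file_io    = StringIO.new
console_io = StringIO.new

file_log    = Logger.new(file_io)
console_log = Logger.new(console_io)
file_log.level    = Logger::INFO
console_log.level = Logger::WARN

# Tiny fan-out standing in for a real multi-logger: every message goes to
# both loggers, and each logger applies its own severity filter.
log = ->(severity, msg) { [file_log, console_log].each { |l| l.add(severity, msg) } }

log.call(Logger::INFO, 'apt-get update output line')  # file only
log.call(Logger::WARN, 'node failed to converge')     # file and console
```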

What you consider spam other users may consider useful relevant
information accurately pertaining to a failure mode.

In some cases you will be asked to provide logs; oftentimes these are
grabbed from chef running on an interactive terminal after a
failure/crash. I don’t really see the benefit here – what are you
trying to do?

My use case is launching multiple systems (approx 8 right now). There are,
further, 2 sub-use-cases. The first is for automated deployment. This is
covered by notifications so I’m not concerned about failures here. The
second is developers bootstrapping, where they launch and are presented
with a matrix-like cascade of text. I want that text to be available to
investigate a failure. I also want the developer to know interactively
whether they should ^C the launch and kill the nodes because one has
failed. Some apps take 2.5 minutes to launch, and some take 15 minutes,
and I want to let them fail and investigate as fast as they can. The
solution I’ve come up with is to have messages at the info level sent to a
file, and to send warning to the console.

However, my use case may be a bit of a distraction from the core issue that
I’m trying to bring up:

Code like this (from chef’s lib/chef/provider/execute.rb, with similar code
in mixin/shell_out.rb, etc.) means that even if I configure chef to log at
debug to a file, and configure client.rb to log to a file (not via my
patching mechanism), my settings will be ignored and output will hit stdout.
I’m calling this spam because there’s no way to configure it to actually
log anywhere in particular - it depends on 3 undocumented factors (the
state of your tty, the running mode of chef at the moment, and whether
logging is set to log at info) - and then it bypasses logging anyway. Wat?
(https://www.destroyallsoftware.com/talks/wat, for your amusement).
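
The quoted snippet didn’t survive in the archive, but the gate being objected to boils down to a predicate over those three factors. A hypothetical distillation (the name and shape are mine, not verbatim Chef source):

```ruby
# All three undocumented factors must line up for the live stream to turn
# on - and note that the logger is consulted only for its level, never used
# as the output destination.
def live_stream_enabled?(stdout_tty:, daemonized:, log_level:)
  stdout_tty && !daemonized && [:debug, :info].include?(log_level)
end

puts live_stream_enabled?(stdout_tty: true, daemonized: false, log_level: :info)
```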

