how do you handle package updates with configuration management tools like puppet?

Installing new software with a configuration management tool is easy. Alongside the installation directive you can also make sure that your configuration files get deployed.
But when you update an existing package, how do you handle things like updated configuration files? I mean, you cannot exactly run "etc-update" by hand.
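For context, here is a minimal sketch of what I mean by an installation directive that also deploys configuration, in Puppet (the package name and file paths are just placeholders):

```puppet
# Hypothetical example: install a package and push out its config file.
package { 'nginx':
  ensure => installed,
}

file { '/etc/nginx/nginx.conf':
  ensure  => file,
  source  => 'puppet:///modules/nginx/nginx.conf',
  require => Package['nginx'],  # deploy the config only after the package exists
}
```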

CONFIG_PROTECT will make sure that any existing configuration won't be touched, but

how do you update the configuration?

is there a way to prevent the creation of ._cfg* files, or how do you get rid of them?

Or don't you do normal "updates" at all, and instead write your directives to install a specific version? Whenever you want to install an update, you would bump that directive. This would ensure that the cm tool always checks the configuration (but you would still accumulate ._cfg* files, wouldn't you?)...

If you're using a tool like puppet, then you're probably pushing out config files through puppet. You have the option of specifying that a package should be a specific version, the latest version, or merely installed without regard to version. I've seen a couple strategies.

Lock a package to a version in puppet, update it manually, change the config files as necessary, and translate those changes back to puppet. Not that useful for multi-machine deployments.

Have a testing box where you update a package and test out any configuration changes, then put those changes into puppet, where they will be pushed out to your machines.

Specify latest and hope for the best.

Either way, you'll probably want to use CONFIG_PROTECT_MASK to exclude the current contents of CONFIG_PROTECT.
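As a sketch, those three strategies map onto the `ensure` attribute of Puppet's package resource roughly like this (the package names are illustrative):

```puppet
# Strategy 1/2: pin to a specific version; bump this string when you
# are ready to roll out an update you have tested.
package { 'postfix':
  ensure => '3.8.4',
}

# Strategy 3: always track the newest available version.
# package { 'postfix':
#   ensure => latest,
# }

# Or merely require that it is installed, at whatever version is present:
# package { 'openssh':
#   ensure => installed,
# }
```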

Config files are always administered manually in my GNU/Linux world, but there is a standard configuration that can be used when you run etc-update manually and choose option -5.

etc-update is not an option, because it is interactive, isn't it? If you have more than 10 clients, you cannot run this command manually on every system.

gotyaoi wrote:

If you're using a tool like puppet, then you're probably pushing out config files through puppet. You have the option of specifying that a package should be a specific version, the latest version, or merely installed without regard to version. I've seen a couple strategies.

Lock a package to a version in puppet, update it manually, change the config files as necessary, and translate those changes back to puppet. Not that useful for multi-machine deployments.

Have a testing box where you update a package and test out any configuration changes, then put those changes into puppet, where they will be pushed out to your machines.

Specify latest and hope for the best.

Either way, you'll probably want to use CONFIG_PROTECT_MASK to exclude the current contents of CONFIG_PROTECT.

Wait, currently "/etc", for example, is in CONFIG_PROTECT, so any changes in /etc that cannot be automerged (depending on my etc-update/dispatch-conf settings) have to be merged with dispatch-conf/etc-update, right? So you are saying I should move /etc to CONFIG_PROTECT_MASK? That way, emerge would replace any existing configuration when updating a package, right? This could work if I (re-)deployed my configuration right after the update... but I am not sure whether it would be safer to go with the ._cfg* files and clean them out from time to time...

My thinking is that emerge may replace the config files, but if you've set up the ordering in puppet such that the service definition requires both the package definition and the config definition (and perhaps the config definition requires the package definition), puppet will always make sure the config file matches what you specified before the service is managed.
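A sketch of that ordering, with illustrative names (the resource titles and paths are just examples):

```puppet
package { 'apache2':
  ensure => installed,
}

file { '/etc/apache2/httpd.conf':
  ensure  => file,
  source  => 'puppet:///modules/apache2/httpd.conf',
  require => Package['apache2'],  # config depends on the package
}

service { 'apache2':
  ensure    => running,
  enable    => true,
  # service depends on both package and config...
  require   => [ Package['apache2'], File['/etc/apache2/httpd.conf'] ],
  # ...and restarts whenever puppet (re)writes the config file
  subscribe => File['/etc/apache2/httpd.conf'],
}
```

So even if an emerge run overwrites the file, the next puppet run puts your version back and bounces the service.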

On the other hand, you could just use a tidy resource in puppet to clean up the ._cfg* files.
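Something along these lines, assuming the ._cfg* files only ever end up under /etc on your boxes:

```puppet
# Recursively delete any ._cfg#### files that emerge has left behind.
tidy { '/etc':
  matches => '._cfg*',
  recurse => true,
}
```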