I have Berkshelf working now, so that is NOT the issue. Neither is this about the Berks workflow Seth Vargo blogged about. I just don’t think I’m using it properly, since it feels cumbersome the way I’m using it, so the problem must be the way I’m using it.

Step 1:
When I submit cookbooks and cookbook updates to my git server, I run a Jenkins job that fetches the HEAD (for now) and does a berks install and berks upload to my Berks API server. I use a Berksfile/metadata.rb pair that lists ALL my cookbooks; I have 50 cookbooks in these files. This generates a Berksfile.lock file.
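Concretely, the Jenkins build step for this boils down to something like the following sketch (the Berks API URL and cookbook names are placeholders, not my real ones):

```shell
# Step 1 build sketch: resolve ALL cookbooks and push them to the
# Berks API server. URL and cookbook names are illustrative.
cat > Berksfile <<'EOF'
source "https://berks-api.example.com"

cookbook "base"
cookbook "java"
# ... ~50 cookbook entries in total
EOF

berks install   # resolve the dependency graph, write Berksfile.lock
berks upload    # push every resolved cookbook version
```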

Step 2:
Now I want to fetch my cookbooks from the Berks API server and upload them to my Chef dev organization. I use a different set of Berksfile/metadata.rb files that ONLY contain the top-level role cookbooks, to take advantage of Berkshelf’s transitive cookbook dependency resolution. So I may only have 10 cookbooks in these files, and these 10 cookbooks, via Berkshelf dependency management, will eventually install/upload all 50 cookbooks to my Chef server. This process generates a Berksfile.lock file too.
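The Step 2 Berksfile is the same idea but much shorter, roughly (names illustrative):

```shell
# Step 2 sketch: only the ~10 top-level role cookbooks are listed;
# Berkshelf resolves the remaining ~40 through each cookbook's
# metadata.rb dependencies. Names are illustrative.
cat > Berksfile <<'EOF'
source "https://berks-api.example.com"

cookbook "role_webserver"
cookbook "role_database"
# ... ~10 top-level role cookbooks
EOF

berks install   # Berksfile.lock ends up listing all ~50 cookbooks
```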

Step 3:
Now I want to fetch my cookbooks from the Berks API server and upload them to my Chef QA organization. I use a different set of Berksfile/metadata.rb files that ONLY contain the top-level role cookbooks, to take advantage of Berkshelf’s transitive cookbook dependency resolution. So I may only have 10 cookbooks in these files, and these 10 cookbooks, via Berkshelf dependency management, will eventually install/upload all 50 cookbooks to my Chef server. This process generates a Berksfile.lock file too.
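So the only real difference between Steps 2 and 3 is which organization the upload targets, which can be switched with per-org Berkshelf config files, roughly like this (file names and URLs illustrative):

```shell
# Per-org Berkshelf configs (JSON), e.g. config-dev.json containing:
#   {"chef": {"chef_server_url":
#       "https://chef.example.com/organizations/dev", ... }}
berks upload -c config-dev.json   # Step 2: dev organization
berks upload -c config-qa.json    # Step 3: QA organization
```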

So it seems like I’m repeating the SAME process in all 3 steps, or at least in Steps 2 and 3, and I feel like I shouldn’t be. Am I missing the point of the generated Berksfile.lock file?

I would suggest using different environments for, well, environments, not organisations.

You can use cookbook version locks on environments to control updates to cookbooks. These can be applied easily using either A) berks apply, B) Berkflow, or, newer and integrated with Chef, C) Policyfiles.
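For example, with option A the whole release flow can look like this (environment names illustrative):

```shell
# Build once into a single org, then gate releases per environment.
berks install    # produce/refresh Berksfile.lock
berks upload     # one upload, one org

# Pin the dev environment to the versions in Berksfile.lock; qa keeps
# its existing locks until you decide to promote.
berks apply dev

# Later, after validation, promote the same locked set:
berks apply qa
```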

That way you have one build process to upload your cookbook versions and
then another process to control the release of those to environments.

The way you’re doing it now is just work duplication; there’s no need to have orgs split like that.

Chef environments and roles are problematic (they have been for us) since they are not versioned like cookbooks on the Chef server. Therefore we’ve refrained from using roles completely, and use environments sparingly.

So ideally we upload cookbooks to the Dev organization more frequently than to the QA organization. It’s a promotion process.

But you’re not setting attributes in the environment, you’re setting cookbook version requirements. These are obtained from your Berksfile.lock, which should be in source control for an environment cookbook. Running berks apply with the Berksfile.lock from a specific cookbook version would roll back to the version locks in that cookbook version.
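In practice a rollback is then just re-applying an older committed lock, something like this (tag and environment names illustrative):

```shell
# Restore the Berksfile.lock from an earlier environment-cookbook
# release tag, then re-apply those version locks to the environment.
git checkout v1.4.0 -- Berksfile.lock
berks apply production
```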

In any case, once an updated cookbook version is applied, if there’s a problem the damage is usually done; rolling back to a previous cookbook version isn’t necessarily going to fix your problem.

So instead of having multiple berks uploads to different orgs, you just do one upload to one org and then control the release of those cookbooks to different environments.

Berkshelf is a dependency management tool with a few extra features. If you’ve chosen a workflow where you have to move objects between organisations, it’s not going to help you with that any more than it already does (which is pulling in the dependencies, like you say, at each step).

Then I think your best option is to move the task of migrating the cookbooks between organisations to some sort of job on a build server like Jenkins, or take a look at Policyfiles instead of using Berkshelf.
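A minimal Policyfile version of that would look roughly like this (names and URLs illustrative); one lock gets promoted across policy groups instead of across orgs:

```shell
# Policyfile sketch: replaces the Berksfile + environment version locks.
cat > Policyfile.rb <<'EOF'
name "webserver"
default_source :supermarket, "https://supermarket.example.com"
run_list "role_webserver"
EOF

chef install                 # resolve, write Policyfile.lock.json
chef push dev Policyfile.rb  # publish the locked revision to "dev"
chef push qa Policyfile.rb   # later, promote the same revision to "qa"
```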

And as soon as you start tgz’ing your cookbooks with berks package and treating them as versioned artifacts, you may notice that you simply need to fetch a particular version of your cookbook artifacts directly from your server, or push it to them and then run chef-client, or knife bootstrap from a build box.

Then you discard the use of Chef environments, Chef environment attributes, cookbook constraints and the Chef server itself, as you can retrieve metadata from other sources and consume it within your cookbooks.

Now you have a fairly simple workflow where you can mix and match servers running different levels of Chef code and upgrade them as needed in a controlled manner.
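The shape of that artifact flow, as a sketch (versions, paths and config files illustrative):

```shell
# Build one versioned artifact containing cookbooks + Berksfile.lock.
berks package cookbooks-1.2.0.tar.gz

# ...store it on an artifact server; on a target node, fetch the
# desired version and converge directly against the extracted tree:
tar -xzf cookbooks-1.2.0.tar.gz -C /var/chef
chef-solo -c /var/chef/solo.rb -j /var/chef/node.json
```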
On 21 May 2015 00:29, “Yoshi Spendiff” <yoshi.spendiff@indochino.com> wrote:

You can run berks package to make an archive of all the cookbooks and dependencies, including the lock file, and then use that as a sort of artifact to deploy to your other orgs.

I think the easiest way to install this may be with *blo install* in Berkflow.

That way you have a consistent lock file instead of running it 3 times, plus you can also use *blo up* on the _default environment to move between versions.
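The packaging half of that is concrete; the Berkflow half I’d check against its README, but the shape is (file name illustrative):

```shell
berks package cookbooks.tar.gz                    # cookbooks + lock, tgz'd
tar -tzf cookbooks.tar.gz | grep Berksfile.lock   # lock travels with it
# then, per the above, roughly (exact Berkflow flags unverified):
#   blo install ...       # install the archive's cookbooks
#   blo up ... _default   # move the _default environment between versions
```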

Then I think your best option is to move the task of migrating the
cookbooks between organisations to some sort of job in a build server like
Jenkins, or take a look at Policyfiles instead of using Berkshelf.

But you're not setting attributes in the environment, you're setting
cookbook version requirements. These are obtained from your Berksfile.lock,
which should be in source control for an environment cookbook. Running
berks apply with the Berksfile.lock from a specific cookbook version would
roll back to the version locks in that cookbook version.

In any case, once an updated cookbook version is applied, if there's a
problem the damage is usually done; rolling back to a previous cookbook
version isn't necessarily going to fix your problem.

So instead of having multiple berks uploads to different orgs, you just
do an upload to one org and then control the release of those cookbooks to
different environments.
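In other words, something like the following (environment names are examples), so the upload happens once and berks apply does the per-environment promotion:

```shell
berks install      # resolve dependencies, write/refresh Berksfile.lock
berks upload       # upload all cookbook versions once, to a single org

berks apply dev    # pin the dev environment to the versions in Berksfile.lock
# ...test in dev, then promote the same locked set:
berks apply qa
```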

Berkshelf is a dependency management tool with a few extra features. If
you've chosen a workflow where you have to move objects between
organisations, it's not going to help you with that any more than it
already does (which is pulling in the dependencies, as you say, at each step).

On Wed, May 20, 2015 at 11:52 AM, Fouts, Chris <Chris.Fouts@sensus.com> wrote:

Chef environments and roles are problematic (they have been for us) since
they are not versioned like cookbooks in the Chef server. Therefore we've
refrained from using roles completely, and we use environments sparingly.

So ideally we upload cookbooks to the Dev organization more frequently
than the QA organization. It’s a promotion process.

I would suggest using different environments for, well, environments, not
organisations.

You can use cookbook version locks on environments to control updates to
cookbooks, which can be applied easily using either A) berks apply B)
berkflow or, newer and integrated with Chef, C) Policyfiles
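For illustration, the version locks that land on an environment can be expressed in the environment Ruby DSL like this (cookbook names and versions below are made up):

```shell
# Write an example environment file with cookbook version pins
# (hypothetical cookbook names and versions).
mkdir -p environments
cat > environments/qa.rb <<'EOF'
name "qa"
description "QA, pinned to the last promoted cookbook set"
cookbook "base_role", "= 1.4.0"
cookbook "app_role",  "= 2.1.3"
EOF
```

berks apply writes equivalent pins for you from a Berksfile.lock, so you rarely hand-edit these.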

That way you have one build process to upload your cookbook versions and
then another process to control the release of those to environments.

The way you’re doing it now is just work duplication, there’s no need to
have orgs split like that.



On Thu, May 21, 2015 at 7:17 PM, Nico Kadel-Garcia <nkadel@skyhookwireless.com> wrote:

This is why I do not use a Chef server. I use chef-solo, and Berksfile.lock is in a git repository on the local host. That gives me complete control, on each host, of exactly which versions of the cookbooks are in play, and I don't have to manually deduce and re-deduce dependency updates.

I don't get Chef 'search' functions, but that's fine for small or dynamic development environments.
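Assuming a layout like that, a run on a host might look roughly like this (repo URL, config, and run list are hypothetical):

```shell
git clone https://git.example.com/infra.git && cd infra
berks vendor ./cookbooks     # installs exactly the versions in Berksfile.lock
chef-solo -c solo.rb -o 'recipe[base]'
```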





If you are going down the path where every cookbook version has to be
locked for every node, explicitly, it's going to be a pain sooner or later.
Cloning a git repo has its own merits and disadvantages: you are now forced
to share your git credentials and development history (and everything that's
in the repo but not needed by chef-client) with every node. Code !=
artifact. Artifacts are numerically versioned, independent entities, while
code in SCM (like git) is coupled with its own history (i.e. deltas) and
not numerically versioned. As you have mentioned, in smaller deployments
you might find git cloning appealing because you can't get Berks to adapt
to your workflow, but it's not clear how you'll update the git repo or
maintain node metadata (e.g. what's the equivalent of knife status?). In
the Puppet world this was very popular, but that's more due to Puppet using
a git repo internally, which Chef does not.

I use chef-solo / chef-client -z extensively for volatile, on-demand
infrastructure, or where I hand off the servers after initial provisioning
(i.e. they are never updated or reconfigured after provisioning). Even in
those cases I find the git-clone style a pain. You can create a Debian or
RPM installer with a single command using FPM, which allows versioning and
metadata management (by dpkg or rpm), etc.

I think there are a lot of scenarios where using chef-client -z / chef-solo
shines, but a git clone on the node itself is a bad move; there are
other, easier ways to distribute a fixed set of cookbooks along with a
Berksfile.lock.
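The FPM approach mentioned above might look roughly like this (package name, version, and paths are made up):

```shell
# Vendor the locked cookbook set, then wrap it in an RPM with FPM.
berks vendor ./vendor/cookbooks
fpm -s dir -t rpm -n site-cookbooks -v 1.0.0 \
    --prefix /var/chef/cookbooks -C ./vendor/cookbooks .
```

Nodes then install the package like any other versioned OS artifact, and rpm/dpkg tracks what's deployed.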



“if you are going down the path where every cookbook version has to be locked for every nodes…”

No, this is NOT what I want. However, I want a set of cookbooks locked for every build candidate we have, and a build candidate can happen at least once a day. This build candidate is deployed to multiple products in the lab for development and testing. I then want a set of cookbooks that correspond to this build candidate. These cookbooks are NOT packaged; they are not part of our ISO, since we use Chef only as an internal deployment tool and it is not used by customers.

I chose the central Chef server implementation because I want to be able to share cookbooks easily with all 20 product installations.

You have your reasons for NOT using a central Chef server, I have my reasons for using it. Since your product/process may not match mine, it’s futile to tell me otherwise.

I will be looking into Berkflow and Policyfiles as folks have suggested.


On Thu, May 21, 2015 at 3:24 PM, Torben Knerr <mail@tknerr.de> wrote:
Doesn't help the discussion much, but I just wanted to drop my +1 to what Nico said, as I have a quite similar setup for most of my Chef projects.

I would go even one step further and use chef-zero / chef-client -z in favor of chef-solo though, which gives you compatibility with cookbooks using search: Chef Blog – 24 Jun 14

Chef Solo was the original Chef. Remember the bad old days before the Chef server existed as a product, and the only way to use Chef was to scp (or worse, ftp) giant tarballs of recipes & cookbooks from system to system? Five years later, we not only...

Question whether you really need Chef Server as part of your workflow. Depending on what you
want to achieve you might definitely need it. If you don't really need it, leaving it out
can make your workflow much simpler.

HTH, Torben
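The chef-zero route Torben mentions is just chef-client in local mode; a minimal sketch (run list is a placeholder):

```shell
berks vendor cookbooks
chef-client -z -o 'recipe[base]'   # in-memory chef-zero server
```

Unlike chef-solo, the in-memory server means search() calls in cookbooks work against the data loaded from the local repo.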


Hi Chris, just want to give you a quick update on where Policyfiles are completeness-wise. Firstly, there is a small data migration required to create default permissions for Policyfile things on the server, which will be included in Chef Server 12.1 (expect a release candidate next week). If you want to build a new Chef Server, you can install 12.0.8 and follow the instructions here to enable: https://www.chef.io/blog/2015/03/27/chef-server-12-0-7-released/ FWIW, we’ve run the migration in Hosted Chef and enabled Policyfile APIs there and haven’t seen any problems thus far.
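For anyone following along, a Policyfile is a small Ruby file; a minimal sketch (the name, run list, and cookbook source below are hypothetical):

```shell
cat > Policyfile.rb <<'EOF'
# Minimal Policyfile sketch; names are hypothetical.
name "webapp"
default_source :supermarket
run_list "app_role::default"
cookbook "app_role", path: "cookbooks/app_role"
EOF
```

chef install then resolves this into a Policyfile.lock.json, and chef push publishes the lock to a policy group on the server.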


I glossed over it in the blog post, but there is a chef export command that puts your policy lock and cookbooks into a directory structure that is understood by local mode. This is how the test kitchen driver works. Note that Chef Zero doesn’t have the “native” policyfile APIs yet, so this works in compatibility mode. It also requires you to set the versioned_cookbooks setting in your client.rb. If you find it helpful, the code for the TK provisioner is here: https://github.com/chef/chef-dk/blob/master/lib/kitchen/provisioner/policyfile_zero.rb
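Put together, the export flow described above might look like this (directory names are examples; check the chef export output for where its config actually lives):

```shell
chef install                               # Policyfile.rb -> Policyfile.lock.json
chef export Policyfile.lock.json ./export  # local-mode repo, compatibility mode
cd ./export
echo 'versioned_cookbooks true' >> client.rb
chef-client -z -c client.rb
```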