This is a guest post by Brock Spalding, Director of Sales and Marketing at Ostrato

On March 12th we’re excited to bring together OpenWhere, Chef and Ostrato on a webinar to discuss how this geospatial analytics startup is able to compete in the hyper-competitive defense marketplace. Discover how cloud computing expertise has enabled OpenWhere to achieve success, specifically through the use of two interlocking tools – Chef for infrastructure automation and Ostrato for cloud management.

OpenWhere solves some of the hardest problems in commercial satellite imaging, geospatial analytics and activity-based intelligence (literally rocket science). By leveraging Chef and Ostrato across AWS development, test, and production environments, OpenWhere has accelerated the design, build, deploy and test cycles while diligently tracking costs and minimizing cloud spend.

Chef provides the crucial configuration management glue and allows OpenWhere to de-risk infrastructure deployments and focus systems development efforts on building high-value IP. Ostrato’s Chef integration, self-service marketplace and single-click deploy features enable OpenWhere to manage an explosion of AWS resources while still governing infrastructure costs and providing transparency across the organization.

Today, we’re excited to announce that Chef Client 12.1.0 is now available. This release brings with it many new features and bug fixes. Below are some of the highlights. For more information, check out the changelog, release notes, and doc changes.

What’s New

chef_gem deprecation of installation at compile time

A compile_time flag has been added to the chef_gem resource to control whether it is installed at compile time. Previously, this
resource forced itself to install at compile time, which is problematic: if the gem is native, it forces build-essential and other dependent libraries
to be installed at compile time as well, in an escalating war of forced compile-time execution. This default was engineered before it was understood that the better
approach is to lazily require gems inside provider code, which only runs at converge time, and that requiring gems in recipe code is bad practice.

The default behavior has not changed, but every chef_gem resource will now emit a warning:

[2015-02-06T13:13:48-08:00] WARN: chef_gem[aws-sdk] chef_gem compile_time installation is deprecated
[2015-02-06T13:13:48-08:00] WARN: chef_gem[aws-sdk] Please set `compile_time false` on the resource to use the new behavior.
[2015-02-06T13:13:48-08:00] WARN: chef_gem[aws-sdk] or set `compile_time true` on the resource if compile_time behavior is required.

The preferred way to fix this is to make every chef_gem resource explicit about compile-time installation (keeping in mind the best practice of defaulting to false
unless there is a reason not to):

chef_gem 'aws-sdk' do
  compile_time false
end

There is also a new Chef::Config[:chef_gem_compile_time] flag. If this is set to true (not recommended), chef will only emit a single
warning at the top of the chef-client run:

It will behave like Chef 10 and Chef 11: chef_gem defaults to compile-time installation, and subsequent
warnings in the chef-client run are suppressed.

If this setting is changed to `false`, Chef adopts the Chef 13 behavior and defaults all chef_gem installs to not run at compile time. This
may break existing cookbooks.
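For example, opting a node into the new behavior across all cookbooks can be done in client.rb (a sketch using the config flag described above):

```ruby
# client.rb
# Opt into the Chef 13 default: chef_gem resources no longer
# install at compile time unless they set `compile_time true`.
chef_gem_compile_time false
```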

All existing cookbooks which require compile_time true MUST be updated to be explicit about this setting.

To be considered high quality, cookbooks which require compile_time true MUST be rewritten to avoid this setting.

All existing cookbooks which do not require compile_time true SHOULD be updated to be explicit about this setting.

For cookbooks that need to maintain backwards compatibility, a respond_to? check should be used:

chef_gem 'aws-sdk' do
  compile_time false if respond_to?(:compile_time)
end

Experimental Audit Mode Feature

This is a new feature intended to provide infrastructure audits. Chef already allows you to configure your infrastructure
with code, but there are some use cases that are not covered by resource convergence. What if you want to check that
the application Chef just installed is functioning correctly? If it provides a status page, an audit can check it
and validate that the application has database connectivity.

Audits are performed by leveraging Serverspec and RSpec on the
node. As such the syntax is very similar to a normal RSpec spec.

Syntax

control_group "Database Audit" do
  control "postgres package" do
    it "should not be installed" do
      expect(package("postgresql")).to_not be_installed
    end
  end

  let(:p) { port(111) }
  control p do
    it "has nothing listening" do
      expect(p).to_not be_listening
    end
  end
end

Using the example above, let's break down the components of an audit:

control_group – This named block contains all the audits to be performed during the audit phase. During Chef convergence
the audits are collected and then run in a separate phase at the end of the Chef run. Any control_group block defined in
a recipe that is run on the node will be performed.

control – This keyword describes a section of audits to perform. The name here should either be a string describing
the system under test, or a Serverspec resource.

it – Inside this block you can use RSpec expectations to
write the audits. You can use Serverspec resources here or regular Ruby code. Any raised errors will fail the
audit.

Output and error handling

Output from the audit run will appear in your Chef::Config[:log_location]. If an audit fails then Chef will raise
an error and exit with a non-zero status.

Further reading

OpenBSD Package provider was added

The package resource on OpenBSD is wired up to use the new OpenBSD package provider to install via pkg_add on OpenBSD systems.
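A minimal recipe sketch of what this enables (the package name is illustrative):

```ruby
# On OpenBSD nodes the generic package resource now resolves to the
# new OpenBSD provider, which installs via pkg_add under the hood.
package 'rsync' do
  action :install
end
```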

Case Insensitive URI Handling

Previously, when a URI scheme contained all uppercase letters, Chef
would reject the URI as invalid. In compliance with RFC 3986, Chef now
treats URI schemes in a case insensitive manner.

File Content Verification (RFC 027)

Per RFC 027, the file and file-like resources now accept a verify
attribute. This attribute accepts a string (a shell command) or a Ruby
block (similar to only_if) that can be used to verify the contents
of a rendered template before deploying it to disk.
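For instance, a template can be guarded with a shell command before it replaces the file on disk (a sketch; nginx -t is illustrative, and %{path} expands to the temporary file holding the rendered content):

```ruby
# The rendered content is only deployed if the verify command
# exits 0 when run against the temporary rendered file.
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  verify 'nginx -t -c %{path}'
end
```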

Drop SSL Warnings

Now that SSL verification is on by default, Chef no longer emits a warning when SSL
verification is turned off.

Multi-package Support

The package provider has been extended to support multiple packages. This
support is new and not all subproviders support it yet. Full support for
apt and yum has been implemented.

As an example, you can now do something like this:

apt_package ["xml2-dev", "xslt-dev"]

Knife Bootstrap Validatorless Bootstraps and Chef Vault integration

The knife bootstrap command now supports validatorless bootstraps. This can be enabled by deleting the validation key.
When the validation key is not present, knife bootstrap will use the user key to create a client for the node
being bootstrapped. It will then also create a node object and set the environment, run_list, initial attributes, etc. (avoiding
the problem of the first chef-client run failing and not saving the node's run_list correctly).

Also, knife vault integration has been added so that knife bootstrap can use the client key to add chef-vault items to
the node, reducing the number of steps necessary to bootstrap a node with chef-vault.

Validatorless bootstraps are not supported when the node object has been precreated by the user beforehand; as part
of the process, any old node or client will be deleted. The old process with the validation
key still works for this use case. Since knife bootstrap now sets the run_list, environment and JSON attributes up front,
this should mitigate much of the need to precreate the node object by hand.
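A sketch of what a validatorless bootstrap invocation might look like (the host, node name and run_list are illustrative; the key point is that no validation key is configured on the workstation):

```shell
# With no validation key present, knife uses your user key to create
# the client and node objects, then sets the environment and run_list
# before the first chef-client run.
knife bootstrap 203.0.113.10 \
  --ssh-user ubuntu --sudo \
  --node-name web01 \
  --environment production \
  --run-list 'recipe[base]'
```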

Windows service now has a configurable timeout

You can now set the amount of time a chef-client run is allowed to take when running under the provided Windows service. Configure this by
setting windows_service.watchdog_timeout in your client.rb to the desired number of seconds. The default value is 2 hours.
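For example, to allow runs up to four hours instead of two (a client.rb sketch):

```ruby
# client.rb
# Allow a chef-client run under the Windows service to take up to
# four hours; the value is in seconds (default: 7200, i.e. 2 hours).
windows_service.watchdog_timeout = 14400
```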

Last week we announced a new partnership with Microsoft, which was driven in large part by the intense demand we’re seeing in the enterprise for Chef x Azure, Chef x Powershell, and Chef x Visual Studio.

When we asked past ChefConf attendees, customers, and the wider Chef community what they wanted to see at this year’s show, “more Windows” was a resounding front-runner. So, we connected with our friends in Redmond to deliver a wide range of Microsoft x Chef content at ChefConf – read below for the goods.

This is the fourth entry in our ongoing, bi-weekly series examining our customer Standard Bank’s DevOps journey. You can read the first entry here, the second entry here and the third entry here. Continue below for part four.

This post discusses how Standard Bank’s adoption of DevOps and Chef has affected its approach to building infrastructure. We’ll also see that changes in culture and the use of automation had far-reaching effects on both operations and development.

Creating consistent environments had long been difficult for Standard Bank. Mike Murphy, Head of IT Operations for the Standard Bank Group, described the process.

“We could spin up VMs fairly quickly. That was never the real issue. The issue had more to do with creating the machines in a predictable, standard and consistent fashion. The machines spun up relied, to a degree, on humans doing the right thing and we know that, oftentimes, that doesn’t work. Also, spinning up a cluster of machines to create an environment was not something we contemplated. Machines were literally spun up one by one, on their own, and in their own ways. The consistency simply wasn’t there.

For example, if we had an application that was built from scratch and deployed onto a virtual environment in production (with its associated high-availability (HA) and disaster recovery (DR) elements), we’d sometimes encounter a problem when invoking either the HA or DR component. This was, more often than not, as a result of differences in the configuration of the three environments that was caused by reliance on manual work. We didn’t really have peace of mind that either the HA or DR capability would operate as designed.”

Updating Users

This release fixes Issue 66. Previously, users in LDAP-enabled installations would be unable to log in after resetting their API key or otherwise updating their user
record.

This resolves the issue for new installations and currently unaffected user accounts. However, if your installation has users who have already been locked out, please contact Chef Support (support@chef.io) for help repairing their accounts.

This fix has resulted in a minor change in behavior: once a user is placed into recovery mode to bypass LDAP login, they will remain there until explicitly taken out of recovery mode. For more information on how to do that, see this section of chef-server-ctl documentation.

Key Rotation

We’re releasing key rotation components as we complete and test them. This week, we’ve added API POST support, allowing you to create keys for a user or client via the API.
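As a sketch, creating an additional key for a user is a signed POST to that user's keys endpoint; knife raw handles the request signing. The user name, key name and file contents below are illustrative:

```shell
# body.json:
# {"name": "alice-laptop",
#  "public_key": "-----BEGIN PUBLIC KEY-----\n...",
#  "expiration_date": "infinity"}
knife raw --method POST --input body.json /users/alice/keys
```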

Key Rotation Is Still A Feature In Progress

Until key rotation is feature-complete, we continue to recommend that you manage your keys via the users and clients endpoints as is done traditionally.

Policyfile

Work on Policyfile support continues to evolve at a rapid pace. This update includes new GET and POST support to named cookbook artifact identifiers. Policyfile is disabled by default, but if you want to familiarize yourself with what we’re trying to do, this RFC is a good place to start.

TL;DR: We found a bug in our bento boxes where the SSL certificates for AWS S3 couldn’t be verified by openssl and yum on our CentOS 5.11, CentOS 6.6, and Fedora 21 “bento” boxes, because the VeriSign certificates were removed by the upstream curl project. Update your local boxes: first remove them with vagrant box remove, then re-run Test Kitchen or Vagrant in your project.

We publish Chef Server 12 packages to a great hosted package repository provider, Package Cloud. They provide secure, properly configured yum and apt repositories with SSL, GPG, and all the encrypted bits you can eat. In testing the chef-server cookbook for consuming packages from Package Cloud, I discovered a problem with our bento-built base boxes for CentOS 5.11, and 6.6.

Note that the baseurl is https – most package repositories probably aren’t going to run into this because most use http. The thing is, despite Package Cloud having a valid SSL certificate, we’re getting a verification failure in the certificate chain. Let’s look at this with OpenSSL:
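The failure can be reproduced outside yum with openssl's client (a sketch; the hostname and CA bundle path are illustrative of an affected CentOS box):

```shell
# With the broken cacert.pem, the chain fails to verify and the
# "Verify return code" line reports an error instead of 0 (ok).
echo | openssl s_client -connect packagecloud.io:443 \
  -CAfile /etc/pki/tls/certs/ca-bundle.crt 2>/dev/null \
  | grep 'Verify return code'
```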

I say past tense because we’ve since removed this from the ks.cfg on the affected platforms and rebuilt the boxes. This issue was particularly perplexing at first because the problem didn’t happen on our CentOS 5.10 box. At the point in time when that box was built, the cacert.pem bundle had the VeriSign certificates, but they had been removed by the time we retrieved the cacert.pem for the 5.11 and 6.6 base boxes.

Why were we retrieving the bundle in the first place? It’s hard to say – that wget line has always been in the ks.cfg for the bento repository. At some point in time it might have been to work around invalid certificates being present in the default package from the distribution, or some other problem. The important thing is that the distribution’s package has working certificates, and we want to use that.

So what do you need to do? Remove your opscode-centos Vagrant boxes and re-add them. You can do this:
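A sketch of the cleanup (the box names follow the opscode-centos naming mentioned above; check vagrant box list for the exact names on your machine):

```shell
# List what you have, remove the affected boxes, then re-run
# Test Kitchen or `vagrant up` to download the rebuilt boxes.
vagrant box list
vagrant box remove opscode-centos-5.11
vagrant box remove opscode-centos-6.6
vagrant box remove opscode-fedora-21
```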

We’re five weeks away from ChefConf 2015 and we’re filling seats fast, so, if you haven’t already, register today and guarantee your seat at the epicenter of DevOps.

Continuing our series of spotlights on the tremendous talks, workshops, and sponsors at this year’s show, today we focus on Awesome Chef Charity Majors – a production engineer at Parse (now part of Facebook) – and her session, “There and back again: how we drank the Chef koolaid, sobered up, and learned to cook responsibly” – a must-see based on the title alone!

Here’s the download on Charity’s talk:

When we first began using Chef at Parse, we fell in love with it. Chef became our source of truth for everything. Bootstrapping, config files, package management, deploying software, service registration & discovery, db provisioning and backups and restores, cluster management, everything. But at some point we reached Peak Chef and realized our usage model was starting to cause more problems than it was solving for us. We still love the pants off of Chef, but it is not the right tool for every job in every environment. I’ll talk about the evolution of Parse’s Chef infrastructure, what we’ve opted to move out of Chef, and some of the tradeoffs involved in using Chef vs other tools.

This will be a great session for all of you looking for guidance on tooling, or even a friendly debate about the subject. It will also provide patterns of success from some seriously smart and active Chefs over at Parse/Facebook.

As for the presenter herself, Charity is happily building out the next generation of mobile platform technology. She likes free software, free speech and single malt scotch.

This post concludes our bi-weekly blog series on Awesome Chef Paul Comtois’ DevOps Story. You can read the final part below, while part one is here and part two is here. Thank you to Pauly for sharing his tale with us!

Leveling Up the Sys Admins

The last hurdle was that, even with all we’d accomplished, we still weren’t reaching the sys admins. I had thought they would be my vanguard, we would charge forward, and we were going to show all this value. Initially, it turned out they didn’t want to touch Chef at all! Jumpstart, Kickstart and shell scripts were still the preferred method of managing infrastructure.

About the same time that the release team was getting up to speed, the database team decided that they wanted a way to get around the sys admin team because it took too long for changes to happen. One guy on the database team knew a guy on the apps team who had root access and that guy began to make the changes for the database team with Chef. The sys admins were cut out and the apps team felt resentful because the sys admins weren’t doing their job.

That started putting pressure on the sys admins. The app team was saying, “Hey, you guys in sys admin, you can’t use that shell script any more to make the change for DNS. Don’t use Kickstart and Jumpstart because they only do it once, and we don’t have access. We need to be able to manage everything going forward across ALL pods, not one at a time and we need to do it together.” It was truly great to see the app team take the lead and strive to help, rather than argue.

This week our friends at IBM are hosting their InterConnect 2015 conference and we’re pleased to announce expanding (and existing) support for a wide variety of their products. IBM is synonymous with the Enterprise and they have embraced Chef in a big way. By using Chef across your IBM infrastructure, efficiency is improved and risk reduced as you can pick the right environment for your applications. Whether it’s AIX, POWER Linux or an OpenStack or SoftLayer Cloud, Chef has you covered by providing one tool to manage them all.

In Chef 12 we officially added AIX support and there has been tremendous interest because many large enterprise customers have a significant investment in the platform. By providing full support for AIX resources such as SRC services, BFF and RPM packages and other platform-specific features, AIX systems become part of the larger computing fabric managed by Chef. The AIX cookbook expands functionality and there is even a knife-lpar plugin for managing POWER architecture logical partitions.

In addition to supporting AIX on POWER, we’re also currently working on providing official Chef support for Linux on POWER for Ubuntu LE and Red Hat Enterprise Linux 7 BE and LE. We plan to release initial Chef client support for all 3 platforms by ChefConf. Once the clients are available the Chef server will be ported to these platforms and we expect to release it early this summer.

With the Chef client on AIX, the client and server on Linux on POWER, and nodes being managed on OpenStack and SoftLayer clouds; administrators with IBM systems have many options when it comes to managing their infrastructure with Chef. We’ve enjoyed working with them and expect to continue making substantial investments integrating IBM’s platforms to meet Chef customers’ automation needs across diverse infrastructures.