Set up a Patch Management System Using Chef Automate

Nick Rycar | Chef InSpec | compliance | cookbooks

How does Chef handle patch management?

The short answer is: it depends.

For some organizations, patch management is simply a matter of running vendor-recommended updates on a fairly regular interval, while retaining the flexibility to install on-demand updates when vulnerabilities like 0-days demand it. For others, environments must be entirely air-gapped, and multiple internal repositories and channels are used to enforce fine-grained control over which packages can be consumed, and where. Most fall somewhere between the two, requiring some rigor in the validation and promotion of software updates, but perhaps not to the extent of organizations subject to the strictest regulatory requirements. As such, while Chef should be able to help automate just about any process under the sun, which patching software, services, or strategies make the most sense for your organization will vary significantly based on where on this spectrum you fall.

Still, “it depends” is a pretty unfulfilling answer. I think we can do better than that! It’s true, the details of any given implementation may vary considerably. However, the overall shape of what needs to happen generally doesn’t. No matter the specifics of your patching requirements, you’ll likely need to do roughly the following:

  1. Identify which patches are required
  2. Apply and evaluate the patches in a testing environment
  3. Distribute the patches to the rest of the fleet
  4. Confirm the patches have been applied successfully

At a general level, this process is the same for any software artifact, and Chef Automate gives us the means to meet each requirement. Compliance allows us to evaluate our systems against expectations to determine which, if any, need to be updated. If we do need to apply new patches, Workflow provides a means to test those patches and promote them safely through a series of environments before ultimately updating production. Finally, once we've deployed successfully, we can use Compliance again to ensure there are no remaining updates to apply.

To illustrate what this might look like in action, I’ve put together a simple patching project for us to run through in greater depth. The work I’ve done here is based on the demonstration Michael Ducy, Bill Meyer, and Ricardo Lupo put together this past spring, as well as John Byrne’s delivery-bag build cookbook.

The following video is a companion to this article. You may find it helpful to watch the demo first before diving into the details of this blog post.


Let’s dive in!

1) Identify which patches are required

In the interest of keeping our focus on the overarching process, my actual implementation is quite simple. In my environment I’m using Ubuntu’s default package repositories, and simply need to report on whether I’m running the latest available versions of each installed piece of software. As luck would have it, our Compliance team has produced an InSpec profile that does just that!

Linux Patch Benchmark

With that in hand, there are a number of ways we can apply it to our infrastructure. If I have a Compliance server in place, I can scan my nodes from there directly, or configure recurring scans (see our compliance tutorial for more details). For my environment, however, I want to run my Compliance scans every time chef-client checks in, so that I can ensure the changes Chef is making aren't creating any compliance regressions. To that end, I can make use of the Audit cookbook, which allows me to trigger Compliance scans as part of my chef-client runs and send the results to my Chef Automate server as a weighted report, where I can reference them as needed.
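Configuring the Audit cookbook typically comes down to setting a few node attributes, often via a role. Here's a minimal sketch; attribute names and formats have varied between audit cookbook versions, and the reporter value and profile source below are assumptions you'd adjust for your own setup:

# Example role configuring the audit cookbook (illustrative only).
name 'patch_audit'
run_list 'recipe[audit]'
default_attributes(
  'audit' => {
    # Send scan results to Chef Automate by way of the Chef server.
    'reporter' => 'chef-server-automate',
    # Fetch and run the patch profile; the owner/name are assumptions.
    'profiles' => [
      {
        'name' => 'linux-patch-benchmark',
        'compliance' => 'admin/linux-patch-benchmark'
      }
    ]
  }
)

In the case that there are new software updates to be evaluated, I'll see some failures that look like this: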

In the results, we can see that each out-of-date package is listed, along with the latest version available from our repository. Now that we know which updates are required, we can move on to the next step…

2) Apply and evaluate the patches in a testing environment

For longtime Chef users, this part’s probably the most straightforward. Installing software is what Chef’s been automating since day one, and the ChefDK provides tools like Test Kitchen which can be used to quickly validate that our updates behave as expected. Once again, I’ll be keeping things simple here, and it doesn’t get much simpler than the package resource.

After all, I know what packages I need, and what versions are required, so to ensure they’re in place, all I need is something to the effect of:

package 'PACKAGENAME' do
  version 'VERSIONNUMBER'
  action :install
end

Add in the ability to iterate over a list of those package/version pairs, and we have everything we need to remediate the issues reported in step one. To that end, I created the aptly-named simple_patcher cookbook. This cookbook allows you to specify a list of required packages, with their respective versions, in a data bag item on your Chef server. It will then iterate over that data bag item's contents and execute a package resource for each entry it finds.
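The core of that recipe boils down to a short loop. Here's a minimal sketch, assuming a hypothetical data bag named "simple_patcher" whose item maps package names to required versions (the real cookbook's schema may differ):

# Load the list of required updates from the Chef server.
patches = data_bag_item('simple_patcher', 'ubuntu')

# Declare a package resource for each entry, pinned to the listed version.
patches['packages'].each do |pkg_name, pkg_version|
  package pkg_name do
    version pkg_version
    action :install
  end
end

The cookbook also includes some mock data bag information and InSpec rules, so that right out of the gate, running "kitchen test" should produce output similar to the following: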

So far, so good! At least, when I update my local setup to address the failures Compliance reported to me, things seem to work. But we’re not done yet. After all, there’s a reason the phrase “it worked on my machine” has reached novelty t-shirt levels of ubiquity! It’s time for the real fun: the road to production!

3) Distribute the patches to the rest of the fleet

Even though my local test looks promising, we're not quite ready to push things directly into production. I'm likely to have no end of applications, services, and databases running that could be adversely affected by an unexpected incompatibility with some newly installed software package or system library. I need a way to stage my updates so that they're first validated in my non-production environments before I move up to my more sensitive servers.

Chef Automate's Workflow feature is designed to help with exactly this kind of incremental promotion. Workflow helps achieve continuous delivery of changes by providing a consistent shape for that work to take, while allowing for a diverse array of methodologies within that shape. Sound familiar?

At a high level, every Workflow pipeline consists of six discrete stages, each with its own associated phases. The first two stages, Verify and Build, are concerned with testing our source code itself. Is it syntactically valid? Do my unit tests pass? If so, we can build an artifact and publish it so that it can be deployed. The remaining four stages, Acceptance, Union, Rehearsal, and Delivered, are concerned with deploying and testing those artifacts in increasingly production-like environments. See below for a full diagram of this process:
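Verify → Build → Acceptance → Union → Rehearsal → Delivered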

While Chef is very prescriptive about this pipeline shape, each individual phase is designed to be completely customizable. This is achieved by employing a specific type of Chef cookbook called a “build cookbook”. A build cookbook’s recipes describe what specific actions should be taken in each phase of the pipeline, allowing us to use the right tools for each specific project while still maintaining a consistent workflow throughout.

If you run through the tutorials provided on learn.chef.io, you’ll be introduced to delivery-truck, a build cookbook designed for deploying cookbooks to a Chef server. What I’ve created for this project is a slightly different kind of build cookbook. Think back to our use case — if we want to apply new patches to our cookbook from step two, what do we actually change? Is it the cookbook itself? No! It’s the data bag the cookbook is consuming. So our needs are going to be slightly different than what stock delivery-truck can provide.

Once again, here’s a simple implementation of what a minimum viable product might look like: data_bag_patcher.

Our project code in this instance is simply the contents of the "data_bags" directory, containing an example data bag item for the Ubuntu servers in our example environment. This is the artifact we'll be promoting. The roles directory contains an example role that might be in the run list of one of our target nodes; it configures the audit cookbook to run our Linux Patch Benchmark profile and applies our patching recipe from step two. Finally, the .delivery directory contains our Workflow configuration file and build cookbook. It's configured to take the following actions:

Lint: Ensure the data bag is valid JSON.
Syntax: Ensure the data bag item has a properly formatted id.
Publish: Upload the data bag to the Chef server (see the sketch after this list).
Provision: Create a new data bag item for the stage's associated environment.
Deploy: Apply the updates via chef-client.
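To give a feel for how small these phase recipes can be, here's a rough sketch of what the Publish phase might boil down to. The data bag name and file path are assumptions based on this project's layout, and a real build cookbook would also need to point knife at the credentials available on the builder:

# .delivery/build_cookbook/recipes/publish.rb (sketch)
# Upload the project's data bag item to the Chef server.
execute 'upload patch data bag' do
  command 'knife data bag from file simple_patcher data_bags/simple_patcher/ubuntu.json'
  cwd node['delivery']['workspace']['repo'] # project checkout on the build node
end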

The rest of our phases (Unit, Security, and so on) are skipped in the project's config.json file. Like any Chef cookbook, we can create build cookbooks iteratively, enabling the remaining phases as we create the recipes to support them. For now, though, having a means of deployment is sufficient for my needs.
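For reference, skipping phases is just a matter of listing them in .delivery/config.json. A minimal sketch, where the build cookbook name and the exact set of skipped phases are illustrative:

{
  "version": "2",
  "build_cookbook": {
    "name": "build_cookbook",
    "path": ".delivery/build_cookbook"
  },
  "skip_phases": ["unit", "security", "quality", "smoke", "functional"]
}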

With a project created in my Chef Automate server, the process for evaluating a new set of patches is fairly straightforward.

1) Check out a new branch, and add the appropriate updates to my data bag’s JSON file.
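For example, the updated item might pick up entries like these, where the package names and version strings are purely hypothetical:

{
  "id": "ubuntu",
  "packages": {
    "libssl1.0.0": "1.0.2g-1ubuntu4.8",
    "curl": "7.47.0-1ubuntu2.5"
  }
}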

2) Add and commit my change, and run “delivery review” to submit it to my Chef Automate server.

# git add -u
# git commit -m "Update Patchlist."
# delivery review

The Pipeline Run

At that point, a browser window will open to let me know the status of my change. For each phase of the pipeline, Chef Automate will kick off a job on one of my build servers to execute the appropriate recipe from my build cookbook. As each phase executes, we can verify the output of its run from the Workflow UI:

As each phase completes, it lets us know it was successful by displaying a green checkmark before moving on to the subsequent phase. Once the Verify stage completes, the pipeline will pause, giving us the chance to review a diff of what's being proposed in this change and give our formal approval:

During the Publish phase, our project will ensure that the required data bag exists on the Chef server, and create a template data bag item with our newly-minted updates:

Then, during each stage's Provision phase, that data bag item is copied into an environment-specific item, so that once the pipeline is complete, we have a data bag item unique to each stage we're deploying to. This allows us to control when changes are promoted from one set of environments to the next.
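As a rough sketch of what that Provision recipe might do (the item-naming scheme here is an assumption, though the stage-name attribute is available on Workflow build nodes):

# .delivery/build_cookbook/recipes/provision.rb (sketch)
stage = node['delivery']['change']['stage'] # e.g. 'acceptance', 'union'

ruby_block "copy patch data for #{stage}" do
  block do
    require 'chef/data_bag_item'
    # Load the template item published earlier, then save a copy whose id
    # is specific to this stage. All names here are illustrative.
    template = Chef::DataBagItem.load('simple_patcher', 'ubuntu')
    item = Chef::DataBagItem.from_hash(
      template.raw_data.merge('id' => "ubuntu_#{stage}")
    )
    item.data_bag('simple_patcher')
    item.save
  end
end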

Finally, the Deploy phase makes use of push jobs to kick off a chef-client run on any node in the current stage's environments that has the "simple_patcher" cookbook in its run list.
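With the push jobs knife plugin in place, that Deploy recipe can be close to a one-liner. A sketch, assuming the cookbook name from step two (a real recipe would also scope the search to the current stage's environments):

# .delivery/build_cookbook/recipes/deploy.rb (sketch)
# Trigger chef-client, via push jobs, on nodes running simple_patcher.
execute 'kick off patch runs' do
  command "knife job start chef-client --search 'recipes:simple_patcher*'"
end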

After the Acceptance environment completes, we hit our second and final human gate, where we can do whatever manual evaluation of our environment is required post-deploy before pulling the trigger to run through the same process in Union, Rehearsal, and finally Delivered. A more thorough accounting of each stage's specific purpose can be found in our Workflow documentation.

4) Confirm the patches have been applied successfully

Now we get to wrap things up by making sure we can easily confirm all of our updates have been applied. To accomplish this, all we need to do is use the same Linux Patch Benchmark InSpec profile we used back in step one to identify which packages needed updating. Since I've configured my servers to automatically run that audit every time they run chef-client, and chef-client ran as part of the deployments in Workflow, we should be able to confirm success simply by navigating to the "Compliance Status" tab in Chef Automate's Visibility UI:

Success! My updates have been applied, and I no longer have any non-compliant nodes! Not too bad for a day's work, if I do say so myself.
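One last tip: if you ever want to spot-check a single node from your workstation, you can run the same profile ad hoc with the InSpec CLI, substituting your own profile source and node address:

inspec exec https://github.com/dev-sec/linux-patch-baseline -t ssh://user@mynode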