Chef Habitat provides automation capabilities for building and delivering applications to any environment. Both operations and development teams benefit from adopting a consistent configuration process across all applications.
Developers define and build a package for their application, including the dependencies needed at build time and runtime. Built packages can be exported to whatever infrastructure teams choose: Docker, Kubernetes, Mesos, bare metal, etc. – eliminating the need to create and maintain different configuration builds for different environments. Automated agents are included as part of the configuration and can then be leveraged by operations teams to automate ongoing changes and updates to the applications.
Habitat Builder provides package storage, search, and an API for clients. It first launched as a cloud service and as the repository of all available plans built by Chef and the supporting community. Because the application source code is stored alongside the build package, many clients have expressed a preference for storing packages and running Builder on-premises. As a result, Chef has been focused on improving the on-premises experience, including optimizing what plans are shared across the SaaS and on-premises versions of Builder, and how.
The release of Chef Habitat 0.88.0 (https://www.habitat.sh/docs/install-habitat/) introduces two new native commands (hab pkg download and hab pkg bulkupload) that ease the movement of packages between the public SaaS and on-premises Builder environments.
Designed for Chef Habitat Builder on-prem environments (https://github.com/habitat-sh/on-prem-builder), these commands simplify and replace the existing bootstrapping workflow with streamlined package installation and maintenance processes. The new commands enable the transfer of packages without an external internet connection, making them suitable for airgapped environments.
Below are a range of scenarios that illustrate the use of the new commands with Builder on-prem.
Instead of downloading a large, predetermined bootstrap package, we use the hab pkg download command to populate a directory with the required packages (and their associated origin’s public package signing keys).
This command takes as input a set of packages to seed the directory with. This package set does not need to contain any of the package dependencies, as those are fetched automatically.
As a (contrived) example, let’s say we are interested in Rust development and only need any packages related to that. We create a file called my_package_set with the following contents:
core/rust
core/cargo-nightly
This file lets us tell hab pkg download what packages to start seeding the download with, so we can issue the command as follows:
$ hab pkg download --target x86_64-linux --download-directory my_bootstrap_directory --file my_package_set
This will download the needed packages into the my_bootstrap_directory folder; the download should complete quickly. If we check the contents of my_bootstrap_directory after the download command finishes, we see all the needed packages and keys in one place:
~/my_bootstrap_directory $ tree
.
├── artifacts
│ ├── core-binutils-2.31.1-20190115003743-x86_64-linux.hart
│ ├── core-busybox-static-1.29.2-20190115014552-x86_64-linux.hart
│ ├── core-cacerts-2018.12.05-20190115014206-x86_64-linux.hart
│ ├── core-cargo-nightly-0.16.0-20190117180323-x86_64-linux.hart
│ ├── core-gcc-8.2.0-20190115004042-x86_64-linux.hart
│ ├── core-gcc-libs-8.2.0-20190115011926-x86_64-linux.hart
│ ├── core-glibc-2.27-20190115002733-x86_64-linux.hart
│ ├── core-gmp-6.1.2-20190115003943-x86_64-linux.hart
│ ├── core-libmpc-1.1.0-20190115004027-x86_64-linux.hart
│ ├── core-linux-headers-4.17.12-20190115002705-x86_64-linux.hart
│ ├── core-mpfr-4.0.1-20190115004008-x86_64-linux.hart
│ ├── core-rust-1.38.0-20190930155321-x86_64-linux.hart
│ └── core-zlib-1.2.11-20190115003728-x86_64-linux.hart
└── keys
└── core-20180119235000.pub
2 directories, 14 files
Looking at the directory size, we see that it is about 1/30th the size of the full set of core packages.
~/my_bootstrap_directory $ du -h .
516M ./artifacts
8.0K ./keys
516M .
Now we can move this set of packages, by any means we want, to a location from which it can be uploaded to a local on-premises Builder – for example, by tarring up the directory and copying it to another location, using a USB drive to carry it across an airgapped boundary, or even using a system like Artifactory to propagate it to the desired location.
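As a hedged sketch of that first option, the following stages the directory for transfer and records a checksum so the receiving side can verify integrity; the archive name is an assumption, and the mkdir line only mocks up the directory layout for a self-contained example:

```shell
# Mock up the directory layout for this self-contained example;
# in practice my_bootstrap_directory already exists from hab pkg download.
mkdir -p my_bootstrap_directory/artifacts my_bootstrap_directory/keys
# Archive the seed directory and record a checksum (archive name is illustrative).
tar -czf habitat_seed.tar.gz my_bootstrap_directory
sha256sum habitat_seed.tar.gz > habitat_seed.tar.gz.sha256
# On the receiving side, after copying both files across the gap:
sha256sum -c habitat_seed.tar.gz.sha256
tar -xzf habitat_seed.tar.gz
```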
Now that we have a cleanly built directory with the packages and keys we want, we can use the hab pkg bulkupload command to perform the upload to a local Builder environment:
$ hab pkg bulkupload -u localhost -c stable my_bootstrap_directory
This command (along with a valid Habitat auth token) will upload the contents of my_bootstrap_directory to the on-premises Builder located on localhost, and promote the packages to the stable channel so that they can be consumed immediately.
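For completeness, here is a minimal sketch of supplying that auth token via the environment; the token value is a placeholder, and the guard simply skips the upload when hab is not on the PATH:

```shell
# hab reads HAB_AUTH_TOKEN from the environment; the value below is a placeholder.
export HAB_AUTH_TOKEN="<your-builder-auth-token>"
# Upload the directory contents and promote them to the stable channel
# (skipped here when the hab binary is not installed).
if command -v hab >/dev/null 2>&1; then
  hab pkg bulkupload -u localhost -c stable my_bootstrap_directory
fi
```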
Once the initial package bootstrap has been completed, there is also a need to keep the packages synchronized across environments – for example, from the public Builder SaaS to on-premises Builder.
Fortunately, the same download and bulk upload strategy can be used to keep the packages in sync. If the hab pkg download command is run a second time and pointed at the same my_bootstrap_directory, it will download and update only the packages that have newer versions.
Since it is now possible to specify a smaller set of more targeted packages, the synchronization should be fairly fast (especially as compared to the current on-prem-archive script).
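Because re-running the download is incremental, the refresh can simply be scheduled. As a sketch, a cron entry along these lines would update the seed directory nightly; the schedule and absolute paths are assumptions, not a prescribed layout:

```shell
# Hypothetical crontab entry: re-run the download each night; only packages
# with newer versions are fetched into the existing directory.
0 2 * * * hab pkg download --target x86_64-linux --download-directory /opt/habitat/my_bootstrap_directory --file /opt/habitat/my_package_set
```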
Putting everything we’ve learned so far together, let’s see how the workflow might function in an airgapped environment.
The diagram below shows a possible airgapped environment. This environment has multiple Builder on-premises installations, which need to be seeded (and kept up to date) with packages from the public SaaS Builder.
Since the Builder on-premises environments do not have any external connectivity, the only mechanism to move packages is via means that are approved by the enterprise policy.
In this scenario, the hab pkg download command could be used to generate the set of desired artifacts, which could then be zipped, moved into the internal environments, and uploaded via the hab pkg bulkupload command.
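Tying the pieces together, the connected-side half of this workflow could be captured in a small helper script; everything here (the script name, paths, and archive name) is illustrative rather than prescribed:

```shell
# Write a hypothetical connected-side helper script; names and paths are assumptions.
cat > seed_onprem.sh <<'EOF'
#!/bin/sh
set -e
# Seed the directory from the public SaaS Builder.
hab pkg download --target x86_64-linux \
  --download-directory my_bootstrap_directory --file my_package_set
# Archive for transfer across the airgap by approved means.
tar -czf habitat_seed.tar.gz my_bootstrap_directory
EOF
chmod +x seed_onprem.sh
# On the airgapped side, after the archive arrives:
#   tar -xzf habitat_seed.tar.gz
#   hab pkg bulkupload -u localhost -c stable my_bootstrap_directory
```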
With the new download and bulk upload Habitat commands, we now have the capability to seed an on-premises Builder with a targeted set of packages, keep those packages synchronized with the public SaaS Builder, and move packages across airgapped environments.
We will be tuning and updating these capabilities in future releases.
As always, we appreciate your feedback in our Habitat Forum, Slack Channel, or logged as issues on one of the Habitat repositories.