This is the fourth post in our series, Why Habitat? You can catch up with Part 1, Part 2, and Part 3.
In our last post, we began covering the Habitat supervisor. We explored how it allows you to manage the runtime lifecycle of your application. In this post, we will cover more of the operational aspects the Habitat supervisor lets you manage.
When you deploy an application in production, you never deploy just one instance. Typically, a whole host of instances will need to be configured in a similar fashion. A config management tool is an ideal choice to set up many identical instances. While that has been an effective approach for many years, Habitat provides a better alternative.
Habitat provides automation that helps configure groups of Habitat supervisors. Service groups allow Habitat supervisors to exchange information with one another, and topologies allow supervisors to self-organize.
Service Groups
As mentioned earlier, the Habitat supervisor is network aware. That awareness allows you to inject configuration into running supervisors via any member of the ring. You can also start new supervisors and, at any point, connect them to already running supervisors. The new supervisors will read configs from their existing peers and allow you to dynamically expand management of your running services.
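As a sketch, joining a new supervisor to a running ring and injecting configuration might look like the following (the myorigin/myapplication package name, the peer address, and the config file are assumptions for illustration):

```shell
# Start a new supervisor and point it at any existing member of the ring
# (the origin/package name and peer address are placeholders):
hab start myorigin/myapplication --peer 10.0.0.1

# Inject updated configuration into the running service group via any
# member of the ring; "1" is a version number used to order updates:
hab config apply myapplication.default 1 updated_config.toml
```

The new supervisor gossips with its peer, learns the current configuration, and begins managing its service without any per-instance setup.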
The interconnected supervisors create what Habitat calls a “service group”. The name of this group defaults to the name of the application service you are creating, so in our example the service group is named myapplication.default. As we grow our ring and supervisors connect to one another, they exchange the configurations of all existing groups. Any new supervisors will then inherit the config of any existing groups.
That inheritance model allows you to scale application services quickly. As you start or add new supervisors, they automatically configure themselves based on the available data about their environment. You can also run multiple instances of an application service. Starting a Habitat application with the --group option allows you to specify a group.
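For instance, two independent instances of the same service can run under different group names (the package name is a placeholder):

```shell
# Run separate service groups of the same package for different
# environments; each group keeps its own configuration:
hab start myorigin/myapplication --group production
hab start myorigin/myapplication --group staging
```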
If you think about this in the context of continuous delivery, it makes it easy to spin up new applications in an environment. Going back to the principles of 12 Factor Apps, this is number 3: “Store config in the environment.”
Topologies
Besides passing configuration to peers, the supervisor can help configure various clustering topologies. At present, Habitat has two topologies built into the supervisor: standalone and leader/follower. Standalone works well for stateless applications where peers are performing application logic. Leader/follower works well for stateful services such as database servers.
In the case of leader/follower, the Habitat supervisors will wait until quorum is reached. Once there is quorum, an election automatically occurs that makes one supervisor the leader and the rest followers. Take, for instance, a database server where one instance is the master and the rest are replicas.
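A minimal sketch of standing up such a cluster (the myorigin/mydatabase package name and the peer address are assumptions):

```shell
# Start three peered supervisors with the leader topology. Once enough
# members are up to reach quorum, an election runs automatically and
# one supervisor becomes the leader:
hab start myorigin/mydatabase --topology leader
hab start myorigin/mydatabase --topology leader --peer 10.0.0.1
hab start myorigin/mydatabase --topology leader --peer 10.0.0.1
```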
Supervisor API
The supervisor has many responsibilities, so it’s helpful to have an API to query the supervisor state at any given time. The Habitat supervisor provides a RESTful API service that gives you visibility into what’s happening. The API provides information through the following endpoints:
- /census – Returns the current census of services on the ring.
- /services – Returns an array of all the services running under this supervisor.
- /services/{name}/{group}/config – Returns the current configuration of service group {name}.
- /services/{name}/{group}/{organization}/config – Same as above, but includes an organization.
- /services/{name}/{group}/health – Returns the current health check status for this service.
- /services/{name}/{group}/{organization}/health – Same as above, but includes an organization.
- /butterfly – Debug information about the rumors stored via the gossip layer.
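Assuming the supervisor's HTTP API is listening on its default port (9631), you can query these endpoints with any HTTP client; for example:

```shell
# Query the supervisor's REST API on its default port, 9631
# (myapplication and default are the example service and group names):
curl http://localhost:9631/services
curl http://localhost:9631/services/myapplication/default/config
curl http://localhost:9631/services/myapplication/default/health
```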
While Habitat includes many defaults, you can extend that API to perform health checks of your particular application service. Placing a file called health_check in your plan’s hooks directory allows you to customize a health check for your particular application. The health check API gives you a standardized interface to check on the status of your application and on the state of the automation Habitat provides.
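As a sketch, a minimal health_check hook for a hypothetical HTTP service might look like this. The URL and /status endpoint are assumptions for illustration; the exit codes follow Habitat's convention of 0 = ok, 1 = warning, 2 = critical, 3 = unknown:

```shell
#!/bin/sh
# Sketch of a plan's hooks/health_check file for a hypothetical web
# service. Habitat reads the hook's exit code to report health:
# 0 = ok, 1 = warning, 2 = critical, 3 = unknown.

check_health() {
  # $1 is the status URL to probe (an assumption for this sketch;
  # a real hook would check its own service).
  if curl --silent --fail --max-time 2 "$1" > /dev/null 2>&1; then
    return 0   # service answered: ok
  else
    return 2   # no answer: critical
  fi
}

check_health "http://localhost:8080/status"
```

The supervisor's /health endpoint then reflects whatever status this hook reports, giving every service the same health interface regardless of what it checks internally.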
Habitat: The Complete Picture
We’ve explored a wide variety of topics over the last four blog posts in this series, so let’s review what we’ve learned so far.
Habitat provides a common way to package your application. Through the use of a plan.sh, Habitat allows you to define your entire application build lifecycle. Habitat also allows you to define various phases of your application’s run lifecycle through the use of hooks. You can (and likely should) also include configuration for your applications in the run phase of your lifecycle through constructs such as templates and a default.toml. By using these design features in tandem, Habitat gives you a single application artifact that contains the application, its required components, and the means to run your application throughout its lifecycle.
Habitat packages are run via the Habitat supervisor. The supervisor starts the application contained in a Habitat package, generates the required configs, and is responsible for ensuring your application behaves correctly during any given part of its lifecycle. Supervisors communicate with other supervisors to exchange configuration information with one another and self-organize into application topologies. Supervisors also expose an API you can use to query the current state, health, and configuration of your managed applications.
So far, we’ve explored a lot of the “how” and mechanics behind Habitat, with a limited focus on the “why” of its design principles. In our next (and final) post, we will explore how and why you would use Habitat in real-world use cases, and why these mechanics make Habitat a unique and particularly effective tool for managing modern applications.