We recently released a new Habitat package for deploying JavaEE Web applications on Tomcat 8. The best way to learn how to take advantage of this new package is with an example, so that’s what we’ll do in this post.
As part of this example, we’ll take the perspective of the Java application developer. We’ll make design decisions that are important to a developer, configure our plan in such a way that it best accommodates our developer-centric workflow, and still be able to provide all the automation necessary to deploy our application when handed off to other teams.
The following video is a companion to this article. You may find it helpful to watch the video first before diving into the details of this blog post.
Our example application is a simple JavaEE Web application that renders a map of all the National Parks in the United States. Users can browse the map, zoom in, and click on any park to get information about it.
The application utilizes a MongoDB backend that contains the information about the National Parks. A jQuery front end handles the map rendering and calls to the RESTful backend.
As with most Java applications, Maven is used to compile the source files into a WAR file suitable for deployment into Apache Tomcat or various other application containers.
Finally, the sample application is available on GitHub at https://github.com/billmeyer/national-parks.
Choosing Habitat to deploy Java web applications has some great benefits worth considering.
plan.sh
Let’s take a look at the plan.sh file that we’ll be using to package up our national-parks application.
We have a couple of requirements we want to meet as part of its design.
So, with these considerations in mind, we can begin to step through our plan.sh file.
pkg_name=national-parks
pkg_description="A sample JavaEE Web app deployed in the Tomcat8 package"
pkg_origin=billmeyer
pkg_version=0.1.3
pkg_maintainer="Bill Meyer <bill@chef.io>"
pkg_license=('Apache-2.0')
pkg_source=https://github.com/billmeyer/national-parks
pkg_deps=(core/tomcat8 billmeyer/mongodb)
pkg_build_deps=(core/git core/maven)
pkg_expose=(8080)
pkg_svc_user="root"
The majority of these entries are self-explanatory. Worth noting are pkg_deps, which declares our runtime dependencies (core/tomcat8 and billmeyer/mongodb); pkg_build_deps, which declares the tools we need only at build time (core/git and core/maven); and pkg_svc_user, which runs the service as root.
In the next section of plan.sh, we supply our own implementation of the available callbacks.
We override the do_download() callback simply because we want to pull from GitHub:
do_download()
{
  build_line "do_download() =================================================="
  cd ${HAB_CACHE_SRC_PATH}
  build_line "\$pkg_dirname=${pkg_dirname}"
  build_line "\$pkg_filename=${pkg_filename}"
  if [ -d "${pkg_dirname}" ]; then
    rm -rf ${pkg_dirname}
  fi
  mkdir ${pkg_dirname}
  cd ${pkg_dirname}
  GIT_SSL_NO_VERIFY=true git clone --branch v${pkg_version} https://github.com/billmeyer/national-parks.git
  return 0
}
As you can see, this is as simple as creating a new package directory to store our source code in and cloning the appropriate repository from GitHub.
Note: as a matter of convention, our plan’s pkg_version matches a git tag we create in GitHub. This allows us to pull a specific release from GitHub that matches the version this plan file was written for.
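This convention can be sketched with a throwaway local repository standing in for GitHub (the paths and the empty demo commit below are purely illustrative):

```shell
# Hypothetical demo of the "pkg_version matches a git tag" convention,
# using a local throwaway repo in place of GitHub.
set -e
rm -rf /tmp/np-demo /tmp/np-clone
mkdir /tmp/np-demo && cd /tmp/np-demo
git init -q .
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "release 0.1.3"
# Tag the release commit so the tag name mirrors pkg_version=0.1.3
git tag v0.1.3
# do_download() can then clone exactly that release, just as it does from GitHub:
git clone -q --branch v0.1.3 /tmp/np-demo /tmp/np-clone
git -C /tmp/np-clone describe --tags
```

The same `--branch v${pkg_version}` flag in do_download() is what pins the clone to the tagged release.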
Next we provide our own implementation of do_clean() and do_unpack():
do_clean()
{
  build_line "do_clean() ===================================================="
  return 0
}

do_unpack()
{
  # Nothing to unpack as we are pulling our code straight from GitHub
  return 0
}
Again, these overrides reflect the fact that we are pulling from GitHub, so we don’t want the default implementations to do anything.
Next is the do_build() callback. We need to supply a version that will build our application using Maven:
do_build()
{
  build_line "do_build() ===================================================="
  # Maven requires JAVA_HOME to be set, and can be set via:
  export JAVA_HOME=$(hab pkg path core/jdk8)
  cd ${HAB_CACHE_SRC_PATH}/${pkg_dirname}/${pkg_filename}
  mvn package
}
Because we’re building via Habitat, we’ve declared our dependencies on core/jdk8 and core/maven, so we can ask Habitat for the location of these packages via the hab pkg path command.
For example, to set JAVA_HOME, we can ask Habitat where the JDK8 installation resides:
[7][default:/src:0]# hab pkg path core/jdk8
/hab/pkgs/core/jdk8/8u92/20160620143238
and set our JAVA_HOME to point to it:
export JAVA_HOME=$(hab pkg path core/jdk8)
With the application source compiled into a WAR file, it’s time to copy the national-parks.war file to the Tomcat8 webapps directory. We do this by overriding the do_install() callback.
do_install()
{
  build_line "do_install() =================================================="
  # Our source files were copied over to the HAB_CACHE_SRC_PATH in do_build(),
  # so now they need to be copied into the root directory of our package through
  # the pkg_prefix variable. This is so that we have the source files available
  # in the package.
  local source_dir="${HAB_CACHE_SRC_PATH}/${pkg_dirname}/${pkg_filename}"
  local webapps_dir="$(hab pkg path core/tomcat8)/tc/webapps"
  cp ${source_dir}/target/${pkg_filename}.war ${webapps_dir}/

  # Copy our seed data so that it can be loaded into Mongo using our init hook
  cp ${source_dir}/national-parks.json $(hab pkg path ${pkg_origin}/national-parks)/
}
Here we set a couple of local variables: one that references where our application WAR file resides, and another for where the webapps directory exists in Tomcat. Then we simply copy the file over to Tomcat.
We also need to copy our national park seed data (national-parks.json) to the install directory so we can load it into MongoDB when our application is initialized.
Lastly, we override the do_verify() callback because we are cloning out of GitHub and there is no source tarball to compare the sha sum to.
do_verify()
{
  build_line "do_verify() ==================================================="
  return 0
}
Now we can begin implementing the hooks we need to automate the deployment and configuration of our application.
Our application comes with seed data that we must load into our MongoDB instance. To do this, we supply our own hooks/init file that will use the mongoimport tool to load the data from our national-parks.json file:
#!/bin/bash
exec 2>&1

echo "Seeding Mongo Collection"
MONGODB_HOME=$(hab pkg path billmeyer/mongodb)

source {{pkg.svc_config_path}}/mongoimport-opts.conf
echo "\$MONGOIMPORT_OPTS=$MONGOIMPORT_OPTS"

# billmeyer/mongodb requirement to run mongoimport properly:
ln -s $(hab pkg path core/glibc)/lib/ld-2.22.so /lib/ld-linux-x86-64.so.2 2>/dev/null

${MONGODB_HOME}/bin/mongoimport --drop -d demo -c nationalparks --type json \
  --jsonArray --file $(hab pkg path billmeyer/national-parks)/national-parks.json ${MONGOIMPORT_OPTS}
NOTE: the mongoimport-opts.conf file is NOT a configuration file from Mongo’s perspective; it’s a necessity on the Habitat side to get files in the /config directory to have their variable substitution performed at runtime. When Habitat runs an application, it will only look for files in the /config directory ending in a .conf extension; all other extensions will be ignored. Because we want to use Habitat’s Runtime Binding to locate our running MongoDB instance, we need to supply a file with a .conf extension so that Habitat will perform the variable substitution we need for runtime binding to work properly. Future releases of Habitat will hopefully remedy this.
We can now look at the hooks/run file, which will be responsible for starting our application. In our case we simply start up Tomcat to run the application.
#!/bin/bash
exec 2>&1

echo "Starting Apache Tomcat"
export JAVA_HOME=$(hab pkg path core/jdk8)
export TOMCAT_HOME="$(hab pkg path core/tomcat8)/tc"

source {{pkg.svc_config_path}}/catalina-opts.conf
echo "\$CATALINA_OPTS=$CATALINA_OPTS"

exec ${TOMCAT_HOME}/bin/catalina.sh run
With our hooks in place, we can move on to the configurable settings we want to allow to be overridden at runtime.
As mentioned in the Considerations section, our application needs to be told where to find the hostname and port number where our MongoDB instance is running. Furthermore, we want to take advantage of Habitat’s Runtime Binding to have the Habitat Supervisor running MongoDB be able to share the hostname and port number when we start up our Tomcat instance as its peer.
As mentioned above, config/mongoimport-opts.conf isn’t a formal configuration file; rather, it’s a way we can build a file dynamically at runtime that we can then source into our hooks/init script to supply the dynamically generated values (in our case, the MongoDB host and port) to mongoimport.
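As a sketch of what this looks like at runtime, we can hand-write the kind of file Habitat renders and source it the way hooks/init does (the /tmp path and the host/port values below are illustrative, not pulled from a real Supervisor ring):

```shell
# Stand-in for the rendered mongoimport-opts.conf; in a real ring, Habitat
# substitutes the bound MongoDB member's ip and port. Values are illustrative.
cat > /tmp/mongoimport-opts.conf <<'EOF'
export MONGOIMPORT_OPTS="--host=172.17.0.2 --port=27017"
EOF

# hooks/init sources the rendered file, then passes the options to mongoimport:
source /tmp/mongoimport-opts.conf
echo "$MONGOIMPORT_OPTS"
```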
In this example, we start Tomcat with a command similar to:
$ hab start billmeyer/national-parks --peer 172.17.0.2 --bind database:mongodb.default
By doing this, we create a binding within our Habitat Supervisor with the name database that will be assigned all of the service group information that we are interested in. We can then access the elements of the service group (i.e., {{ip}} and {{port}}) to tell mongoimport where to connect to.
{{~#if bind.has_database}}
{{~#each bind.database.members}}
export MONGOIMPORT_OPTS="--host={{ip}} --port={{port}}"
{{~/each}}
{{~/if}}
Like the config/mongoimport-opts.conf file above, we need to pass the database host and port, this time as Java system properties (-D) on the command line.
{{~#if bind.has_database}}
{{~#each bind.database.members}}
export CATALINA_OPTS="-DMONGODB_SERVICE_HOST={{ip}} -DMONGODB_SERVICE_PORT={{port}}"
{{~/each}}
{{~/if}}
Tomcat will use the CATALINA_OPTS environment variable to push these -D values to the JVM, where they can be read by our Java code.
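To sketch that flow (the host and port values here are illustrative): catalina.sh appends CATALINA_OPTS to the java command line, so each -D pair becomes a JVM system property that application code can read with System.getProperty():

```shell
# Illustrative values; in our plan these come from the rendered catalina-opts.conf.
export CATALINA_OPTS="-DMONGODB_SERVICE_HOST=172.17.0.3 -DMONGODB_SERVICE_PORT=27017"
# catalina.sh effectively builds a command line like this one, making the
# properties visible to System.getProperty("MONGODB_SERVICE_HOST") in our code:
echo "java ${CATALINA_OPTS} org.apache.catalina.startup.Bootstrap start"
```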
With all of the necessary plan files authored, we can build our Habitat package.
NOTE: The entire Habitat Plan for this example can be pulled from GitHub:
$ cd ~
$ git clone https://github.com/billmeyer/national-parks-plan.git
Then enter the Habitat Studio from within the ~/national-parks-plan directory:
On macOS, run:

$ hab studio enter

On Linux, run:

$ sudo hab studio enter
[1][default:/src:0]# build
   : Loading /src/plan.sh
   national-parks: Plan loaded
   national-parks: hab-plan-build setup
   national-parks: Using HAB_BIN=/hab/pkgs/core/hab/0.9.0/20160815225003/bin/hab for installs, signing, and hashing
   national-parks: Resolving dependencies
» Installing core/git
↓ Downloading core/git/2.7.4/20160729215550
...
» Signing /hab/cache/artifacts/.billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.tar.xz
☛ Signing /hab/cache/artifacts/.billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.tar.xz with billmeyer-20160629135755 to create /hab/cache/artifacts/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart
★ Signed artifact /hab/cache/artifacts/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart.
mkdir: created directory '/src/results'
'/hab/cache/artifacts/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart' -> '/src/results/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart'
   national-parks: hab-plan-build cleanup
   national-parks:
   national-parks: Source Cache: /hab/cache/src/national-parks-0.1.3
   national-parks: Installed Path: /hab/pkgs/billmeyer/national-parks/0.1.3/20160826185302
   national-parks: Artifact: /src/results/billmeyer-national-parks-0.1.3-20160826185302-x86_64-linux.hart
   national-parks: Build Report: /src/results/last_build.env
   national-parks: SHA256 Checksum: a7ee5012ca1f30de87686f433cb5092baad1e7525249d73bdb3e527306305f8e
   national-parks: Blake2b Checksum: b0e2d83b411c302bf649bfbc7e53670e833ff75b808081e6e4837c7bc91c6803
   national-parks:
   national-parks: I love it when a plan.sh comes together.
   national-parks:
   national-parks: Build time: 13m17s
Once the build is complete, we have a Habitat package file that we can either run directly or export to other formats for our preferred runtime environment.
For this example, we want to export our application’s Habitat package into a Docker image that we can then run via docker. We also want to export the MongoDB package into a Docker image as well.
From within the Habitat Studio, execute the following:
[2][default:/src:0]# hab pkg export docker billmeyer/mongodb
core/hab-pkg-dockerize is not installed
Searching for core/hab-pkg-dockerize in remote https://willem.habitat.sh/v1/depot
» Installing core/hab-pkg-dockerize
↓ Downloading core/hab-pkg-dockerize/0.9.0/20160815225538
    3.93 KB / 3.93 KB / [==================================================================================================================] 100.00 % 45.65 MB/s
→ Using core/acl/2.2.52/20160612075215
→ Using core/attr/2.4.47/20160612075207
→ Using core/bash/4.3.42/20160729192720
...
Step 7 : ENTRYPOINT /init.sh
 ---> Running in 5bbb01fafc60
 ---> cac9aae3a725
Removing intermediate container 5bbb01fafc60
Step 8 : CMD start billmeyer/mongodb
 ---> Running in c0533ce06691
 ---> 53c5601a02c6
Removing intermediate container c0533ce06691
Successfully built 53c5601a02c6
[3][default:/src:0]# hab pkg export docker billmeyer/national-parks
hab-studio: Creating Studio at /tmp/hab-pkg-dockerize-Nw3A/rootfs (baseimage)
> Using local package for billmeyer/national-parks
> Using local package for billmeyer/mongodb/3.2.6/20160824195527 via billmeyer/national-parks
> Using local package for core/acl/2.2.52/20160612075215 via billmeyer/national-parks
...
Step 7 : ENTRYPOINT /init.sh
 ---> Running in 364991c8ff8d
 ---> 6472464ba9c0
Removing intermediate container 364991c8ff8d
Step 8 : CMD start billmeyer/national-parks
 ---> Running in bf66f488ba62
 ---> 67dbaef2b854
Removing intermediate container bf66f488ba62
Successfully built 67dbaef2b854
$ docker images
REPOSITORY                 TAG                    IMAGE ID       CREATED         SIZE
billmeyer/national-parks   0.1.3-20160826185302   67dbaef2b854   3 minutes ago   710.7 MB
billmeyer/national-parks   latest                 67dbaef2b854   3 minutes ago   710.7 MB
billmeyer/mongodb          3.2.6-20160824195527   53c5601a02c6   8 minutes ago   303.2 MB
billmeyer/mongodb          latest                 53c5601a02c6   8 minutes ago   303.2 MB
You should see an entry for billmeyer/national-parks and one for billmeyer/mongodb.
From a terminal, execute the following:
$ docker run -it -p 27017:27017 billmeyer/mongodb
NOTE: If you are running on Linux, run this command via sudo.
Port 27017 is what Mongo uses to listen for incoming connections, so we tell Docker to open that port up to external connections.
You will notice that as it starts, the Habitat Supervisor displays its IP address:
hab-sup(MN): Starting billmeyer/mongodb
hab-sup(TP): Child process will run as user=root, group=hab
hab-sup(GS): Supervisor 172.17.0.3: 84a8cbf3-839c-4ad5-bb13-4766e0e5432e
hab-sup(GS): Census mongodb.default: fbf93825-d0e3-4866-bde0-56c29454abd1
hab-sup(GS): Starting inbound gossip listener
hab-sup(GS): Starting outbound gossip distributor
hab-sup(GS): Starting gossip failure detector
hab-sup(CN): Starting census health adjuster
...
172.17.0.3 in this example. We will need to pass it as our peer when we start up Tomcat in the next step.
$ docker run -it -p 8080:8080 billmeyer/national-parks --peer 172.17.0.3 --bind database:mongodb.default
We start the Tomcat instance and pass the IP address of our MongoDB peer, along with a bind option to enable the runtime binding behavior we want to take advantage of, as explained earlier.
hab-sup(MN): Starting billmeyer/national-parks
hab-sup(TP): Child process will run as user=root, group=hab
hab-sup(GS): Supervisor 172.17.0.4: e57136c8-b1d4-4efe-b8e7-308cf33d15e7
hab-sup(GS): Census national-parks.default: 564de0e4-d03f-4523-82b0-3d75346cf5fd
hab-sup(GS): Starting inbound gossip listener
As the Tomcat supervisor starts, you can see it join its peer:
hab-sup(GS): Joining gossip peer at 172.17.0.3:9634
hab-sup(GS): Starting outbound gossip distributor
hab-sup(GS): Starting gossip failure detector
hab-sup(CN): Starting census health adjuster

When this happens, Habitat triggers a refresh of the files in our plan’s /config directory. As it updates, we can see confirmation of the update in the startup output:
hab-sup(SC): Updated catalina-opts.conf
hab-sup(SC): Updated mongoimport-opts.conf
hab-sup(TP): Restarting because the service config was updated via the census
Next, our hooks/init script runs, where we see positive confirmation that the dynamic runtime binding has been applied and that we do, in fact, have a configured IP address and port available to use:
init(PH): Seeding Mongo Collection
init(PH): $MONGOIMPORT_OPTS=--host=172.17.0.3 --port=27017
Lastly, our init script loads our seed data into our MongoDB instance:
init(PH): 2016-08-26T19:31:25.254+0000 connected to: 172.17.0.3:27017
init(PH): 2016-08-26T19:31:25.254+0000 dropping: demo.nationalparks
init(PH): 2016-08-26T19:31:25.279+0000 imported 359 documents

At this point, our hooks/run script takes over, starting Tomcat as normal.
If you are running Docker natively on Linux, you can point your browser directly at the IP address of your Tomcat instance:
http://somehost:8080/national-parks
If you are running Docker on macOS, you need to get the IP address from Docker using a command like:
$ open "http://$(docker-machine ip default):8080/national-parks"