
Dynamic Inventory with Consul and Ansible

04 Dec 2016 . category: Ansible
#Ansible #Consul

In this post I want to go over using Ansible with a dynamic inventory generated from Consul. Ansible is a great tool for Configuration Management and really lives up to its mantra of simplicity. However, when it comes to managing static inventory files, things can get messy quite quickly.


Intro

Inventory in Ansible is a simple concept; however, as you add more and more snowflake-type servers, it can quickly become overcomplicated and cluttered. Dynamic inventory in Ansible allows us to query an endpoint to retrieve inventory data that can be dynamically updated from other sources. This ‘endpoint’ can really be anything that holds information about your nodes, such as VMware, AWS or something like Consul.

Consul Cluster

Since we’re going to need a Consul Cluster running, we may as well go the quickest and easiest route and use the progrium/consul Docker image.

Spin up the three Consul nodes as described under the Testing a Consul cluster on a single host section.
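
Per that image's README, the commands look roughly like this (flags and join-IP handling may differ slightly between image versions):

docker run -d --name node1 -h node1 progrium/consul -server -bootstrap-expect 3
JOIN_IP="$(docker inspect -f '{{.NetworkSettings.IPAddress}}' node1)"
docker run -d --name node2 -h node2 progrium/consul -server -join $JOIN_IP
docker run -d --name node3 -h node3 progrium/consul -server -join $JOIN_IP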

[Screenshot: Kitematic showing the three running Consul server containers]

The above picture is taken from Docker Kitematic, which is a simple way to visualize our Containers and gives us up-to-date information with constant log tailing.

Adding Clients

So, once our Consul cluster is up and running, we will want to add some clients to it.

For this example we'll just use CentOS, which we can get going quickly by running docker run --expose=22 -d bundyfx/centos-consul

You can jump onto this container by running: docker exec -it {containerID} /bin/bash

This Container image is based on the centos-ssh image, as it comes set up and ready for SSH access (required for Ansible).

It also has the Consul binary installed; if you wanted to do this on a fresh CentOS image yourself, you could do so like this:

yum -y install unzip wget
cd /usr/local/bin
wget https://releases.hashicorp.com/consul/0.7.1/consul_0.7.1_linux_amd64.zip
unzip *.zip

You should be able to run consul from the container to see the help.

Joining the Cluster

Once we have Consul installed we can simply join our cluster as a Client by running:

consul agent -join 172.17.0.4 -data-dir /tmp/consul -config-dir=/etc/consul.d

In this case, 172.17.0.4 is one of the three nodes within the Consul Cluster. We also need to specify a directory for Consul to use as its data directory and a directory to use for any configurations (services/checks).

[Screenshot: logs showing the new node registering with the cluster]

You can check the log in Kitematic or by running docker logs on any of the nodes in the Consul Cluster to see more information about the new node being registered. Of course you can also run consul members to see the list of nodes that are registered.

Let’s add in a few more nodes just so we have a bit more data to work with.

[Screenshot: consul members listing the three servers and three clients]

Now we have our three clients and three servers.

Adding Services

This is all well and good; however, none of these clients are running any specific Services yet. Let's create a Service on each of them so that we can filter on it in Ansible later on.

As you may remember from my previous post on Getting to know Consul we went through setting up a Consul Service. For this example let’s keep things simple and go with a very similar service.

On one of our client containers let’s make a nodejs.json:

echo '{"service": {"name": "NodeJS", "tags": ["nodejs"], "port": 80}}' \
| tee /etc/consul.d/nodejs.json

And redis.json on a different container:

echo '{"service": {"name": "Redis", "tags": ["redis"], "port": 6379}}' \
| tee /etc/consul.d/redis.json

And let’s do mongodb.json also on our third container:

echo '{"service": {"name": "MongoDB", "tags": ["mongodb"], "port": 27019}}' \
| tee /etc/consul.d/mongodb.json

Now that those configurations are in place we can simply run consul reload to trigger an update of our agent. This will look into our configuration directory and load in any services or checks specified.

After our reload, we can hit our cluster to return a list of all services that are defined:

curl 172.17.0.2:8500/v1/catalog/services
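
The response is a JSON object of service names to their tags; given the services above, it should look roughly like this (the consul service itself is registered automatically):

{"MongoDB":["mongodb"],"NodeJS":["nodejs"],"Redis":["redis"],"consul":[]}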

[Screenshot: the service list returned by the catalog endpoint]

Running Ansible

A little note here: you should never need to use Configuration Management on Containers to configure… anything. This is simply an example of how dynamic inventory works in Ansible, and Containers are an easy way to demonstrate this functionality.

Like everything else these days, Ansible is available on Docker Hub. You can simply download it by running docker run -it ansible/centos7-ansible.

Now that we're on our Ansible container, let's download the Consul binary again for use later on. Once we've done that, let's use the same command as before to join our Ansible Container to the Consul Cluster. This time, however, you will want to save the Consul binary to the /opt/ansible/ansible/bin/ path for simplicity.
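
Reusing the commands from earlier, that looks roughly like this (172.17.0.4 being one of the Consul servers, as before; the agent is backgrounded so we keep our shell):

yum -y install unzip wget
cd /opt/ansible/ansible/bin
wget https://releases.hashicorp.com/consul/0.7.1/consul_0.7.1_linux_amd64.zip
unzip consul_0.7.1_linux_amd64.zip
consul agent -join 172.17.0.4 -data-dir /tmp/consul -config-dir=/etc/consul.d &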

Dynamic Inventory

So now that we've got everything running as planned, it's time to go through dynamic inventory with Consul!

The magic glue that ties this all together is the official consul_io.py code that is available in the Ansible repository.

Go ahead and clone this repository down. Once you’ve got it locally in the Ansible Container copy the consul_io.py and the consul.ini into your /opt/ansible/ansible/bin directory.

Crack open the consul.ini file and throw in the name of one of the server nodes within the Consul Cluster.
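
It's a standard INI file; a minimal sketch looks something like this (option names can vary between versions of the script, so check the comments in the copy you cloned):

[consul]
uri = http://172.17.0.2:8500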

[Screenshot: consul.ini pointing at one of the Consul server nodes]

Before we can run our inventory query we will need to do a quick pip install:

pip install python-consul

Once that’s done we’re good to go.

Simply run the consul_io.py file to see an output taken directly from Consul.
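
Like other Ansible dynamic inventory scripts, it should accept the standard --list flag:

python consul_io.py --list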

[Screenshot: JSON inventory output from consul_io.py]

So now we have this data, let's get Ansible to run a command against nodes from our output. We can simply pass the consul_io.py file in as our inventory using -i. We also need to choose a group to target for our configuration.
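
Assuming the script builds a group per service tag (run it with --list first to confirm the exact group names, as they depend on the script version), pinging the nodejs nodes would look roughly like:

ansible nodejs -i consul_io.py -m ping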

[Screenshot: Ansible run output against the Consul-derived group]

This makes it effortless for Ansible to update new nodes that join Consul.

For testing purposes you can use the Insecure private key as I have for this demo.

If you would like to use this same container image to run through this scenario, you will need to add a group_vars/all/vars.yml file with the SSH username for the image (app-admin) and its password (Password101).
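
A minimal sketch of that vars.yml (variable names assume Ansible 2.x; older releases used ansible_ssh_user):

ansible_user: app-admin
ansible_ssh_pass: Password101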

In a production scenario, you would want to set up health checks to determine the state of each node. If these health checks fail, the node can be removed from Consul, thus updating the inventory. A node can leave Consul by running consul force-leave.
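
As a sketch, a check definition in the same style as our service files might look like this (the name, script and interval here are illustrative, using Consul 0.7's script checks):

echo '{"check": {"name": "nodejs-http", "script": "curl -s localhost:80 >/dev/null", "interval": "10s"}}' \
| tee /etc/consul.d/nodejs-check.json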

Conclusion

Ansible is a great tool for Configuration Management. Couple it with a new age tool such as Consul for dynamic inventory and you’ve got yourself a truly flexible system that requires no manual touch-ups to get nodes configured.

