A Seamlessly Integrated Terraform & Ansible Driven CI/CD Process

Photo by Sagar Dani on Unsplash

Recent advances in Cloudify plugins have enabled the seamless integration of Terraform, Ansible, and Jenkins in a declarative framework that can span continuous integration and deployment. This article looks at the application of a general-purpose declarative orchestrator in the devops space, one that can coordinate services at any level and truly support both "dev" and "ops".

The construction and configuration of environments for testing requires infrastructure and configuration automation. Terraform (HashiCorp) and Ansible (Red Hat) are popular solutions in each of these domains: Terraform specializes in "Infrastructure as Code" (essentially declarative infrastructure), and Ansible specializes in configuration automation. Cloudify can simplify the coordination of the two, and since it is an active runtime server, it can be used for continuous deployment of production workloads as well.


The Cloudify Terraform plugin addresses a couple of challenges: the sharing of Terraform state and the capturing of log output. By default, the plugin runs its own instance of Terraform and makes use of standard Terraform templates. A Cloudify blueprint can pass variables to the Terraform template, and the Cloudify server stores the resulting Terraform state internally, where other services in the blueprint (like Ansible) can easily access it for coordination.

tf_module:
  type: cloudify.nodes.terraform.Module
  properties:
    resource_config:
      source: resources/terraform/template.zip
      variables:
        access_key: { get_secret: aws_access_key_id }
        secret_key: { get_secret: aws_secret_access_key }
        aws_region: { get_input: aws_region_name }
        aws_zone: { get_input: aws_zone_name }
        admin_user: { get_input: agent_user }
        admin_key_public: { get_attribute: [agent_key, public_key_export] }

The snippet above represents a sample Terraform module that creates infrastructure on AWS. The variables property is passed to Terraform for use with the template named in the source property.

When Cloudify evaluates (runs 'install' on) a blueprint containing a cloudify.nodes.terraform.Module node, it runs the terraform init command followed by terraform plan and terraform apply. The resulting state is stored on the Cloudify Terraform node instance as the runtime property resources. Once the resources property containing the Terraform state is created, other nodes in the blueprint can easily access and use it. It should also be noted that the Terraform state, once stored in Cloudify, is replicated when the manager runs in clustered mode.
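For example, another node in the same blueprint can pull the stored state with the get_attribute intrinsic function. The sketch below is illustrative only; the config_from_state node and its script are hypothetical:

config_from_state:
  type: cloudify.nodes.Root
  interfaces:
    cloudify.interfaces.lifecycle:
      configure:
        implementation: scripts/configure.py   # hypothetical script
        inputs:
          # Hand the stored Terraform state to the operation.
          terraform_state: { get_attribute: [tf_module, resources] }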

In the Cloudify blueprint, Ansible nodes are activated after Terraform has had its way with the infrastructure. Cloudify forwards Terraform state to Ansible so that, for example, IP addresses of VMs can be used for configuration.

ansible_playbook:
  type: custom.nodes.ansible.Executor
  properties:
    playbook_path: playbook.yaml
  relationships:
    - type: custom.relationships.ansible_connected_to_terraform
      target: tf_module

The Ansible node in the blueprint includes a relationship to the Terraform node, which ensures that it doesn't run until the infrastructure is ready. Since the Terraform node has stored the Terraform state in Cloudify, the Ansible node can grab what it needs, in this case the host information required for the Ansible inventory. Note that to simplify the example, a custom node type (no code, just YAML) derived from cloudify.nodes.ansible.Executor is used here. The custom relationship does require some code to build the inventory from the Terraform state. The details are beyond the scope of this article, but the example code looks like:

from cloudify import ctx
from cloudify.state import ctx_parameters as inputs

if __name__ == '__main__':
    # This script runs in a relationship operation: the source is the
    # Ansible node, the target is the Terraform node holding the state.
    ansible_node = ctx.source.instance.runtime_properties
    terraform_node = ctx.target.instance.runtime_properties

    # Initialize the Ansible inventory structure if it isn't there yet.
    if 'sources' not in ansible_node:
        ansible_node['sources'] = {
            'cloud_resources': {
                'hosts': {}
            }
        }

    # Walk the Terraform state and add an inventory entry for each
    # Elastic IP instance that Terraform created.
    for cloud_resource_compute in terraform_node['resources']['eip']['instances']:
        public_ip = cloud_resource_compute['attributes']['public_ip']
        ansible_inventory_entry = {
            public_ip: {
                'ansible_host': public_ip,
                'ansible_user': inputs.get('agent_user'),
                'ansible_become': True,
                'ansible_ssh_private_key_file': inputs.get('private_key'),
                'ansible_ssh_common_args': '-o StrictHostKeyChecking=no'
            }
        }
        ansible_node['sources']['cloud_resources']['hosts'].update(
            ansible_inventory_entry)

The code iterates over the compute instances in the Terraform state and builds an Ansible inventory entry for each host.
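For reference, the YAML that wires such a script into the blueprint could look roughly like the sketch below. The script path and the secret name are assumptions for illustration:

node_types:
  custom.nodes.ansible.Executor:
    derived_from: cloudify.nodes.ansible.Executor

relationships:
  custom.relationships.ansible_connected_to_terraform:
    derived_from: cloudify.relationships.connected_to
    source_interfaces:
      cloudify.interfaces.relationship_lifecycle:
        preconfigure:
          implementation: scripts/build_inventory.py   # the script above
          inputs:
            agent_user: { get_input: agent_user }
            private_key: { get_secret: agent_key_private }   # hypothetical secret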


One of the main benefits of using the Cloudify manager is its capability for day 2 operations. These include scaling and healing workloads, and running custom workflows to perform arbitrary administrative tasks. The scaling capability extends nicely to the Terraform/Ansible integration via the scaling groups feature.

Using scaling groups, we can logically group Terraform/Ansible nodes so that they scale as a unit. This is done via Cloudify's built-in scale workflow and requires only a little extra configuration.

The simplest relevant example is a Terraform/Ansible node pair that manages a single compute host. To make it work, we need a node pair like the one we've already seen above, plus some additional YAML to tell Cloudify about the desired scaling behavior.

Assuming the same node names as above (tf_module and ansible_playbook), we can define a group in the blueprint:

groups:
  group1:
    members: [tf_module, ansible_playbook]

Then we only need to define a scaling policy to indicate how the group is used:

policies:
  scale_policy1:
    type: cloudify.policies.scaling
    properties:
      default_instances: 1
    targets: [group1]

That’s all we need. Once the deployment is up and running, the scale workflow can be triggered to increase (or decrease) capacity.

cfy executions start scale -d <deployment_id> -p scalable_entity_name=group1
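The built-in scale workflow also accepts a delta parameter (the number of instances to add, defaulting to 1); a negative value scales the group in. For example:

cfy executions start scale -d <deployment_id> -p scalable_entity_name=group1 -p delta=-1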

There is more to scaling configuration than is covered here; please read the relevant docs.


If the above seems a little esoteric, especially if you’re not terribly familiar with Cloudify or just want to do testing, there is yet another layer you can add that can help, and it might already be part of your testing tool chain. The Cloudify Jenkins plugin hides Cloudify operation and configuration behind task definitions. This method lets you run Cloudify-driven orchestrations without writing blueprints!

By setting the outputs_file parameter in the Cloudify Terraform build step, the outputs (consisting of the Terraform state) are written to a JSON file that can easily be consumed by later steps. See the plugin docs for details. Note that this step assumes a Cloudify manager is running with the Terraform plugin installed, and that Terraform itself is available to it.
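Assuming the same Elastic IP resource used in the inventory script earlier, the outputs file could look roughly like this (values are illustrative):

{
  "resources": {
    "eip": {
      "instances": [
        {
          "attributes": {
            "public_ip": "203.0.113.10"
          }
        }
      ]
    }
  }
}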

After the Terraform step is done, a similar Ansible step can be executed to pick up the Terraform outputs file, extract the needed information (such as inventory details), and perform configuration based on the Terraform state. Again, without writing blueprints or really knowing much about Cloudify itself.

We’ve seen how an extra layer of automation can be useful for organizing the operation of services commonly used in devops scenarios. What Cloudify provides in this CI/CD use case is a general-purpose declarative framework and runtime server that can model services clearly, and support both testing and production environments while using native artifacts and descriptors.
