In the world of IT automation, we've all told ourselves the same lie: "If it's in code, it's repeatable." We write beautiful Ansible playbooks, check them into Git, and pat ourselves on the back for a job well done. Then, reality hits. The playbook that ran flawlessly on your laptop explodes in the CI/CD pipeline. A new teammate tries to run it and gets a cryptic Python error. Why? Because a ghost is haunting your automation: the "works on my machine" syndrome.
This phantom menace isn't a bug in Ansible. It's the subtle drift between environments. It’s the slightly different Python version, that one missing library, or that Ansible Collection that got updated without you realizing it. These tiny cracks in the foundation of your runtime undermine the very promise of automation—reliability.
For years, we fought this ghost with a hodgepodge of rituals: sacred virtualenvs, meticulously detailed READMEs (that were outdated by Tuesday), and a whole lot of hoping for the best. It was a fragile peace, easily broken and impossible to scale. As our automation grew, so did the chaos. We needed something better. We needed to exorcise this ghost for good.
This is where Execution Environments (EEs) come in. They are the ghost trap.
An Execution Environment isn't just a fancy new term; it's a container image that holds everything Ansible needs to do its job. Think of it as a pre-packaged, portable control node. Inside this box, you have a specific version of Ansible Core, all the right Python libraries, the exact Ansible Collections you need, and any other command-line tools your playbooks depend on.
By bundling the runtime with the automation, EEs deliver a self-contained, shareable, and, most importantly, reproducible environment. The exact same runtime you use on your laptop is the one that hums along in your testing pipeline and the one that powers your production jobs.
This guide is your roadmap to leaving the haunted house of inconsistent runtimes behind. We'll start with a therapy session on why the old way was so painful, then dive deep into what makes an EE tick. We'll get our hands dirty building our own custom EEs with ansible-builder, and finally, we'll see how to make them a seamless part of your daily workflow with tools like ansible-navigator. This is the future of Ansible, and it's time to build a more stable, scalable, and collaborative way to automate.
Chapter 1: The Automation Ghost Story We All Know
To really get why EEs are such a game-changer, let's revisit the familiar pain of the classic Ansible control node.
The House of Cards We Called a Control Node
In the old days, setting up a "control node" meant an engineer would meticulously configure their workstation or a server. They'd install:
A specific version of ansible-core.
Python, usually whatever the operating system came with.
A constellation of Python libraries, installed with pip either globally (yikes!) or into a project-specific virtual environment.
Ansible Collections, pulled down with ansible-galaxy.
A handful of other tools a playbook might need, like kubectl, aws-cli, or git.
For one person on one project, this felt manageable. But the moment you added another person or another project, the whole thing started to wobble.
Does Any of This Sound Familiar?
Dependency Hell: You're working on Project A, which needs the latest and greatest kubernetes Python client. But over on Project B, there's a legacy playbook that only works with a version from two years ago. You install one, the other breaks. You're now trapped in a complicated dance of virtual environments, each with its own care and feeding instructions.
The "System Python" Landmine: You, or maybe a junior engineer, pip install something directly into the system's Python. Everything is fine for months. Then, an OS security patch updates the system Python, and suddenly, half your playbooks are failing with errors you've never seen before.
The Invisible Dependency: Your playbook has a clever little shell task that uses jq to parse some JSON. It works great for you because you installed jq months ago and forgot about it. A new teammate tries to run the playbook, and it fails. Why? The dependency on jq wasn't written down anywhere; it just lived in your head.
The CI/CD Mirage: You get your playbook working locally and push it. The CI/CD runner, a supposedly identical environment, chokes on it. After hours of debugging, you find the cause: a patch-level difference in a system library. The environments weren't identical; they were just a convincing illusion.
Onboarding Gridlock: A new developer joins the team. Welcome! Here's a 20-page document on how to set up your machine. It's a manual, error-prone ritual that takes days. And if that document is even slightly out of date, their first week is spent troubleshooting instead of contributing.
All these headaches stem from one simple fact: the runtime environment was separate from the automation code. Execution Environments fix this by putting them in the same box.
A New Way of Thinking: What Exactly is an Execution Environment?
At its heart, an EE is a container image with a purpose. It's not just any old container; it's a portable Ansible control node, built to a spec that Ansible tools understand.
Let's peek inside the box:
A Solid Foundation (Base OS): Every EE starts with a standard container base image, like Red Hat UBI, CentOS, or Debian. This provides the basic operating system tools and libraries.
Its Own Python: A specific version of Python is installed right inside the EE. This immediately severs the dangerous link to the host system's Python, wiping out a huge source of problems.
The Ansible Engine: A precise version of ansible-core is installed. No more surprises because a minor version update changed how a feature works.
Python Dependencies, Contained: All the Python libraries you need are installed from a requirements.txt file. They live safely inside the container, never to conflict with other projects again.
Ansible Collections, Guaranteed: The lifeblood of modern Ansible. All the collections your playbooks use are declared in a requirements.yml file and baked right in. The right modules and roles are always there.
System Tools, Made Explicit: Those "invisible" dependencies like kubectl, terraform, or vault? They're now explicitly installed via the OS package manager, a declared, version-controlled part of your runtime.
When you run a playbook with an EE, your local machine is blissfully ignorant. It doesn't need Ansible, the collections, or any of those dependencies. All it needs is a container runtime (like Docker or Podman) and a smart tool like ansible-navigator.
That tool spins up your EE container, neatly mounts your Ansible project folder inside it, and then runs ansible-playbook from within the container. Your code executes in a pristine, predictable world, and the results are streamed back to you.
The Snowball Effect of Benefits
Switching to an EE-based workflow isn't just a minor improvement; it fundamentally upgrades how you do automation.
Finally, True Reproducibility: This is the holy grail. Once you build and tag an EE image, it's set in stone. The playbook run you get from v1.2.0 of your EE will be identical today, tomorrow, in your CI pipeline, and on your coworker's machine.
Go-Anywhere Portability: The entire runtime is a single container image. You can store it in any container registry and run it on any machine with a container engine—Linux, macOS, or Windows.
Airtight Dependency Isolation: The EE for your network gear can have its own special libraries without ever conflicting with the EE for your cloud infrastructure. Each project gets its own perfect, isolated toolbox.
Onboarding in Minutes, Not Days: That 20-page setup document? It becomes a single command: podman pull registry.example.com/automation/base-ee:latest. Your new teammate is ready to go.
Dev, Test, and Prod in Perfect Harmony: The biggest source of friction in any delivery pipeline is environment drift. EEs erase it. The exact same image artifact flows from your laptop to testing to production. What you test is exactly what you run.
In short, EEs apply the powerful ideas of immutable infrastructure and declarative configuration—concepts we've used for years on the systems we manage—to the automation tools themselves.
Chapter 2: ansible-builder: Your EE Construction Kit
Okay, the idea of an EE is great. But how do you actually make one? You could write a Containerfile from scratch, but you'd be wrestling with package managers, Python environments, ansible-galaxy commands, and a specific directory layout that Ansible tools expect. It's a lot of tedious work.
This is why the Ansible team created ansible-builder. It's a command-line tool that acts as your EE construction kit. You give it a simple, declarative YAML file describing what you need, and it handles the how, automatically generating the Containerfile and building the image for you.
The Blueprint: execution-environment.yml
The heart of ansible-builder is a single file: execution-environment.yml. This is the blueprint for your runtime. Let's walk through its most important parts.
---
version: 3

# The foundation of your EE house
images:
  base_image:
    name: quay.io/centos/centos:stream9

# The shopping list for all your dependencies
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt

# Optional: extra files to bake in, like a custom ansible.cfg
additional_build_files:
  - src: ansible.cfg
    dest: configs

# The "special instructions" section for custom steps
additional_build_steps:
  prepend_base:
    - RUN dnf install -y epel-release
    - RUN echo "Adding custom repositories..."
  append_final:
    - RUN git config --global user.name "Ansible EE"
    - COPY --from=quay.io/helm/helm:v3.9.0 /usr/local/bin/helm /usr/local/bin/helm
version
This tells ansible-builder which version of the blueprint format you're using. For anything new, you'll want to use 3.
base_image
This is the foundation. It becomes the FROM line in the Containerfile that ansible-builder creates (in the version 3 format it's nested under the images key, as images.base_image.name). Your choice here matters for size and security.
quay.io/ansible/ansible-runner:latest: A great starting point, as it comes with some necessary tools already installed.
quay.io/centos/centos:stream9 or registry.access.redhat.com/ubi9/ubi: Rock-solid choices for an enterprise-grade base.
python:3.9-slim: A good option if you want to start with something more minimal.
additional_build_files
Older blueprint formats had a handy ansible_config key that pointed at a local ansible.cfg to bake into the EE. Version 3 drops it: instead, you declare the file under additional_build_files and copy it into place with a build step. The effect is the same, and it's perfect for setting system-wide defaults for all playbooks running in that environment.
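A minimal sketch of the version 3 pattern using additional_build_files, assuming an ansible.cfg sits next to your blueprint (ansible-builder stages listed files under _build/ in the build context):

```yaml
additional_build_files:
  # Copies ./ansible.cfg into the build context at _build/configs/
  - src: ansible.cfg
    dest: configs

additional_build_steps:
  prepend_galaxy:
    # Put it where Ansible looks for system-wide config, so it applies
    # both to the collection install step and to playbook runs in the EE
    - COPY _build/configs/ansible.cfg /etc/ansible/ansible.cfg
```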
dependencies
This is the most important part of the blueprint—your shopping list.
galaxy: requirements.yml: Points to your list of required Ansible Collections.
File: requirements.yml
---
collections:
  - name: community.general
    version: ">=5.0.0"
  - name: kubernetes.core
    version: "2.4.0"
  - name: community.docker
python: requirements.txt: Points to your list of Python libraries.
File: requirements.txt
kubernetes>=24.2.0
openshift
pyvmomi
system: bindep.txt: Points to your list of system-level packages, like command-line tools. It uses a clever format called bindep that can handle different Linux flavors.
File: bindep.txt
# General tools
git [platform:rpm]
git [platform:dpkg]
# For Kubernetes
kubectl [platform:rpm]
The [platform:...] bit lets ansible-builder know whether to use dnf or apt to install the package, depending on your base image.
additional_build_steps
This is your escape hatch for anything that doesn't fit neatly on the shopping list. It lets you inject your own custom build commands before or after each build stage (base, galaxy, builder, and final in the version 3 format).
prepend_base: Runs commands right at the beginning, perfect for adding a custom package repository.
prepend_final: Runs commands at the start of the final stage, before your system and Python packages are installed into the finished image.
append_final: Runs commands at the very end. A great place to clean up package caches to shrink your final image size (RUN dnf clean all), download binaries with curl, install tools from source, or copy files in from other container images.
Let's Build One: Your First EE
Alright, let's roll up our sleeves and build a real EE for managing Kubernetes.
Step 1: Set Up Your Workshop
Make sure you have ansible-builder installed (pip install ansible-builder) and a container runtime like Podman or Docker.
mkdir k8s-ee
cd k8s-ee
Step 2: Write Your Shopping Lists
Create the three dependency files.
requirements.yml (Ansible Collections):
---
collections:
  - name: kubernetes.core
    version: "2.4.0"
  - name: community.general
requirements.txt (Python Libraries):
kubernetes==24.2.0
PyYAML
bindep.txt (System Tools):
# We need kubectl to talk to Kubernetes
kubectl [platform:rpm]
# Git and jq are always good to have
git [platform:rpm]
jq [platform:rpm]
Step 3: Create the Blueprint
Now, create the main execution-environment.yml file.
---
version: 3
images:
  base_image:
    name: quay.io/centos/centos:stream9
dependencies:
  galaxy: requirements.yml
  python: requirements.txt
  system: bindep.txt
additional_build_steps:
  # We need to add the EPEL repository to find the 'jq' package
  prepend_final:
    - RUN dnf install -y epel-release --nodocs
  # Let's clean up after ourselves to keep the image tidy
  append_final:
    - RUN dnf clean all
Step 4: Fire Up the Builder
It's time for the magic. Run the build command.
ansible-builder build --tag my-k8s-ee:1.0 -v 3
ansible-builder will whir to life, read your blueprint, and generate a detailed Containerfile. Then, it'll hand that off to Podman or Docker to do the actual image construction.
Step 5: Check Your Work
Once it's finished, you'll have a new container image.
$ podman images | grep my-k8s-ee
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/my-k8s-ee 1.0 f1a2b3c4d5e6 2 minutes ago 1.2 GB
Let's pop the hood and make sure everything's inside.
podman run -it --rm localhost/my-k8s-ee:1.0 /bin/bash
Inside the container's shell, you can now verify that everything you asked for is present and accounted for:
# Is Ansible here? Are the collections installed?
[runner@f1a2b3c4d5e6 /]$ ansible --version
[runner@f1a2b3c4d5e6 /]$ ansible-galaxy collection list
# Are the Python libraries ready to go?
[runner@f1a2b3c4d5e6 /]$ pip list | grep kubernetes
# Are our command-line tools available?
[runner@f1a2b3c4d5e6 /]$ which kubectl
/usr/bin/kubectl
Success! You've just built your first custom Execution Environment. It's a self-contained, portable artifact, ready to be pushed to a registry and shared with your team.
Chapter 3: Putting Your EE to Work
Building an EE is a great first step, but the real magic happens when you start using it. The modern Ansible ecosystem gives you a fantastic tool for this: ansible-navigator.
Your New Cockpit: ansible-navigator
Forget staring at the raw, endless stream of text from ansible-playbook. ansible-navigator is an interactive command-center for your automation. It was built from the ground up to use EEs and gives you a much richer experience.
With navigator, you can:
Run playbooks and watch the progress in a clean, organized interface.
After a run, dive into the results of every single task on every host.
Easily inspect variables, facts, and the exact arguments passed to a module.
Browse detailed logs and artifacts without having to dig through files.
Telling navigator What to Do
You configure navigator with a simple ansible-navigator.yml file in your project. The most important setting is telling it which EE to use.
File: ansible-navigator.yml
---
ansible-navigator:
  execution-environment:
    # Use the EE we just built!
    image: localhost/my-k8s-ee:1.0
    # Only pull the image if it's not already here
    pull:
      policy: missing
  # For a classic feel, switch the mode to stdout
  mode: stdout # The default is 'interactive', which is the cool TUI
Launching a Playbook
With that file in place, running a playbook is a breeze. Instead of the old ansible-playbook deploy.yml, you now run:
ansible-navigator run deploy.yml
Behind the scenes, navigator takes care of everything: it finds the right EE image, starts the container, mounts your code, runs the playbook inside, and streams the results back to its interface. It's the smooth, seamless experience of a containerized runtime without any of the headache of manual podman commands.
The Engine Room: ansible-runner
While you'll interact with ansible-navigator, the component doing the dirty work is ansible-runner. It's the low-level engine that both navigator and Ansible Automation Platform use to execute Ansible in a standardized way. It handles creating temporary directories, managing inventory, orchestrating the container, and capturing all the output. You can use it directly, but for daily interactive work, navigator is the way to go.
Going Manual: The podman run Method
Sometimes, for a quick test or debugging, you just want to run a command inside your EE. You can do this with a manual podman run command, but you have to get the details right.
podman run --rm -it \
-v "$(pwd)":/runner/project:Z \
--workdir /runner/project \
localhost/my-k8s-ee:1.0 \
ansible-playbook -i inventory.yml deploy.yml
The key here is the -v "$(pwd)":/runner/project:Z part, which mounts your current directory into the special /runner/project location inside the container where ansible-runner expects to find it. It works, but it's a lot to type and remember, which is why ansible-navigator is your best friend.
EEs in Your CI/CD Pipeline
This is where EEs truly shine. You can take the exact same EE image from your laptop and use it as the build environment in your CI/CD pipeline, closing the loop and guaranteeing consistency.
Here’s what a GitHub Actions workflow might look like:
File: .github/workflows/ci.yml
---
name: Ansible CI

on: [push, pull_request]

jobs:
  test-playbook:
    runs-on: ubuntu-latest
    # Tell GitHub Actions to run all steps inside our EE!
    container:
      image: quay.io/my-org/my-k8s-ee:1.0
      # You'd add credentials here for a private registry
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Lint the playbooks
        run: ansible-lint playbooks/*.yml
      - name: Check the playbook syntax
        run: ansible-playbook playbooks/deploy.yml --syntax-check
In this pipeline, every run command executes inside our EE. We can call ansible-lint and ansible-playbook directly, knowing with 100% certainty that they exist and have all their dependencies ready to go. Simple, declarative, and perfectly reproducible.
Chapter 4: Leveling Up Your EE Game
Once you've got the basics down, you can start using some more advanced tricks to make your EEs even better.
Slimming Down Your Images
Large container images can slow down your CI/CD pipelines and eat up storage. Here are a few tips for keeping your EEs lean:
Start Small: Use a minimal base image like ubi9-minimal. They have a much smaller footprint.
Clean Up Your Mess: After you install packages, always clean up the package manager's cache in the same build step. For dnf, use RUN dnf clean all.
Skip the Docs: When installing packages with dnf, add the --nodocs flag. You don't need man pages inside your automation runtime.
Be Precise: Don't install a giant library if a smaller one will do the job. Every little bit helps.
Getting Creative with Custom Builds
The additional_build_steps section is your superpower for handling tricky requirements.
Installing a Tool from the Web:
additional_build_steps:
  append_final:
    - RUN curl -L -o /usr/local/bin/kubectl https://dl.k8s.io/release/v1.25.0/bin/linux/amd64/kubectl && chmod +x /usr/local/bin/kubectl
Stealing Binaries from Other Images: A powerful pattern for keeping your image small is to copy a tool straight out of another image without inheriting all its layers.
additional_build_steps:
  append_final:
    # Just grab the 'helm' binary from the official helm image
    - COPY --from=quay.io/helm/helm:v3.9.0 /usr/local/bin/helm /usr/local/bin/helm
Versioning and Sharing Your EEs
Treat your EE images like the critical software artifacts they are.
Use Semantic Versioning: Tag your images with versions like 1.0.0, 1.1.0, etc. Use patch releases for small fixes, minor releases for new features, and major releases for breaking changes.
Use a Private Registry: For any serious team, host your EEs in a private container registry like Quay, Artifactory, or Harbor. This keeps them secure and reliable.
Pin Your Versions: In production, always point to a specific, immutable version tag (e.g., my-ee:1.2.3). This prevents any surprises. You can use a floating tag like latest for development, but production deserves stability.
Conclusion: A New Standard for Automation
Execution Environments aren't just another tool; they represent a necessary evolution for Ansible. They are the definitive answer to the old, frustrating problems of inconsistency and "dependency hell." By packaging your entire runtime into a single, portable box, EEs deliver a level of reliability and reproducibility we could only dream of before.
We've walked through the whole journey, from the ghost stories of the old way to the hands-on process of building and using your own modern, containerized runtimes. You now have the blueprint to build a more robust, scalable, and collaborative automation practice.
Adopting EEs is a shift in mindset. It's about treating your automation tooling with the same discipline you apply to your application code. It's about building a solid foundation for the future. Whether you're a solo admin or part of a huge platform team, embracing the containerized world of Ansible will make your automation stronger, your team faster, and your deployments more predictable. Your journey starts with a single ansible-builder build command. Go build something amazing.