Let’s dive into some more practical, real-world Ansible examples that a System Administrator or DevOps Engineer would frequently encounter. These examples will build upon the concepts we’ve covered (roles, variables, loops, conditionals, handlers, etc.) and demonstrate how Ansible streamlines common IT operations.

For these examples, let’s assume a project structure like this:

ansible_projects/
├── inventory.ini
├── group_vars/
│   └── all.yml
├── host_vars/
│   └── webserver01.example.com.yml
├── playbooks/
│   ├── deploy_lamp_stack.yml
│   ├── manage_users.yml
│   └── patch_servers.yml
└── roles/
    ├── apache/
    │   ├── tasks/
    │   │   └── main.yml
    │   ├── handlers/
    │   │   └── main.yml
    │   └── templates/
    │       └── vhost.conf.j2
    ├── mysql/
    │   ├── tasks/
    │   │   └── main.yml
    │   ├── handlers/
    │   │   └── main.yml
    │   └── vars/
    │       └── main.yml        # Sensitive data, to be encrypted with Ansible Vault
    └── common/
        ├── tasks/
        │   └── main.yml
        ├── handlers/
        │   └── main.yml
        ├── templates/
        │   └── ntp.conf.j2
        └── defaults/
            └── main.yml

Ansible Practical Examples: Real-World Scenarios for SysAdmins/Engineers

Scenario 1: Deploying a LAMP Stack (Linux, Apache, MySQL, PHP)

This is a classic use case for configuration management. We’ll break it down into modular roles.

inventory.ini

[webservers]
webserver01.example.com

[databases]
dbserver01.example.com

[lamp_stack:children]
webservers
databases

[all:vars]
ansible_user=devops_admin
ansible_ssh_private_key_file=~/.ssh/id_rsa

group_vars/all.yml


# group_vars/all.yml
# Common variables for all hosts
timezone: "America/Los_Angeles"
ntp_server: "time.nist.gov"

host_vars/webserver01.example.com.yml

# host_vars/webserver01.example.com.yml
apache_port: 80
apache_doc_root: /var/www/html/mysite

roles/common/tasks/main.yml (Common System Setup)

This role handles basic OS-level configurations that are common to all servers.

# roles/common/tasks/main.yml
---
- name: Ensure correct timezone is set
  community.general.timezone:
    name: "{{ timezone }}"

- name: Install NTP daemon
  ansible.builtin.package:
    name: ntp # provides ntpd; use chrony on newer distributions
    state: present
  when: ansible_os_family == "RedHat" or ansible_os_family == "Debian"

- name: Configure NTP server (RedHat)
  ansible.builtin.template:
    src: ntp.conf.j2
    dest: /etc/ntp.conf
    owner: root
    group: root
    mode: '0644'
  when: ansible_os_family == "RedHat"
  notify: Restart NTP service

- name: Install common utilities
  ansible.builtin.package:
    name: "{{ item }}"
    state: present
  loop:
    - wget
    - curl
    - telnet
    - unzip

roles/common/handlers/main.yml

# roles/common/handlers/main.yml
---
- name: Restart NTP service
  ansible.builtin.service:
    name: ntpd
    state: restarted
  when: ansible_os_family == "RedHat" # Or chronyd

roles/common/templates/ntp.conf.j2 (Example for RedHat)

# roles/common/templates/ntp.conf.j2
# NTP configuration for RedHat
driftfile /var/lib/ntp/drift
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1
server {{ ntp_server }} iburst
includefile /etc/ntp/crypto/pw
keys /etc/ntp/keys
disable monitor

roles/apache/tasks/main.yml (Web Server Setup)

# roles/apache/tasks/main.yml
---
- name: Ensure Apache package is present (RedHat)
  ansible.builtin.yum:
    name: httpd
    state: present
  when: ansible_os_family == "RedHat"

- name: Ensure Apache package is present (Debian)
  ansible.builtin.apt:
    name: apache2
    state: present
  when: ansible_os_family == "Debian"

- name: Create Apache document root directory
  ansible.builtin.file:
    path: "{{ apache_doc_root }}"
    state: directory
    owner: apache # RedHat
    group: apache # RedHat
    mode: '0755'
  when: ansible_os_family == "RedHat"

- name: Create Apache document root directory
  ansible.builtin.file:
    path: "{{ apache_doc_root }}"
    state: directory
    owner: www-data # Debian
    group: www-data # Debian
    mode: '0755'
  when: ansible_os_family == "Debian"

- name: Deploy Apache Virtual Host configuration
  ansible.builtin.template:
    src: vhost.conf.j2
    dest: "/etc/httpd/conf.d/{{ ansible_fqdn }}.conf" # RedHat path
    owner: root
    group: root
    mode: '0644'
  when: ansible_os_family == "RedHat"
  notify: Restart Apache

- name: Deploy Apache Virtual Host configuration (Debian)
  ansible.builtin.template:
    src: vhost.conf.j2
    dest: "/etc/apache2/sites-available/{{ ansible_fqdn }}.conf" # Debian path
    owner: root
    group: root
    mode: '0644'
  when: ansible_os_family == "Debian"
  notify: Restart Apache

- name: Enable Apache site (Debian)
  ansible.builtin.command: "a2ensite {{ ansible_fqdn }}.conf"
  args:
    creates: "/etc/apache2/sites-enabled/{{ ansible_fqdn }}.conf" # Makes the command idempotent
  when: ansible_os_family == "Debian"
  notify: Restart Apache

- name: Deploy simple index.html
  ansible.builtin.copy:
    content: "<h1>Hello from Apache on {{ ansible_fqdn }}!</h1>"
    dest: "{{ apache_doc_root }}/index.html"
    owner: apache # RedHat
    group: apache # RedHat
    mode: '0644'
  when: ansible_os_family == "RedHat"

- name: Deploy simple index.html (Debian)
  ansible.builtin.copy:
    content: "<h1>Hello from Apache on {{ ansible_fqdn }}!</h1>"
    dest: "{{ apache_doc_root }}/index.html"
    owner: www-data # Debian
    group: www-data # Debian
    mode: '0644'
  when: ansible_os_family == "Debian"


- name: Ensure Apache service is started and enabled
  ansible.builtin.service:
    name: httpd # RedHat
    state: started
    enabled: true
  when: ansible_os_family == "RedHat"

- name: Ensure Apache service is started and enabled (Debian)
  ansible.builtin.service:
    name: apache2 # Debian
    state: started
    enabled: true
  when: ansible_os_family == "Debian"

roles/apache/handlers/main.yml

# roles/apache/handlers/main.yml
---
- name: Restart Apache
  ansible.builtin.service:
    name: httpd # RedHat
    state: restarted
  when: ansible_os_family == "RedHat"

- name: Restart Apache (Debian)
  ansible.builtin.service:
    name: apache2 # Debian
    state: restarted
  when: ansible_os_family == "Debian"

roles/apache/templates/vhost.conf.j2

# roles/apache/templates/vhost.conf.j2
<VirtualHost *:{{ apache_port }}>
    ServerAdmin webmaster@localhost
    DocumentRoot "{{ apache_doc_root }}"
    ServerName {{ ansible_fqdn }}
    ErrorLog {{ '${APACHE_LOG_DIR}' if ansible_os_family == 'Debian' else 'logs' }}/error.log
    CustomLog {{ '${APACHE_LOG_DIR}' if ansible_os_family == 'Debian' else 'logs' }}/access.log combined

    <Directory "{{ apache_doc_root }}">
        Options Indexes FollowSymLinks
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

roles/mysql/tasks/main.yml (Database Server Setup)

This will use a vault for credentials.

# roles/mysql/tasks/main.yml
---
- name: Ensure MySQL/MariaDB server is installed (RedHat)
  ansible.builtin.yum:
    name: mariadb-server
    state: present
  when: ansible_os_family == "RedHat"

- name: Ensure MySQL/MariaDB server is installed (Debian)
  ansible.builtin.apt:
    name: mariadb-server
    state: present
  when: ansible_os_family == "Debian"

- name: Start and enable MariaDB service
  ansible.builtin.service:
    name: mariadb
    state: started
    enabled: true

- name: Set root password for MariaDB
  community.mysql.mysql_user:
    name: root
    host: localhost
    password: "{{ mysql_root_password }}"
    login_user: root # Use current root (no password)
    login_password: "" # No password on first run
    check_implicit_admin: true # Allow managing without existing root pass
  no_log: true # Prevent password from showing in logs

- name: Create application database
  community.mysql.mysql_db:
    name: "{{ db_name }}"
    state: present
    login_user: root
    login_password: "{{ mysql_root_password }}"
  no_log: true

- name: Create application database user
  community.mysql.mysql_user:
    name: "{{ db_user }}"
    password: "{{ db_password }}"
    host: "{{ db_host }}" # e.g., 'localhost' or '%' for any host
    priv: "{{ db_name }}.*:ALL"
    state: present
    login_user: root
    login_password: "{{ mysql_root_password }}"
  no_log: true

Note: For the MySQL role, you’d define mysql_root_password, db_name, db_user, db_password, and db_host in a vaulted file (e.g., roles/mysql/vars/main.yml) and encrypt it with ansible-vault encrypt roles/mysql/vars/main.yml.
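
For reference, the unencrypted file could look like the sketch below (all values are placeholders to replace with your own secrets), and a single ansible-vault command then encrypts it in place:

# roles/mysql/vars/main.yml (placeholder values -- encrypt before committing)
mysql_root_password: "ChangeMe_Root123!"
db_name: myapp_db
db_user: myapp_user
db_password: "ChangeMe_App123!"
db_host: localhost

ansible-vault encrypt roles/mysql/vars/main.yml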

playbooks/deploy_lamp_stack.yml

---
- name: Deploy LAMP stack on webservers and database servers
  hosts: lamp_stack
  become: true
  roles:
    - role: common # General system setup
      tags: [common]
    - role: apache # Apache on webservers group
      when: "'webservers' in group_names"
      tags: [apache]
    - role: mysql # MySQL on database servers group
      when: "'databases' in group_names"
      tags: [mysql]

How to run:

# Full LAMP stack deployment (will prompt for the vault password)
ansible-playbook -i inventory.ini playbooks/deploy_lamp_stack.yml --ask-vault-pass

# Only run the common and apache tags against the webservers group
ansible-playbook -i inventory.ini playbooks/deploy_lamp_stack.yml --limit webservers --tags common,apache --ask-vault-pass

Scenario 2: User and SSH Key Management

Adding and removing users and managing their SSH keys across multiple servers.

playbooks/manage_users.yml

---
- name: Manage system users and SSH keys
  hosts: all
  become: true

  vars:
    # Define users to manage. 'state: absent' to remove.
    managed_users:
      - name: jdoe
        uid: 1004
        ssh_key_url: "https://github.com/jdoe.keys" # Fetch key from GitHub
        sudo_access: true
        state: present
      - name: asmith
        uid: 1005
        ssh_key_content: "ssh-rsa AAAAB3NzaC... asmith@workstation" # Direct key content
        state: present
      - name: olduser # User to remove
        state: absent

  tasks:
    - name: Add/Update users
      ansible.builtin.user:
        name: "{{ item.name }}"
        uid: "{{ item.uid | default(omit) }}" # Omit UID if not specified
        state: "{{ item.state | default('present') }}"
        shell: /bin/bash
        groups: "{{ 'sudo' if item.sudo_access else 'users' }}" # Add to sudo group if specified
        append: true
        system: false # Not a system user
      loop: "{{ managed_users }}"
      loop_control:
        label: "User {{ item.name }}"

    - name: Ensure SSH directory exists for users
      ansible.builtin.file:
        path: "/home/{{ item.name }}/.ssh"
        state: directory
        owner: "{{ item.name }}"
        group: "{{ item.name }}"
        mode: '0700'
      loop: "{{ managed_users }}"
      when: item.state == 'present'

    - name: Deploy SSH keys for users from URL
      ansible.builtin.get_url:
        url: "{{ item.ssh_key_url }}"
        dest: "/home/{{ item.name }}/.ssh/authorized_keys"
        owner: "{{ item.name }}"
        group: "{{ item.name }}"
        mode: '0600'
      loop: "{{ managed_users }}"
      when: item.state == 'present' and item.ssh_key_url is defined

    - name: Deploy SSH keys for users from content
      ansible.builtin.copy:
        content: "{{ item.ssh_key_content }}"
        dest: "/home/{{ item.name }}/.ssh/authorized_keys"
        owner: "{{ item.name }}"
        group: "{{ item.name }}"
        mode: '0600'
      loop: "{{ managed_users }}"
      when: item.state == 'present' and item.ssh_key_content is defined

How to run:


ansible-playbook -i inventory.ini playbooks/manage_users.yml

Scenario 3: Server Patching and Reboot Management

A common and critical admin task, handled with care using Ansible.

playbooks/patch_servers.yml

---
- name: Patch Linux servers and manage reboots
  hosts: all # Or specific groups like 'webservers', 'databases'
  become: true
  serial: 1 # Patch one server at a time (crucial for production)
  # Or use a percentage, e.g., serial: "30%"

  vars:
    reboot_required_file: "/var/run/reboot-required" # Ubuntu/Debian
    reboot_system: false # Set to true to force reboot if needed
    # On RedHat/CentOS, the 'needs-restarting' utility (from yum-utils/dnf-utils) is used instead

  tasks:
    - name: Update all packages (Debian/Ubuntu)
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist # or 'full'
        autoclean: true
        autoremove: true
      when: ansible_os_family == "Debian"

    - name: Update all packages (RedHat/CentOS)
      ansible.builtin.yum:
        name: "*" # All packages
        state: latest
        update_cache: true
      when: ansible_os_family == "RedHat"

    - name: Check if reboot is required (Debian/Ubuntu)
      ansible.builtin.stat:
        path: "{{ reboot_required_file }}"
      register: reboot_status
      when: ansible_os_family == "Debian"

    - name: Check if reboot is required (RedHat/CentOS - simplified)
      ansible.builtin.command: needs-restarting -r # exits 0 if no reboot is needed, 1 if one is needed
      register: reboot_status_rh
      changed_when: reboot_status_rh.rc == 1
      failed_when: false # Don't fail if needs-restarting isn't installed
      when: ansible_os_family == "RedHat"

    - name: Reboot server if required or forced
      ansible.builtin.reboot:
        reboot_timeout: 600 # Wait up to 10 minutes for reboot
      when:
        - reboot_system | bool or (ansible_os_family == "Debian" and reboot_status.stat.exists) or (ansible_os_family == "RedHat" and reboot_status_rh.rc == 1)
      # The reboot module waits for the host to come back before the play continues
      # You might want to add a pre-reboot step, e.g., stop services gracefully,
      # and a post-reboot check, e.g., ensure services are up

    - name: Wait for system to come back after reboot (if it happened)
      ansible.builtin.wait_for_connection:
        timeout: 300
        delay: 10
      when: reboot_system | bool or (ansible_os_family == "Debian" and reboot_status.stat.exists) or (ansible_os_family == "RedHat" and reboot_status_rh.rc == 1)

    - name: Ensure critical services are running after patch/reboot
      ansible.builtin.service:
        name: "{{ item }}"
        state: started
        enabled: true
      loop: >-
        {{ [('sshd' if ansible_os_family == 'RedHat' else 'ssh')]
        + ([('httpd' if ansible_os_family == 'RedHat' else 'apache2')] if 'webservers' in group_names else [])
        + (['mariadb'] if 'databases' in group_names else []) }}
      when: "'webservers' in group_names or 'databases' in group_names" # Only check on relevant servers

How to run:

# Patch all servers, rebooting only if required
ansible-playbook -i inventory.ini playbooks/patch_servers.yml

# Patch all servers and force a reboot on all of them
ansible-playbook -i inventory.ini playbooks/patch_servers.yml -e "reboot_system=true"

# Patch only the webservers, one at a time, rebooting if needed
ansible-playbook -i inventory.ini playbooks/patch_servers.yml --limit webservers

Scenario 4: Deploying a Custom Application with CI/CD Integration

Imagine you have a simple Python Flask application.

Application structure on the control node (simplified):

my_flask_app/
├── app.py
├── requirements.txt
└── .env
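
The systemd unit shown later launches the app with Gunicorn from the virtualenv, so requirements.txt must list gunicorn alongside Flask. A minimal sketch (entries are illustrative):

# my_flask_app/requirements.txt (illustrative)
flask
gunicorn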

group_vars/webservers.yml

# group_vars/webservers.yml
app_name: myflaskapp
app_user: flaskuser
app_group: flaskuser
app_repo_url: https://github.com/yourorg/my_flask_app.git # Or local path / tarball
app_dest_path: /opt/{{ app_name }}

playbooks/deploy_flask_app.yml

---
- name: Deploy Python Flask Application
  hosts: webservers
  become: true
  vars:
    # Environment variables for the app
    app_env_vars:
      FLASK_APP: app.py
      FLASK_ENV: production
      DATABASE_URL: "sqlite:///{{ app_dest_path }}/app.db" # Example

  tasks:
    - name: Ensure Python and pip are installed
      ansible.builtin.package:
        name: "{{ item }}"
        state: present
      loop:
        - python3
        - python3-pip

    - name: Create app user and group
      ansible.builtin.user:
        name: "{{ app_user }}"
        state: present
        comment: "Flask Application User"
        shell: /bin/bash

    - name: Ensure application directory exists
      ansible.builtin.file:
        path: "{{ app_dest_path }}"
        state: directory
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0755'

    - name: Clone or update application repository
      ansible.builtin.git:
        repo: "{{ app_repo_url }}"
        dest: "{{ app_dest_path }}"
        version: main # Or a specific tag/branch
        force: true # Discard any local modifications before updating
      become_user: "{{ app_user }}" # Run git as app_user
      notify: Restart Flask App

    - name: Install Python dependencies
      ansible.builtin.pip:
        requirements: "{{ app_dest_path }}/requirements.txt"
        virtualenv: "{{ app_dest_path }}/venv"
        virtualenv_command: python3 -m venv
      become_user: "{{ app_user }}" # Install as app_user
      notify: Restart Flask App

    - name: Create .env file for environment variables
      ansible.builtin.template:
        src: flask_env.j2
        dest: "{{ app_dest_path }}/.env"
        owner: "{{ app_user }}"
        group: "{{ app_group }}"
        mode: '0640'
      notify: Restart Flask App

    - name: Configure Systemd service for Flask app
      ansible.builtin.template:
        src: flask_app.service.j2
        dest: "/etc/systemd/system/{{ app_name }}.service"
        owner: root
        group: root
        mode: '0644'
      notify: Reload Systemd & Restart Flask App

    - name: Ensure Flask app service is started and enabled
      ansible.builtin.systemd:
        name: "{{ app_name }}"
        state: started
        enabled: true

  handlers:
    - name: Restart Flask App
      ansible.builtin.systemd:
        name: "{{ app_name }}"
        state: restarted

    - name: Reload Systemd & Restart Flask App
      ansible.builtin.systemd:
        name: "{{ app_name }}"
        daemon_reload: true
        state: restarted

playbooks/templates/flask_env.j2

# playbooks/templates/flask_env.j2
{% for key, value in app_env_vars.items() %}
{{ key }}={{ value }}
{% endfor %}

playbooks/templates/flask_app.service.j2

# playbooks/templates/flask_app.service.j2
[Unit]
Description={{ app_name }} Flask Application
After=network.target

[Service]
User={{ app_user }}
Group={{ app_group }}
WorkingDirectory={{ app_dest_path }}
EnvironmentFile={{ app_dest_path }}/.env
# Serve the Flask app with Gunicorn (systemd does not allow trailing comments on option lines)
ExecStart={{ app_dest_path }}/venv/bin/gunicorn -w 4 -b 0.0.0.0:5000 app:app
Restart=always

[Install]
WantedBy=multi-user.target

How to run (part of a CI/CD pipeline):

# This playbook would be triggered by a CI/CD tool (Jenkins, GitLab CI, etc.)
# after a successful build or a commit to the main branch.
ansible-playbook -i inventory.ini playbooks/deploy_flask_app.yml --limit webserver01.example.com
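
As an illustration, a GitLab CI job that runs this playbook from the repository might look roughly like the sketch below. The job name, stage, image, and branch rule are assumptions, and SSH key / vault handling is omitted for brevity:

# .gitlab-ci.yml (illustrative sketch)
deploy_flask_app:
  stage: deploy
  image: python:3.12              # assumed base image
  before_script:
    - pip install ansible         # make ansible-playbook available in the job
  script:
    - ansible-playbook -i inventory.ini playbooks/deploy_flask_app.yml --limit webserver01.example.com
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'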

These examples illustrate how Ansible handles common SysAdmin and DevOps tasks with modularity, reusability, and idempotence. They provide a strong foundation for building more complex automation workflows. Remember to always test your playbooks thoroughly in a development or staging environment before deploying to production!
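
One low-risk way to do that is Ansible's built-in check mode: a dry run against a separate staging inventory (staging_inventory.ini below is a hypothetical file) reports what would change without touching the hosts:

ansible-playbook -i staging_inventory.ini playbooks/patch_servers.yml --check --diff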