Livio Ribeiro

Using LXD and Ansible to simulate infrastructure

When dealing with software like Kubernetes, OpenShift and Rancher, it may be challenging to test an application locally. It is true that we have tools like minikube and minishift, but if we are working on the infrastructure itself, simulating a cluster with virtual machines can quickly consume all of our available RAM.

LXD can help solve the memory problem by using containers as if they were full-blown virtual machines! Unlike Docker, which uses containers to run a single process until it finishes, LXD uses containers to boot an operating system image and use it as a complete server.

To install LXD you can follow the instructions on the official website, but after installing we have to initialize it:

# The default values are good enough
$ sudo lxd init

If you followed the getting started guide, you may have created some containers using the CLI, but now comes the interesting part: we can use Ansible to automate the creation and provisioning of our servers!
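
For reference, creating and entering a container by hand looks something like this (the container name below is just an example):

# launch a container from the Ubuntu 18.04 image and open a shell in it
lxc launch ubuntu:18.04 test
lxc exec test -- bash

# remove the container when you are done
lxc delete --force test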

For our LXD/Ansible project, we are going to create a Nomad cluster with Consul and Traefik.

I chose Nomad because of its simplicity and versatility: it can run services as Docker containers, like Kubernetes, but also Java applications and any executable available on the host, isolated using the operating system resources (e.g. cgroups, namespaces and chroot on Linux). You can see in the documentation what Nomad is able to run.
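
To illustrate, besides the Docker jobs we will write later, a job using Nomad's exec driver could look roughly like the sketch below (the job and task names are made up for this example):

job "date-loop" {
  datacenters = ["dc1"]

  group "example" {
    task "loop" {
      # run a plain executable, isolated with the OS primitives
      driver = "exec"

      config {
        command = "/bin/sh"
        args    = ["-c", "while true; do date; sleep 30; done"]
      }
    }
  }
}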

Consul is used by Nomad for service discovery, key-value storage and to bootstrap the cluster. Nomad can also run without Consul.

Traefik will proxy requests to the services deployed in the cluster. It will use Consul service catalog as the configuration backend, so the routes to services will be automatically configured.

The completed tutorial can be found in this repository.

Index

  • step 0: Planning
  • step 1: Consul
  • step 2: Nomad
  • step 3: Traefik
  • step 4: Deploying Services
  • Conclusion

step 0: Planning

Our cluster will need the following:

  • 3 Consul nodes operating in server mode
    • consul1: 10.99.0.101
    • consul2: 10.99.0.102
    • consul3: 10.99.0.103
  • 3 Nomad nodes operating in server mode
    • nomad-server1: 10.99.0.111
    • nomad-server2: 10.99.0.112
    • nomad-server3: 10.99.0.113
  • 3 Nomad nodes operating in client mode
    • nomad-client1: 10.99.0.121
    • nomad-client2: 10.99.0.122
    • nomad-client3: 10.99.0.123
    • The Nomad clients will have Docker and OpenJDK installed
  • 1 Traefik node
    • proxy: 10.99.0.100

All non-Consul nodes will run Consul in client mode.

As for Ansible, we will create a project with the following structure:

~/projects/nomad-lxd-ansible
├── cache/
├── inventory/
│   └── hosts
├── roles/
├── ansible.cfg
└── playbook.yml

This structure can be created with the following commands:

mkdir -p ~/projects/nomad-lxd-ansible
cd ~/projects/nomad-lxd-ansible
mkdir cache inventory roles
touch ansible.cfg playbook.yml inventory/hosts

The cache directory will hold the Consul, Nomad and Traefik binaries, downloaded only once on the host. This way, we avoid downloading them again in every container.

In ansible.cfg we will tell Ansible to use our inventory:

# ansible.cfg
[defaults]
inventory = inventory

Add our cluster nodes to the inventory:

# inventory/hosts
proxy           ip_address=10.99.0.100

[consul_servers]
consul1         ip_address=10.99.0.101
consul2         ip_address=10.99.0.102
consul3         ip_address=10.99.0.103

[nomad_servers]
nomad-server1   ip_address=10.99.0.111
nomad-server2   ip_address=10.99.0.112
nomad-server3   ip_address=10.99.0.113

[nomad_clients]
nomad-client1   ip_address=10.99.0.121
nomad-client2   ip_address=10.99.0.122
nomad-client3   ip_address=10.99.0.123

[all:vars]
ansible_connection=lxd
ansible_python_interpreter=/usr/bin/python3

The ip_address variable will be referenced in the playbook.

To create the containers, add the following to playbook.yml:

# playbook.yml
---
- hosts: localhost
  # run this task in the host
  connection: local
  tasks:
    - name: create containers
      # get all host names from inventory
      loop: "{{ groups['all'] }}"
      # use lxd_container module from ansible to create containers
      lxd_container:
        # container name is the hostname
        name: "{{ item }}"
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          alias: ubuntu/bionic/amd64
        config:
          # nomad clients need some privileges to be able to run docker containers
          security.nesting: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
          security.privileged: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
        devices:
          # configure network interface
          eth0:
            type: nic
            nictype: bridged
            parent: lxdbr0
            # get ip address from inventory
            ipv4.address: "{{ hostvars[item].ip_address }}"
        # # uncomment if you installed lxd using snap
        # url: unix:/var/snap/lxd/common/lxd/unix.socket

Now run ansible-playbook playbook.yml to create our nodes with LXD.
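
If the playbook finishes without errors, the containers should be visible from the host, for example:

# list the containers created by the playbook, along with their assigned IPs
lxc list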

step 1: Consul

Let's tell Ansible to download and set up Consul.

Edit the playbook to be as follows:

# playbook.yml
---
- hosts: localhost
  # run this task in the host
  connection: local
  # set urls as variables
  vars:
    consul_version: "1.4.0"
    consul_url: "https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip"
  tasks:
    - name: create containers
      # get all host names from inventory
      loop: "{{ groups['all'] }}"
      # use lxd_container module from ansible to create containers
      lxd_container:
        # container name is the hostname
        name: "{{ item }}"
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          alias: ubuntu/bionic/amd64
        config:
          # nomad clients need some privileges to be able to run docker containers
          security.nesting: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
          security.privileged: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
        devices:
          # configure network interface
          eth0:
            type: nic
            nictype: bridged
            parent: lxdbr0
            # get ip address from inventory
            ipv4.address: "{{ hostvars[item].ip_address }}"
        # # uncomment if you installed lxd using snap
        # url: unix:/var/snap/lxd/common/lxd/unix.socket

    # ensure cache directory exists
    - name: create cache directory
      file:
        path: cache
        state: directory

    - name: fetch applications
      unarchive:
        src: "{{ item.url }}"
        dest: cache
        creates: "cache/{{ item.file }}"
        remote_src: yes
      loop:
        - url: "{{ consul_url }}"
          file: consul

- hosts: consul_servers
  roles:
    - consul_server

The hosts belonging to the consul_servers group will have the role consul_server. We will also create another role called consul_service that copies the Consul binary to the node and sets up the systemd service. We split the roles this way because we will later need a consul_client role that also requires the Consul binary and service, but with a different configuration.

Roles are located under the roles directory, and for the three roles for Consul we will have the following structure:

roles/
├── consul_client
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── consul.hcl.j2
├── consul_server
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── consul.hcl.j2
└── consul_service
    ├── files
    │   └── consul.service
    └── tasks
        └── main.yml

You can create the structure with the following:

mkdir -p \
  roles/consul_service/tasks \
  roles/consul_service/files \
  roles/consul_server/tasks \
  roles/consul_server/templates \
  roles/consul_client/tasks \
  roles/consul_client/templates \
&& touch \
  roles/consul_service/tasks/main.yml \
  roles/consul_service/files/consul.service \
  roles/consul_server/tasks/main.yml \
  roles/consul_server/templates/consul.hcl.j2 \
  roles/consul_client/tasks/main.yml \
  roles/consul_client/templates/consul.hcl.j2

Role: consul_service

Edit roles/consul_service/tasks/main.yml:

# roles/consul_service/tasks/main.yml
---
- name: install consul
  copy:
    src: cache/consul
    dest: /usr/local/bin/
    mode: 0755

- name: create consul service
  copy:
    src: consul.service
    dest: /etc/systemd/system/

- name: create consul directories
  file:
    path: "{{ item }}"
    state: directory
  loop:
    - /etc/consul.d
    - /var/consul

Edit roles/consul_service/files/consul.service:

# roles/consul_service/files/consul.service
[Unit]
Description="HashiCorp Consul - A service mesh solution"
Documentation=https://www.consul.io/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/consul.d/consul.hcl

[Service]
ExecStart=/usr/local/bin/consul agent -config-dir=/etc/consul.d
ExecReload=/usr/local/bin/consul reload
KillMode=process
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Role: consul_server

Edit roles/consul_server/tasks/main.yml:

# roles/consul_server/tasks/main.yml
---
- import_role:
    name: consul_service

- name: copy consul config
  template:
    src: consul.hcl.j2
    dest: /etc/consul.d/consul.hcl

- name: start consul
  service:
    name: consul
    state: restarted
    enabled: yes

Edit roles/consul_server/templates/consul.hcl.j2:

# roles/consul_server/templates/consul.hcl.j2
data_dir = "/var/consul"

server = true
advertise_addr = "{{ ansible_eth0.ipv4.address }}"

client_addr = "127.0.0.1 {{ ansible_eth0.ipv4.address }}"
enable_script_checks = true

{% if ansible_hostname == 'consul1' -%}
ui = true
bootstrap_expect = 3
{% else -%}
retry_join = [ "{{ hostvars.consul1.ansible_hostname }}" ]
{% endif %}

Role: consul_client

Edit roles/consul_client/tasks/main.yml:

# roles/consul_client/tasks/main.yml
---
- import_role:
    name: consul_service

- name: copy consul config
  template:
    src: consul.hcl.j2
    dest: /etc/consul.d/consul.hcl

- name: start consul
  service:
    name: consul
    state: restarted
    enabled: yes

Edit roles/consul_client/templates/consul.hcl.j2:

# roles/consul_client/templates/consul.hcl.j2
data_dir = "/var/consul"
server = false

advertise_addr = "{{ ansible_eth0.ipv4.address }}"
client_addr = "127.0.0.1 {{ ansible_eth0.ipv4.address }}"
enable_script_checks = true
retry_join = [ "{{ hostvars.consul1.ansible_hostname }}" ]
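
At this point you can already run ansible-playbook playbook.yml again and, once it finishes, check that the three Consul servers formed a cluster, for example:

# the members list should show consul1, consul2 and consul3 as servers
lxc exec consul1 -- consul members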

step 2: Nomad

The Nomad setup will be very similar to the Consul setup.

Edit the playbook to include Nomad:

# playbook.yml
---
- hosts: localhost
  # run this task in the host
  connection: local
  # set urls as variables
  vars:
    consul_version: "1.4.0"
    nomad_version: "0.8.6"
    consul_url: "https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip"
    nomad_url: "https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_linux_amd64.zip"
  tasks:
    - name: create containers
      # get all host names from inventory
      loop: "{{ groups['all'] }}"
      # use lxd_container module from ansible to create containers
      lxd_container:
        # container name is the hostname
        name: "{{ item }}"
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          alias: ubuntu/bionic/amd64
        config:
          # nomad clients need some privileges to be able to run docker containers
          security.nesting: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
          security.privileged: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
        devices:
          # configure network interface
          eth0:
            type: nic
            nictype: bridged
            parent: lxdbr0
            # get ip address from inventory
            ipv4.address: "{{ hostvars[item].ip_address }}"
        # # uncomment if you installed lxd using snap
        # url: unix:/var/snap/lxd/common/lxd/unix.socket

    # ensure cache directory exists
    - name: create cache directory
      file: { path: cache, state: directory }

    - name: fetch applications
      unarchive:
        src: "{{ item.url }}"
        dest: cache
        creates: "cache/{{ item.file }}"
        remote_src: yes
      loop:
        - url: "{{ consul_url }}"
          file: consul
        - url: "{{ nomad_url }}"
          file: nomad

- hosts: consul_servers
  roles:
    - consul_server

- hosts: nomad_servers
  roles:
    - consul_client
    - nomad_server

- hosts: nomad_clients
  roles:
    - consul_client
    - nomad_client

Similarly to Consul, we will have the roles nomad_service, nomad_server and nomad_client. But now we have two groups, nomad_servers and nomad_clients, each with its respective role and both with the consul_client role.

We will also have a similar structure for the nomad roles:

roles/
├── nomad_client
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── nomad.hcl.j2
├── nomad_server
│   ├── tasks
│   │   └── main.yml
│   └── templates
│       └── nomad.hcl.j2
└── nomad_service
    ├── files
    │   └── nomad.service
    └── tasks
        └── main.yml

We can create this structure with the following commands:

mkdir -p \
  roles/nomad_service/tasks \
  roles/nomad_service/files \
  roles/nomad_server/tasks \
  roles/nomad_server/templates \
  roles/nomad_client/tasks \
  roles/nomad_client/templates \
&& touch \
  roles/nomad_service/tasks/main.yml \
  roles/nomad_service/files/nomad.service \
  roles/nomad_server/tasks/main.yml \
  roles/nomad_server/templates/nomad.hcl.j2 \
  roles/nomad_client/tasks/main.yml \
  roles/nomad_client/templates/nomad.hcl.j2

Role: nomad_service

Edit roles/nomad_service/tasks/main.yml:

# roles/nomad_service/tasks/main.yml
---
- name: install nomad
  copy:
    src: cache/nomad
    dest: /usr/local/bin/
    mode: 0755

- name: create nomad service
  copy:
    src: nomad.service
    dest: /etc/systemd/system/

- name: create nomad directories
  file:
    path: "{{ item }}"
    state: directory
  loop:
    - /etc/nomad.d
    - /var/nomad

Edit roles/nomad_service/files/nomad.service:

# roles/nomad_service/files/nomad.service
[Unit]
Description="HashiCorp Nomad - Application scheduler"
Documentation=https://www.nomadproject.io/
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/nomad.d/nomad.hcl

[Service]
Restart=on-failure
ExecStart=/usr/local/bin/nomad agent -config=/etc/nomad.d/nomad.hcl
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

Role: nomad_server

Edit roles/nomad_server/tasks/main.yml:

# roles/nomad_server/tasks/main.yml
---
- import_role:
    name: nomad_service

- name: copy nomad config
  template:
    src: nomad.hcl.j2
    dest: /etc/nomad.d/nomad.hcl

- name: start nomad
  service:
    name: nomad
    state: restarted
    enabled: yes

Edit roles/nomad_server/templates/nomad.hcl.j2:

# roles/nomad_server/templates/nomad.hcl.j2
data_dir  = "/var/nomad"

advertise {
  http = "{{ ansible_eth0.ipv4.address }}"
  rpc  = "{{ ansible_eth0.ipv4.address }}"
  serf = "{{ ansible_eth0.ipv4.address }}"
}

server {
  enabled          = true
  bootstrap_expect = 3
  raft_protocol    = 3
}

Role: nomad_client

Edit roles/nomad_client/tasks/main.yml:

# roles/nomad_client/tasks/main.yml
---
- import_role:
    name: nomad_service

- name: update apt cache
  apt:
    update_cache: yes

- name: install docker and openjdk
  apt:
    name: "{{ packages }}"
    state: present
  vars:
    packages:
      - docker.io
      - openjdk-11-jdk-headless

- name: start docker service
  service:
    name: docker
    state: started

- name: copy nomad config
  template:
    src: nomad.hcl.j2
    dest: /etc/nomad.d/nomad.hcl

- name: start nomad
  service:
    name: nomad
    state: restarted
    enabled: yes

Edit roles/nomad_client/templates/nomad.hcl.j2:

# roles/nomad_client/templates/nomad.hcl.j2
data_dir  = "/var/nomad"

bind_addr = "{{ ansible_eth0.ipv4.address }}"

client {
  enabled = true
  network_interface = "eth0"
}
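
As with Consul, after running the playbook you can check the Nomad cluster from any server node, for example:

# list the three Nomad servers and check which one is the leader
lxc exec nomad-server1 -- nomad server members

# list the client nodes that registered with the servers
lxc exec nomad-server1 -- nomad node status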

step 3: Traefik

Setting up Traefik will be similar to Consul and Nomad, but a bit simpler: there will be only one role named proxy.

Edit the playbook to include Traefik:

# playbook.yml
---
- hosts: localhost
  # run this task in the host
  connection: local
  # set urls as variables
  vars:
    consul_version: "1.4.0"
    nomad_version: "0.8.6"
    traefik_version: "1.7.5"
    consul_url: "https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_linux_amd64.zip"
    nomad_url: "https://releases.hashicorp.com/nomad/{{ nomad_version }}/nomad_{{ nomad_version }}_linux_amd64.zip"
    traefik_url: "https://github.com/containous/traefik/releases/download/v{{ traefik_version }}/traefik_linux-amd64"
  tasks:
    - name: create containers
      # get all host names from inventory
      loop: "{{ groups['all'] }}"
      # use lxd_container module from ansible to create containers
      lxd_container:
        # container name is the hostname
        name: "{{ item }}"
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          alias: ubuntu/bionic/amd64
        config:
          # nomad clients need some privileges to be able to run docker containers
          security.nesting: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
          security.privileged: "{{ 'true' if item in ['nomad-client1', 'nomad-client2', 'nomad-client3'] else 'false' }}"
        devices:
          # configure network interface
          eth0:
            type: nic
            nictype: bridged
            parent: lxdbr0
            # get ip address from inventory
            ipv4.address: "{{ hostvars[item].ip_address }}"
        # # uncomment if you installed lxd using snap
        # url: unix:/var/snap/lxd/common/lxd/unix.socket

    # ensure cache directory exists
    - name: create cache directory
      file: { path: cache, state: directory }

    - name: fetch applications
      unarchive:
        src: "{{ item.url }}"
        dest: cache
        creates: "cache/{{ item.file }}"
        remote_src: yes
      loop:
        - url: "{{ consul_url }}"
          file: consul
        - url: "{{ nomad_url }}"
          file: nomad

    - name: fetch traefik
      get_url:
        url: "{{ traefik_url }}"
        dest: cache/traefik
        mode: 0755

- hosts: consul_servers
  roles:
    - consul_server

- hosts: nomad_servers
  roles:
    - consul_client
    - nomad_server

- hosts: nomad_clients
  roles:
    - consul_client
    - nomad_client

- hosts: proxy
  roles:
    - consul_client
    - proxy

The structure needed for the proxy role will be like this:

roles/
└── proxy
    ├── files
    │   └── traefik.service
    ├── tasks
    │   └── main.yml
    └── templates
        └── traefik.toml.j2

You can create the structure with the following:

mkdir -p \
  roles/proxy/tasks \
  roles/proxy/files \
  roles/proxy/templates \
&& touch \
  roles/proxy/tasks/main.yml \
  roles/proxy/files/traefik.service \
  roles/proxy/templates/traefik.toml.j2

Edit roles/proxy/tasks/main.yml:

# roles/proxy/tasks/main.yml
---
- name: install traefik
  copy:
    src: cache/traefik
    dest: /usr/local/bin/
    mode: 0755

- name: create traefik service
  copy:
    src: traefik.service
    dest: /etc/systemd/system/

- name: create traefik config directory
  file:
    path: /etc/traefik
    state: directory

- name: copy traefik config
  template:
    src: traefik.toml.j2
    dest: /etc/traefik/traefik.toml

- name: start traefik
  service:
    name: traefik
    state: restarted
    enabled: yes

Edit roles/proxy/files/traefik.service:

# roles/proxy/files/traefik.service
[Unit]
Description="Traefik Proxy"
Documentation=https://traefik.io
Requires=network-online.target
After=network-online.target
ConditionFileNotEmpty=/etc/traefik/traefik.toml

[Service]
Restart=on-failure
ExecStart=/usr/local/bin/traefik --configfile=/etc/traefik/traefik.toml
ExecReload=/bin/kill -HUP $MAINPID

[Install]
WantedBy=multi-user.target

Edit roles/proxy/templates/traefik.toml.j2:

# roles/proxy/templates/traefik.toml.j2
[file]

# Backends
[backends]
  [backends.consul]
    [backends.consul.servers]
    {% for host in groups['consul_servers'] %}
      [backends.consul.servers.{{ host }}]
        url = "http://{{ hostvars[host].ansible_eth0.ipv4.address }}:8500"
    {% endfor %}

  [backends.nomad]
    [backends.nomad.servers]
    {% for host in groups['nomad_servers'] %}
      [backends.nomad.servers.{{ host }}]
        url = "http://{{ hostvars[host].ansible_eth0.ipv4.address }}:4646"
    {% endfor %}

# Frontends
[frontends]
  [frontends.consul]
  backend = "consul"
    [frontends.consul.routes.route1]
    rule = "Host:consul.{{ ansible_eth0.ipv4.address }}.nip.io"

  [frontends.nomad]
  backend = "nomad"
    [frontends.nomad.routes.route1]
    rule = "Host:nomad.{{ ansible_eth0.ipv4.address }}.nip.io"

[consulCatalog]
endpoint = "127.0.0.1:8500"
exposedByDefault = false
domain = "service.{{ ansible_eth0.ipv4.address }}.nip.io"

[api]
dashboard = true
debug = true

The Traefik configuration exposes the Consul and Nomad dashboards and configures the Consul Catalog backend. This way, services can be automatically discovered and exposed.

We set exposedByDefault = false so only the services marked with a specific tag will be exposed, therefore reducing the risk of accidentally making an internal service public.

step 4: Deploying Services

Now that we have everything in place, we can build our cluster and see it working:

ansible-playbook playbook.yml
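
Before opening the dashboard, you can also confirm that service discovery is working by querying the Consul catalog from any node, for example:

# list the services currently registered in Consul
lxc exec proxy -- consul catalog services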

If everything went OK, we can now access the Traefik dashboard at http://10.99.0.100:8080:

Traefik dashboard 1

Not very impressive, since we do not have any service deployed yet, but if we go to the file tab:

Traefik dashboard 2

There it is! It shows Consul and Nomad!... Still not very impressive, since we configured them statically in the Traefik configuration. Let's do something more interesting:

  1. Enter any Nomad server node:
lxc exec nomad-server1 -- bash
  2. Create the Nomad job definition:
cat > hello.nomad <<EOF
job "hello-world" {
  datacenters = ["dc1"]

  group "example" {
    count = 3
    task "server" {
      # we will run a docker container
      driver = "docker"

      # resources required by the task
      resources {
        network {
          # require a random port named "http"
          port "http" {}
        }
      }

      config {
        # docker image to run
        image = "hashicorp/http-echo"
        args = [
          "-listen", ":8080",
          "-text", "hello world",
        ]

        # map container port 8080 to the random port named "http"
        port_map = {
          http = 8080
        }
      }

      # exposed service
      service {
        # service name, compose the url like 'hello-world.service.myorg.com'
        name = "hello-world"
        # service will bind to this port
        port = "http"
        # tell traefik to expose this service
        tags = ["traefik.enable=true"]
      }
    }
  }
}
EOF

In the service section, tags = ["traefik.enable=true"] is what will tell Traefik to expose the service.

  3. Deploy!
nomad job run hello.nomad

It will output something like this:

root@nomad-server1:~# nomad job run hello.nomad 
==> Monitoring evaluation "be583c44"
    Evaluation triggered by job "hello-world"
    Allocation "a19978c9" created: node "d9d3daa0", group "example"
    Allocation "0e0e0015" created: node "7fdbbd1f", group "example"
    Allocation "690efcc2" created: node "ab36a46e", group "example"
    Evaluation status changed: "pending" -> "complete"
==> Evaluation "be583c44" finished with status "complete"

Now, if we go to Traefik dashboard again:

Traefik dashboard 3

It's there!

We can access the service by opening http://hello-world.service.10.99.0.100.nip.io.

But wait a minute, what is that 10.99.0.100.nip.io URL? nip.io is a service that resolves any hostname ending in an IP address back to that IP address. Pretty handy for testing. Go to nip.io for more info.
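
A quick way to see both pieces working, assuming curl is available on your machine:

# nip.io resolves hello-world.service.10.99.0.100.nip.io to 10.99.0.100,
# and Traefik routes the request to one of the hello-world allocations
curl http://hello-world.service.10.99.0.100.nip.io
# expected response: hello world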

This service, however, is quite boring, let's deploy something more interesting:

  1. Create the Nomad job definition:
cat > gitea.nomad <<EOF
job "gitea" {
  datacenters = ["dc1"]

  group "gitea" {
    count = 1

    ephemeral_disk {
      # try to deploy this service on the same node every time
      sticky  = true
      # try to migrate the ephemeral disk if possible 
      migrate = true
      # set the ephemeral disk size to 2GB 
      size    = "2048"
    }

    task "server" {
      driver = "docker"

      config {
        image = "gitea/gitea:1.6"

        port_map = {
          http = 3000
        }

        # with the docker driver, it is possible to mount volumes inside the container from the ephemeral disk
        volumes = [
          "local/gitea-data:/data"
        ]
      }

      resources {
        network {
          port "http" {}
        }
      }

      service {
        name = "gitea"
        port = "http"
        tags = ["traefik.enable=true"]
      }
    }
  }
}
EOF
  2. Plan the job deployment:
nomad job plan gitea.nomad

It will output something like this:

+ Job: "gitea"
+ Task Group: "gitea" (1 create)
  + Task: "server" (forces create)

Scheduler dry-run:
- All tasks successfully allocated.

Job Modify Index: 0
To submit the job with version verification run:

nomad job run -check-index 0 gitea.nomad

When running the job with the check-index flag, the job will only be run if the
server side version matches the job modify index returned. If the index has
changed, another user has modified the job and the plan's results are
potentially invalid.

We did things a bit differently here. We planned the deployment: in other words, we validated the job definition (gitea.nomad) and obtained a modify index (0 in the case of a new job), so we do not risk overwriting a deployment made by another operator in the meantime.

To deploy the service, just follow the instructions Nomad gave us:

nomad job run -check-index 0 gitea.nomad

In a few minutes it will appear in the Traefik dashboard and will be accessible at http://gitea.service.10.99.0.100.nip.io:

Gitea

Conclusion

LXD is a very useful tool to test solutions that would otherwise be impossible or impractical with virtual machines. When combined with Ansible, you can quickly create test environments to evaluate these solutions in a way that is closer to a production environment than a scaled-down tool like minikube or minishift (which are still completely valid tools if you are focusing only on the applications deployed on these platforms).

Nomad is great software. Along with Consul, you have a simple yet very powerful solution to orchestrate your services. It can run Docker, rkt and LXC containers, Java applications packaged in a .jar file (like a Spring Boot application), and even plain binaries (like a Rocket application), which Nomad can retrieve as part of the job definition and execute using the isolation primitives provided by the operating system. It is not as feature-complete as Kubernetes, but it is a lot easier to operate.

Traefik integrates with a lot of services to provide automatic configuration. Its Consul Catalog integration makes it a great fit for a Nomad cluster.

Top comments (2)

Andrés Sánchez García

hi,

Good post!

Check out the Molecule framework for testing Ansible roles. It simulates an infrastructure.

molecule.readthedocs.io/en/stable/

Oleg

Hello!

How can I use my own local images in ansible playbook (not from internet), to create new containers?