Karim

Originally published at deep75.Medium

Load Balancing with HAProxy, Nomad and Consul…

Starting with Consul 1.14, HashiCorp introduced a new data plane, traffic management within a service mesh between cluster peers, and improvements to service failover:

Consul 1.14 GA: Announcing Simplified Service Mesh Deployments

This takes shape as Consul Service Mesh, which provides authorization and encryption of connections between services using mutual Transport Layer Security (mTLS). The name Consul Connect is used interchangeably with Consul Service Mesh and refers to the service mesh functionality within Consul.

Applications can use sidecar proxies in a service mesh configuration to establish TLS connections for inbound and outbound traffic without being aware of Connect at all. Applications can also integrate natively with Connect for optimal performance and security. Connect helps secure services and provides data about service-to-service communications.

Service Mesh on Consul | Consul | HashiCorp Developer
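To make the authorization side more concrete: service-to-service traffic in the mesh is governed by intentions. In this local dev setup intentions default to allow, but as a hedged sketch (borrowing the service names from the countdash example further down), an explicit intention can be written as a config entry and applied with consul config write:

# Hypothetical intention: only count-dashboard may open connections to count-api
cat <<'EOF' > count-api-intentions.hcl
Kind = "service-intentions"
Name = "count-api"
Sources = [
  {
    Name   = "count-dashboard"
    Action = "allow"
  }
]
EOF
consul config write count-api-intentions.hcl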

A first example of integrating Nomad with Consul Service Mesh, here on an Ubuntu 22.04 LTS instance in Linode with the Docker engine preinstalled:


root@localhost:~# curl -fsSL https://get.docker.com | sh -

Client: Docker Engine - Community
 Version: 20.10.22
 API version: 1.41
 Go version: go1.18.9
 Git commit: 3a2c30b
 Built: Thu Dec 15 22:28:04 2022
 OS/Arch: linux/amd64
 Context: default
 Experimental: true

Server: Docker Engine - Community
 Engine:
  Version: 20.10.22
  API version: 1.41 (minimum version 1.12)
  Go version: go1.18.9
  Git commit: 42c8b31
  Built: Thu Dec 15 22:25:49 2022
  OS/Arch: linux/amd64
  Experimental: false
 containerd:
  Version: 1.6.14
  GitCommit: 9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc:
  Version: 1.1.4
  GitCommit: v1.1.4-0-g5fd4c4d
 docker-init:
  Version: 0.19.0
  GitCommit: de40ad0

I grab the latest release of Consul:

Downloads | Consul by HashiCorp

root@localhost:~# wget -c https://releases.hashicorp.com/consul/1.14.3/consul_1.14.3_linux_amd64.zip

root@localhost:~# unzip consul_1.14.3_linux_amd64.zip && chmod +x consul && mv consul /usr/local/bin/

Archive: consul_1.14.3_linux_amd64.zip
  inflating: consul   

root@localhost:~# consul

Usage: consul [--version] [--help] <command> [<args>]

Available commands are:
    acl Interact with Consul's ACLs
    agent Runs a Consul agent
    catalog Interact with the catalog
    config Interact with Consul's Centralized Configurations
    connect Interact with Consul Connect
    debug Records a debugging archive for operators
    event Fire a new event
    exec Executes a command on Consul nodes
    force-leave Forces a member of the cluster to enter the "left" state
    info Provides debugging information for operators.
    intention Interact with Connect service intentions
    join Tell Consul agent to join cluster
    keygen Generates a new encryption key
    keyring Manages gossip layer encryption keys
    kv Interact with the key-value store
    leave Gracefully leaves the Consul cluster and shuts down
    lock Execute a command holding a lock
    login Login to Consul using an auth method
    logout Destroy a Consul token created with login
    maint Controls node or service maintenance mode
    members Lists the members of a Consul cluster
    monitor Stream logs from a Consul agent
    operator Provides cluster-level tools for Consul operators
    peering Create and manage peering connections between Consul clusters
    reload Triggers the agent to reload configuration files
    rtt Estimates network round trip time between nodes
    services Interact with services
    snapshot Saves, restores and inspects snapshots of Consul server state
    tls Builtin helpers for creating CAs and certificates
    validate Validate config files/directories
    version Prints the Consul version
    watch Watch for changes in Consul

Next comes the installation of Node.js via Node Version Manager (nvm) and of the well-known process manager PM2:

root@localhost:~# wget -qO- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash

=> Downloading nvm from git to '/root/.nvm'
=> Cloning into '/root/.nvm'...

export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion

root@localhost:~# source .bashrc

root@localhost:~# nvm

Node Version Manager (v0.39.3)

Note: <version> refers to any version-like string nvm understands. This includes:
  - full or partial version numbers, starting with an optional "v" (0.10, v0.1.2, v1)
  - default (built-in) aliases: node, stable, unstable, iojs, system
  - custom aliases you define with `nvm alias foo`

 Any options that produce colorized output should respect the `--no-colors` option.

root@localhost:~# nvm install --lts

Installing latest LTS version.
Downloading and installing node v18.12.1...
Downloading https://nodejs.org/dist/v18.12.1/node-v18.12.1-linux-x64.tar.xz...
############################################################################################################################################################################ 100.0%
Computing checksum with sha256sum
Checksums matched!
Now using node v18.12.1 (npm v8.19.2)
Creating default alias: default -> lts/* (-> v18.12.1)

root@localhost:~# node -v && npm -v
v18.12.1
8.19.2

root@localhost:~# npm install -g pm2@latest

root@localhost:~# pm2

                        -------------

__/\\\\\\\\\\\\\____/\\\\____________/\\\\____/\\\\\\\\\_____
 _\/\\\/////////\\\_\/\\\\\\________/\\\\\\__/\\\///////\\\___
  _\/\\\_______\/\\\_\/\\\//\\\____/\\\//\\\_\///______\//\\\__
   _\/\\\\\\\\\\\\\/__\/\\\\///\\\/\\\/_\/\\\___________/\\\/___
    _\/\\\/////////____\/\\\__\///\\\/___\/\\\________/\\\//_____
     _\/\\\_____________\/\\\____\///_____\/\\\_____/\\\//________
      _\/\\\_____________\/\\\_____________\/\\\___/\\\/___________
       _\/\\\_____________\/\\\_____________\/\\\__/\\\\\\\\\\\\\\\_
        _\///______________\///______________\///__\///////////////__

                          Runtime Edition

        PM2 is a Production Process Manager for Node.js applications
                     with a built-in Load Balancer.

                Start and Daemonize any application:
                $ pm2 start app.js

                Load Balance 4 instances of api.js:
                $ pm2 start api.js -i 4

                Monitor in production:
                $ pm2 monitor

                Make pm2 auto-boot at server restart:
                $ pm2 startup

                To go further checkout:
                http://pm2.io/

                        -------------

usage: pm2 [options] <command>

pm2 -h, --help all available commands and options
pm2 examples display pm2 usage examples
pm2 <command> -h help on a specific command

Access pm2 files in ~/.pm2

I can now launch Consul locally with PM2:

root@localhost:~# cat consul.sh 

#!/bin/bash
consul agent -dev -bind 0.0.0.0 -log-level INFO

root@localhost:~# pm2 start consul.sh 

[PM2] Spawning PM2 daemon with pm2_home=/root/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /root/consul.sh in fork_mode (1 instance)
[PM2] Done.
┌─────┬───────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼───────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ consul │ default │ N/A │ fork │ 8699 │ 0s │ 0 │ online │ 0% │ 3.5mb │ root │ disabled │
└─────┴───────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘

root@localhost:~# pm2 logs 0

[TAILING] Tailing last 15 lines for [0] process (change the value with --lines option)
/root/.pm2/logs/consul-error.log last 15 lines:
/root/.pm2/logs/consul-out.log last 15 lines:
0|consul | 2022-12-27T22:46:42.549Z [INFO] agent.leader: started routine: routine="metrics for streaming peering resources"
0|consul | 2022-12-27T22:46:42.549Z [INFO] agent.leader: started routine: routine="peering deferred deletion"
0|consul | 2022-12-27T22:46:42.550Z [INFO] agent.server: New leader elected: payload=localhost
0|consul | 2022-12-27T22:46:42.551Z [INFO] connect.ca: updated root certificates from primary datacenter
0|consul | 2022-12-27T22:46:42.551Z [INFO] connect.ca: initialized primary datacenter CA with provider: provider=consul
0|consul | 2022-12-27T22:46:42.551Z [INFO] agent.leader: started routine: routine="intermediate cert renew watch"
0|consul | 2022-12-27T22:46:42.551Z [INFO] agent.leader: started routine: routine="CA root pruning"
0|consul | 2022-12-27T22:46:42.551Z [INFO] agent.leader: started routine: routine="CA root expiration metric"
0|consul | 2022-12-27T22:46:42.551Z [INFO] agent.leader: started routine: routine="CA signing expiration metric"
0|consul | 2022-12-27T22:46:42.551Z [INFO] agent.leader: started routine: routine="virtual IP version check"
0|consul | 2022-12-27T22:46:42.551Z [INFO] agent.server: member joined, marking health alive: member=localhost partition=default
0|consul | 2022-12-27T22:46:42.552Z [INFO] agent.leader: stopping routine: routine="virtual IP version check"
0|consul | 2022-12-27T22:46:42.552Z [INFO] agent.leader: stopped routine: routine="virtual IP version check"
0|consul | 2022-12-27T22:46:42.604Z [INFO] agent.server: federation state anti-entropy synced
0|consul | 2022-12-27T22:46:42.718Z [INFO] agent: Synced node info
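Note that nothing restarts these PM2-managed processes after a reboot by default; a minimal sketch to persist them, using PM2's own startup/save mechanism (run as root here):

# Generate and install an init script (systemd on Ubuntu 22.04) for the PM2 daemon
pm2 startup systemd
# Save the current process list so PM2 resurrects it at boot
pm2 save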

Consul is now running in the background…


root@localhost:~# netstat -tunlp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name    
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 594/systemd-resolve 
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 747/sshd: /usr/sbin 
tcp 0 0 127.0.0.1:8500 0.0.0.0:* LISTEN 8700/consul         
tcp 0 0 127.0.0.1:8502 0.0.0.0:* LISTEN 8700/consul         
tcp 0 0 127.0.0.1:8503 0.0.0.0:* LISTEN 8700/consul         
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 8700/consul         
tcp6 0 0 :::22 :::* LISTEN 747/sshd: /usr/sbin 
tcp6 0 0 :::8301 :::* LISTEN 8700/consul         
tcp6 0 0 :::8300 :::* LISTEN 8700/consul         
tcp6 0 0 :::8302 :::* LISTEN 8700/consul         
udp 0 0 127.0.0.53:53 0.0.0.0:* 594/systemd-resolve 
udp 0 0 127.0.0.1:8600 0.0.0.0:* 8700/consul         
udp6 0 0 :::8301 :::* 8700/consul         
udp6 0 0 :::8302 :::* 8700/consul 
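Beyond the listening ports, a couple of quick checks confirm that the dev agent is healthy (dig comes from the dnsutils package, which is assumed to be installed):

# The HTTP API answers on 8500...
curl -s http://127.0.0.1:8500/v1/status/leader
# ...the agent sees itself as a member of the (single-node) cluster...
consul members
# ...and the DNS interface on 8600 already resolves the built-in consul service
dig @127.0.0.1 -p 8600 consul.service.consul +short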

Nomad must be scheduled on a routable interface so that the proxies can connect to one another. The following steps show how to start a Nomad development agent configured for Consul service mesh.

I download Nomad and, once again, launch it with PM2:

Nomad by HashiCorp


root@localhost:~# wget -c https://releases.hashicorp.com/nomad/1.4.3/nomad_1.4.3_linux_amd64.zip

root@localhost:~# unzip nomad_1.4.3_linux_amd64.zip && chmod +x nomad && mv nomad /usr/local/bin/

Archive: nomad_1.4.3_linux_amd64.zip
  inflating: nomad 

root@localhost:~# nomad

Usage: nomad [-version] [-help] [-autocomplete-(un)install] <command> [args]

Common commands:
    run Run a new job or update an existing job
    stop Stop a running job
    status Display the status output for a resource
    alloc Interact with allocations
    job Interact with jobs
    node Interact with nodes
    agent Runs a Nomad agent

Other commands:
    acl Interact with ACL policies and tokens
    agent-info Display status information about the local agent
    config Interact with configurations
    deployment Interact with deployments
    eval Interact with evaluations
    exec Execute commands in task
    fmt Rewrites Nomad config and job files to canonical format
    license Interact with Nomad Enterprise License
    monitor Stream logs from a Nomad agent
    namespace Interact with namespaces
    operator Provides cluster-level tools for Nomad operators
    plugin Inspect plugins
    quota Interact with quotas
    recommendation Interact with the Nomad recommendation endpoint
    scaling Interact with the Nomad scaling endpoint
    sentinel Interact with Sentinel policies
    server Interact with servers
    service Interact with registered services
    system Interact with the system API
    ui Open the Nomad Web UI
    var Interact with variables
    version Prints the Nomad version
    volume Interact with volumes

root@localhost:~# cat nomad.sh 

#!/bin/bash
nomad agent -dev-connect -bind 0.0.0.0 -log-level INFO

root@localhost:~# pm2 start nomad.sh 

[PM2] Starting /root/nomad.sh in fork_mode (1 instance)
[PM2] Done.
┌─────┬───────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼───────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ consul │ default │ N/A │ fork │ 8699 │ 8m │ 0 │ online │ 0% │ 3.5mb │ root │ disabled │
│ 1 │ nomad │ default │ N/A │ fork │ 9147 │ 0s │ 0 │ online │ 0% │ 3.4mb │ root │ disabled │
└─────┴───────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘

root@localhost:~# pm2 logs 1

[TAILING] Tailing last 15 lines for [1] process (change the value with --lines option)
/root/.pm2/logs/nomad-error.log last 15 lines:
/root/.pm2/logs/nomad-out.log last 15 lines:
1|nomad | 2022-12-27T22:55:36.577Z [INFO] client.plugin: starting plugin manager: plugin-type=csi
1|nomad | 2022-12-27T22:55:36.577Z [INFO] client.plugin: starting plugin manager: plugin-type=driver
1|nomad | 2022-12-27T22:55:36.577Z [INFO] client.plugin: starting plugin manager: plugin-type=device
1|nomad | 2022-12-27T22:55:36.606Z [INFO] client: started client: node_id=c1ccb5d1-3acf-6235-7cb6-7d75bbbc6c80
1|nomad | 2022-12-27T22:55:37.617Z [WARN] nomad.raft: heartbeat timeout reached, starting election: last-leader-addr= last-leader-id=
1|nomad | 2022-12-27T22:55:37.619Z [INFO] nomad.raft: entering candidate state: node="Node at 172.17.0.1:4647 [Candidate]" term=2
1|nomad | 2022-12-27T22:55:37.619Z [INFO] nomad.raft: election won: term=2 tally=1
1|nomad | 2022-12-27T22:55:37.619Z [INFO] nomad.raft: entering leader state: leader="Node at 172.17.0.1:4647 [Leader]"
1|nomad | 2022-12-27T22:55:37.619Z [INFO] nomad: cluster leadership acquired
1|nomad | 2022-12-27T22:55:37.626Z [INFO] nomad.core: established cluster id: cluster_id=0eca49e5-2f55-e90a-4601-e9898c3fb97e create_time=1672181737626487176
1|nomad | 2022-12-27T22:55:37.627Z [INFO] nomad: eval broker status modified: paused=false
1|nomad | 2022-12-27T22:55:37.627Z [INFO] nomad: blocked evals status modified: paused=false
1|nomad | 2022-12-27T22:55:37.629Z [INFO] nomad.keyring: initialized keyring: id=d03630b4-de3f-5cba-607a-315b1458eadd
1|nomad | 2022-12-27T22:55:37.864Z [INFO] client: node registration complete
1|nomad | 2022-12-27T22:55:38.866Z [INFO] client: node registration complete
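A quick sanity check confirms that the Nomad dev agent is up and has registered itself with the local Consul agent:

# One client node should be reported as ready
nomad node status
# Nomad registers its own services (nomad, nomad-client) in the Consul catalog
consul catalog services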

The web UIs for Consul and Nomad are now exposed here:

root@localhost:~# netstat -tunlp

Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name    
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN 594/systemd-resolve 
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 747/sshd: /usr/sbin 
tcp 0 0 127.0.0.1:8500 0.0.0.0:* LISTEN 8700/consul         
tcp 0 0 127.0.0.1:8502 0.0.0.0:* LISTEN 8700/consul         
tcp 0 0 127.0.0.1:8503 0.0.0.0:* LISTEN 8700/consul         
tcp 0 0 127.0.0.1:8600 0.0.0.0:* LISTEN 8700/consul         
tcp6 0 0 :::4648 :::* LISTEN 9148/nomad          
tcp6 0 0 :::4647 :::* LISTEN 9148/nomad          
tcp6 0 0 :::4646 :::* LISTEN 9148/nomad          
tcp6 0 0 :::22 :::* LISTEN 747/sshd: /usr/sbin 
tcp6 0 0 :::8301 :::* LISTEN 8700/consul         
tcp6 0 0 :::8300 :::* LISTEN 8700/consul         
tcp6 0 0 :::8302 :::* LISTEN 8700/consul         
udp 0 0 127.0.0.53:53 0.0.0.0:* 594/systemd-resolve 
udp 0 0 127.0.0.1:8600 0.0.0.0:* 8700/consul         
udp6 0 0 :::4648 :::* 9148/nomad          
udp6 0 0 :::8301 :::* 8700/consul         
udp6 0 0 :::8302 :::* 8700/consul  

Nomad uses CNI plugins to configure the network namespace used to secure the Consul service mesh sidecar proxy. The CNI plugins must be installed on all Nomad client nodes that use network namespaces.

CNI

root@localhost:~# curl -L -o cni-plugins.tgz "https://github.com/containernetworking/plugins/releases/download/v1.0.0/cni-plugins-linux-$( [ $(uname -m) = aarch64 ] && echo arm64 || echo amd64)"-v1.0.0.tgz
root@localhost:~# mkdir -p /opt/cni/bin
root@localhost:~# tar -C /opt/cni/bin -xzf cni-plugins.tgz
root@localhost:~# echo 1 | tee /proc/sys/net/bridge/bridge-nf-call-arptables
1
root@localhost:~# echo 1 | tee /proc/sys/net/bridge/bridge-nf-call-ip6tables
1
root@localhost:~# echo 1 | tee /proc/sys/net/bridge/bridge-nf-call-iptables
1
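These bridge netfilter settings do not survive a reboot; a minimal sketch to make them permanent (the file name under /etc/sysctl.d is arbitrary, and the br_netfilter module must be loaded for the keys to exist):

# Persist the bridge netfilter settings required by Nomad's bridge networking
cat <<'EOF' > /etc/sysctl.d/99-nomad-bridge.conf
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF
# Reload every sysctl configuration file
sysctl --system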

I then submit the following job to Nomad, with an API service and a web frontend:

root@localhost:~# cat servicemesh.nomad 

job "countdash" {
  datacenters = ["dc1"]

  group "api" {
    network {
      mode = "bridge"
    }

    service {
      name = "count-api"
      port = "9001"

      connect {
        sidecar_service {}
      }
    }

    task "web" {
      driver = "docker"

      config {
        image = "hashicorpdev/counter-api:v3"
      }
    }
  }

  group "dashboard" {
    network {
      mode = "bridge"

      port "http" {
        static = 9002
        to = 9002
      }
    }

    service {
      name = "count-dashboard"
      port = "http"

      connect {
        sidecar_service {
          proxy {
            upstreams {
              destination_name = "count-api"
              local_bind_port = 8080
            }
          }
        }
      }
    }

    task "dashboard" {
      driver = "docker"

      env {
        COUNTING_SERVICE_URL = "http://${NOMAD_UPSTREAM_ADDR_count_api}"
      }

      config {
        image = "hashicorpdev/counter-dashboard:v3"
      }
    }
  }
}
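Optionally, a dry run previews the scheduler's placement decisions without submitting anything:

# Shows what Nomad would place for the two task groups, without running the job
nomad job plan servicemesh.nomad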

root@localhost:~# nomad job run servicemesh.nomad

==> 2022-12-27T23:12:42Z: Monitoring evaluation "11998ad9"
    2022-12-27T23:12:42Z: Evaluation triggered by job "countdash"
    2022-12-27T23:12:42Z: Allocation "46f7e903" created: node "c1ccb5d1", group "dashboard"
    2022-12-27T23:12:42Z: Allocation "8c6d70e9" created: node "c1ccb5d1", group "api"
    2022-12-27T23:12:43Z: Evaluation within deployment: "3fc9e54e"
    2022-12-27T23:12:43Z: Evaluation status changed: "pending" -> "complete"
==> 2022-12-27T23:12:43Z: Evaluation "11998ad9" finished with status "complete"
==> 2022-12-27T23:12:43Z: Monitoring deployment "3fc9e54e"
  ✓ Deployment "3fc9e54e" successful

    2022-12-27T23:13:05Z
    ID = 3fc9e54e
    Job ID = countdash
    Job Version = 0
    Status = successful
    Description = Deployment completed successfully

    Deployed
    Task Group Desired Placed Healthy Unhealthy Progress Deadline
    api 1 1 1 0 2022-12-27T23:23:04Z
    dashboard 1 1 1 0 2022-12-27T23:23:04Z

root@localhost:~# nomad job status

ID Type Priority Status Submit Date
countdash service 50 running 2022-12-27T23:12:42Z

The job, running via Docker, is visible in the dashboards:

The web frontend, exposed here on port 9002, connects to the API service through the service mesh provided by Consul:
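A quick check from the node itself confirms that the dashboard answers on its static port (the response body is not shown here):

# Should print 200 once the allocation is healthy
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:9002/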

But I can also add a load balancing layer with Consul and HAProxy.

Let's start by launching a job with the perennial FranceConnect demonstrator in the background:

root@localhost:~# cat webapp.nomad 

job "demo-webapp" {
  datacenters = ["dc1"]

  group "demo" {
    count = 3

    network {
      port "http" { }
    }

    service {
      name = "demo-webapp"
      port = "http"

      check {
        type = "http"
        path = "/"
        interval = "2s"
        timeout = "2s"
      }
    }

    task "server" {
      env {
        PORT = "${NOMAD_PORT_http}"
        NODE_IP = "${NOMAD_IP_http}"
      }

      driver = "docker"

      config {
        image = "mcas/franceconnect-demo3:latest"
        ports = ["http"]
      }
    }
  }
}

Here I launch three instances of the FC demonstrator web application, which the HAProxy configuration can then target.

root@localhost:~# nomad run webapp.nomad

==> 2022-12-28T00:08:46Z: Monitoring evaluation "0d8784ab"
    2022-12-28T00:08:46Z: Evaluation triggered by job "demo-webapp"
    2022-12-28T00:08:47Z: Evaluation within deployment: "e3c5978b"
    2022-12-28T00:08:47Z: Allocation "591957ed" created: node "c1ccb5d1", group "demo"
    2022-12-28T00:08:47Z: Allocation "aaf70671" created: node "c1ccb5d1", group "demo"
    2022-12-28T00:08:47Z: Allocation "20eb6b05" created: node "c1ccb5d1", group "demo"
    2022-12-28T00:08:47Z: Evaluation status changed: "pending" -> "complete"
==> 2022-12-28T00:08:47Z: Evaluation "0d8784ab" finished with status "complete"
==> 2022-12-28T00:08:47Z: Monitoring deployment "e3c5978b"
  ✓ Deployment "e3c5978b" successful

    2022-12-28T00:09:09Z
    ID = e3c5978b
    Job ID = demo-webapp
    Job Version = 0
    Status = successful
    Description = Deployment completed successfully

    Deployed
    Task Group Desired Placed Healthy Unhealthy Progress Deadline
    demo 3 3 3 0 2022-12-28T00:19:08Z
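Before wiring up HAProxy, a quick check that the three instances are registered and healthy in Consul (jq is assumed to be installed):

# Counts the demo-webapp instances whose health checks pass; expected: 3
curl -s 'http://127.0.0.1:8500/v1/health/service/demo-webapp?passing' | jq length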

I can now create a job for HAProxy that will balance requests across the deployed instances:

root@localhost:~# cat haproxy.nomad 

job "haproxy" {
  region = "global"
  datacenters = ["dc1"]
  type = "service"

  group "haproxy" {
    count = 1

    network {
      port "http" {
        static = 80
      }

      port "haproxy_ui" {
        static = 1936
      }
    }

    service {
      name = "haproxy"

      check {
        name = "alive"
        type = "tcp"
        port = "http"
        interval = "10s"
        timeout = "2s"
      }
    }

    task "haproxy" {
      driver = "docker"

      config {
        image = "haproxy:2.0"
        network_mode = "host"

        volumes = [
          "local/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg",
        ]
      }

      template {
        data = <<EOF
defaults
   mode http

frontend stats
   bind *:1936
   stats uri /
   stats show-legends
   no log

frontend http_front
   bind *:80
   default_backend http_back

backend http_back
    balance roundrobin
    server-template mywebapp 10 _demo-webapp._tcp.service.consul resolvers consul resolve-opts allow-dup-ip resolve-prefer ipv4 check

resolvers consul
    nameserver consul 127.0.0.1:8600
    accepted_payload_size 8192
    hold valid 5s
EOF

        destination = "local/haproxy.cfg"
      }

      resources {
        cpu = 200
        memory = 128
      }
    }
  }
}
root@localhost:~# nomad run haproxy.nomad 

==> 2022-12-28T00:22:50Z: Monitoring evaluation "d24577a0"
    2022-12-28T00:22:50Z: Evaluation triggered by job "haproxy"
    2022-12-28T00:22:50Z: Evaluation within deployment: "5c6481d1"
    2022-12-28T00:22:50Z: Allocation "9a4363d6" created: node "c1ccb5d1", group "haproxy"
    2022-12-28T00:22:50Z: Evaluation status changed: "pending" -> "complete"
==> 2022-12-28T00:22:50Z: Evaluation "d24577a0" finished with status "complete"
==> 2022-12-28T00:22:50Z: Monitoring deployment "5c6481d1"
  ✓ Deployment "5c6481d1" successful

    2022-12-28T00:23:13Z
    ID = 5c6481d1
    Job ID = haproxy
    Job Version = 0
    Status = successful
    Description = Deployment completed successfully

    Deployed
    Task Group Desired Placed Healthy Unhealthy Progress Deadline
    haproxy 1 1 1 0 2022-12-28T00:33:12Z

root@localhost:~# nomad job status

ID Type Priority Status Submit Date
countdash service 50 running 2022-12-27T23:12:42Z
demo-webapp service 50 running 2022-12-28T00:08:46Z
haproxy service 50 running 2022-12-28T00:22:50Z
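To confirm that the template stanza rendered the HAProxy configuration as expected, the file can be read back from the allocation's filesystem (allocation ID taken from the output above):

# Print the haproxy.cfg generated in the haproxy task's local/ directory
nomad alloc fs 9a4363d6 haproxy/local/haproxy.cfg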

Consul lets HAProxy use the DNS SRV records of the backend service demo-webapp.service.consul to discover the available instances of that service.
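The same SRV records that HAProxy's consul resolver relies on can be inspected directly against Consul's DNS interface (again assuming dig is available):

# One SRV record per healthy demo-webapp instance, each with its dynamic port
dig @127.0.0.1 -p 8600 _demo-webapp._tcp.service.consul SRV +short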

And I can indeed check the HAProxy statistics page on the port defined earlier, TCP 1936:

as well as the FC demonstrator being load balanced by HAProxy on port 80:

Traefik, Nginx or Fabio, for example, could also be used for the load balancing layer with Nomad and Consul…

We could go further by adding an external application load balancer (ALB) to allow traffic to reach internal services and to spread it across several instances of a load balancer such as HAProxy/Nginx/Traefik/Fabio. In that setup, the ALB is responsible for forwarding traffic according to the application service requested, while the load balancer is responsible for balancing traffic across the multiple instances of that same application service.

Managing External Traffic with Application Load Balancing | Nomad | HashiCorp Developer

and to take advantage of Consul Service Mesh to discover, connect and secure services…

To be continued!
