Karim

Originally published at deep75.Medium

Deploying a Nomad and Consul cluster very quickly with hashi-up …

hashi-up is a lightweight utility for installing HashiCorp Consul, Nomad or Vault on any remote Linux host.

Load Balancing with HAProxy, Nomad and Consul …

All you need is SSH access and the hashi-up binary to build your cluster.

GitHub - jsiebens/hashi-up: bootstrap HashiCorp Consul, Nomad, or Vault over SSH < 1 minute

The tool is written in Go and is compiled for Linux, Windows, macOS and even the Raspberry Pi.

Building a Nomad cluster on Raspberry Pi running Ubuntu server

This project is heavily inspired by the work of Alex Ellis, who created k3sup, a tool to go from zero to KUBECONFIG with k3s.

Implementation, starting with the launch of an Ubuntu 22.04 LTS server allowing nested virtualization on DigitalOcean:

curl -X POST -H 'Content-Type: application/json' \
    -H "Authorization: Bearer $TOKEN" \
    -d '{"name":"minione",
        "size":"m-4vcpu-32gb",
        "region":"fra1",
        "image":"ubuntu-22-04-x64",
        "monitoring":true,
        "vpc_uuid":"8b8c0544-e7b6-4d0a-977d-4406ea518f7a"}' \
    "https://api.digitalocean.com/v2/droplets"

I will install OpenNebula on it via the miniONE script:

root@minione:~# wget -c https://raw.githubusercontent.com/OpenNebula/minione/master/minione
--2023-03-11 14:21:46-- https://raw.githubusercontent.com/OpenNebula/minione/master/minione
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.110.133, 185.199.109.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.110.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 51458 (50K) [text/plain]
Saving to: ‘minione’

minione 100%[=============================================================================================>] 50.25K --.-KB/s in 0s      

2023-03-11 14:21:46 (149 MB/s) - ‘minione’ saved [51458/51458]

A very fast, automated deployment by running the script:

root@minione:~# bash minione

### Checks & detection
Checking augeas is installed SKIP will try to install
Checking AppArmor SKIP will try to modify
Checking for present ssh key SKIP
Checking (iptables|netfilter)-persistent are installed SKIP will try to install
Checking docker is installed SKIP will try to install
Checking python3-pip is installed SKIP will try to install
Checking ansible SKIP will try to install
Checking terraform SKIP will try to install
Checking unzip is installed SKIP will try to install

### Main deployment steps:
Install OpenNebula frontend version 6.6
Install Terraform
Install Docker
Configure bridge minionebr with IP 172.16.100.1/24
Enable NAT over eth0
Modify AppArmor
Install OpenNebula KVM node
Export appliance and update VM template
Install augeas-tools iptables-persistent netfilter-persistent python3-pip unzip
Install pip 'ansible==2.9.9'

Do you agree? [yes/no]:
yes

### Installation
Updating APT cache OK
Install augeas-tools iptables-persistent netfilter-persistent python3-pip unzip OK
Updating PIP OK
Install from PyPI 'ansible==2.9.9' OK
Creating bridge interface minionebr OK
Bring bridge interfaces up OK
Enabling IPv4 forward OK
Persisting IPv4 forward OK
Configuring NAT using iptables OK
Saving iptables changes OK
Installing DNSMasq OK
Starting DNSMasq OK
Configuring repositories OK
Updating APT cache OK
Installing OpenNebula packages OK
Installing opennebula-provision package OK
Installing TerraForm OK
Create docker packages repository OK
Install docker OK
Start docker service OK
Enable docker service OK
Installing OpenNebula kvm node packages OK
Updating AppArmor OK
Disable default libvirtd networking OK
Restart libvirtd OK

### Configuration
Generating ssh keypair in /root/.ssh-oneprovision/id_rsa OK
Add oneadmin to docker group OK
Update network hooks OK
Switching OneGate endpoint in oned.conf OK
Switching OneGate endpoint in onegate-server.conf OK
Switching keep_empty_bridge on in OpenNebulaNetwork.conf OK
Switching scheduler interval in oned.conf OK
Setting initial password for current user and oneadmin OK
Changing WebUI to listen on port 80 OK
Switching FireEdge public endpoint OK
Starting OpenNebula services OK
Enabling OpenNebula services OK
Add ssh key to oneadmin user OK
Update ssh configs to allow VM addresses reusing OK
Ensure own hostname is resolvable OK
Checking OpenNebula is working OK
Disabling ssh from virtual network OK
Adding localhost ssh key to known_hosts OK
Testing ssh connection to localhost OK
Updating datastores template OK
Creating KVM host OK
Restarting OpenNebula OK
Creating virtual network OK
Exporting [Alpine Linux 3.17] from Marketplace to local datastore OK
Waiting until the image is ready OK
Updating VM template OK

### Report
OpenNebula 6.6 was installed
Sunstone is running on:
  http://164.90.215.231/
FireEdge is running on:
  http://164.90.215.231:2616/
Use following to login:
  user: oneadmin
  password: SyEnJiYuWC

Loading an Ubuntu 22.04 LTS image in order to create several virtual machines:

Launching a first Ubuntu virtual machine:
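On the CLI side this roughly corresponds to exporting the image from the OpenNebula Marketplace and instantiating a template from it (a hedged sketch; the app, template and VM names are illustrative, not necessarily the ones used here):

root@minione:~# onemarketapp export 'Ubuntu 22.04' ubuntu2204 --datastore default
root@minione:~# onetemplate instantiate ubuntu2204 --name server0
root@minione:~# onevm list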

and then installing hashi-up:

root@minione:~# curl -sLS https://get.hashi-up.dev | sh

Downloading package https://github.com/jsiebens/hashi-up/releases/download/v0.16.0/hashi-up as /tmp/hashi-up
Download complete.

Running with sufficient permissions to attempt to move hashi-up to /usr/local/bin
New version of hashi-up installed to /usr/local/bin
Version: 0.16.0
Git Commit: b062f5d

root@minione:~# hashi-up version
Version: 0.16.0
Git Commit: b062f5d

Then, setting up a Consul server:

root@minione:~# export IP=172.16.100.2

root@minione:~# hashi-up consul install --ssh-target-addr $IP --ssh-target-user root --server --client-addr 0.0.0.0 --ssh-target-key .ssh/id_rsa --version 1.15.1

[INFO] Uploading generated Consul configuration ...
[INFO] Installing Consul ...
[INFO] -> User 'consul' already exists, will not create again
[INFO] -> Copying configuration files
[INFO] -> Downloading consul_1.15.1_linux_amd64.zip
[INFO] -> Downloading consul_1.15.1_SHA256SUMS
[INFO] -> Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
[INFO] -> Unpacking consul_1.15.1_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/consul.service
[INFO] -> Enabling systemd service
[INFO] -> Starting systemd service
[INFO] Done.

I use the “poor man’s VPN”, sshuttle, to connect to the Consul server’s dashboard:

Usage - sshuttle 1.1.1 documentation

$ sshuttle --dns -NHr root@164.90.215.231 0.0.0.0/0
c : Connected to server.

This lets me reach the Consul server directly …
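Through the tunnel, the Consul HTTP API and web UI on the server become reachable from my workstation; a quick hedged check against the address used above:

$ curl http://172.16.100.2:8500/v1/status/leader   # should print the current leader address
# the dashboard itself is served at http://172.16.100.2:8500/ui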

I can then attach a Nomad server to it with hashi-up:

Install | Nomad | HashiCorp Developer

root@minione:~# hashi-up nomad install --ssh-target-addr $IP --ssh-target-user root --server --ssh-target-key .ssh/id_rsa --version 1.5.0

[INFO] Uploading generated Nomad configuration ...
[INFO] Installing Nomad ...
[INFO] -> Copying configuration files
[INFO] -> Downloading nomad_1.5.0_linux_amd64.zip
[INFO] -> Downloading nomad_1.5.0_SHA256SUMS
[INFO] -> Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
[INFO] -> Unpacking nomad_1.5.0_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/nomad.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
[INFO] -> Starting systemd service
[INFO] Done.

A web UI is also available for Nomad at this stage …
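The same can be checked from the command line by pointing the Nomad CLI at the new server (a small sketch, reusing the address from above):

$ export NOMAD_ADDR=http://172.16.100.2:4646
$ nomad server members
$ nomad node status   # no clients registered yet at this stage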

Launching the client side of Consul and Nomad on three more Ubuntu 22.04 LTS virtual machines in OpenNebula:

Customizing these three virtual machines with pyinfra, another Python-based alternative to Ansible, installed via pipx:

pyinfra


root@minione:~# pipx install pyinfra

  installed package pyinfra 2.6.2, installed using Python 3.10.6
  These apps are now globally available
    - pyinfra
⚠️ Note: '/root/.local/bin' is not on your PATH environment variable. These apps will not be globally accessible until your PATH is updated. Run `pipx ensurepath` to
    automatically add it, or manually modify your PATH in your shell config file (i.e. ~/.bashrc).
done! ✨ 🌟 ✨

root@minione:~# pipx ensurepath

Success! Added /root/.local/bin to the PATH environment variable.

Consider adding shell completions for pipx. Run 'pipx completions' for instructions.

You will need to open a new terminal or re-login for the PATH changes to take effect.

Otherwise pipx is ready to go! ✨ 🌟 ✨

root@minione:~# source .bashrc

root@minione:~# pyinfra --help

Usage: pyinfra [OPTIONS] INVENTORY OPERATIONS...

  pyinfra manages the state of one or more servers. It can be used for
  app/service deployment, config management and ad-hoc command execution.

  Documentation: pyinfra.readthedocs.io

  # INVENTORY

  + a file (inventory.py)
  + hostname (host.net)
  + Comma separated hostnames:
    host-1.net,host-2.net,@local

  # OPERATIONS

  # Run one or more deploys against the inventory
  pyinfra INVENTORY deploy_web.py [deploy_db.py]...

  # Run a single operation against the inventory
  pyinfra INVENTORY server.user pyinfra home=/home/pyinfra

  # Execute an arbitrary command against the inventory
  pyinfra INVENTORY exec -- echo "hello world"

  # Run one or more facts against the inventory
  pyinfra INVENTORY fact server.LinuxName [server.Users]...
  pyinfra INVENTORY fact files.File path=/path/to/file...

  # Debug the inventory hosts and data
  pyinfra INVENTORY debug-inventory

Options:
  -v                                Print meta (-v), input (-vv) and output (-vvv).
  --dry                             Don't execute operations on the target hosts.
  --limit TEXT                      Restrict the target hosts by name and group name.
  --fail-percent INTEGER            % of hosts that need to fail before exiting early.
  --data TEXT                       Override data values, format key=value.
  --group-data TEXT                 Paths to load additional group data from (overrides matching keys).
  --config TEXT                     Specify config file to use (default: config.py).
  --chdir TEXT                      Set the working directory before executing.
  --sudo                            Whether to execute operations with sudo.
  --sudo-user TEXT                  Which user to sudo when sudoing.
  --use-sudo-password               Whether to use a password with sudo.
  --su-user TEXT                    Which user to su to.
  --shell-executable TEXT           Shell to use (ex: "sh", "cmd", "ps").
  --parallel INTEGER                Number of operations to run in parallel.
  --no-wait                         Don't wait between operations for hosts.
  --serial                          Run operations in serial, host by host.
  --ssh-user, --user TEXT           SSH user to connect as.
  --ssh-port, --port INTEGER        SSH port to connect to.
  --ssh-key, --key PATH             SSH Private key filename.
  --ssh-key-password, --key-password TEXT
                                    SSH Private key password.
  --ssh-password, --password TEXT   SSH password.
  --winrm-username TEXT             WINRM user to connect as.
  --winrm-password TEXT             WINRM password.
  --winrm-port TEXT                 WINRM port to connect to.
  --winrm-transport TEXT            WINRM transport for use.
  --support                         Print useful information for support and exit.
  --quiet                           Hide most pyinfra output.
  --debug                           Print debug info.
  --debug-facts                     Print facts after generating operations and exit.
  --debug-operations                Print operations after generating and exit.
  --version                         Show the version and exit.
  --help                            Show this message and exit.

root@minione:~# pyinfra 172.16.100.4 exec -- hostnamectl set-hostname client0

--> Loading config...

--> Loading inventory...

--> Connecting to hosts...
    No host key for 172.16.100.4 found in known_hosts
    [172.16.100.4] Connected
    [172.16.100.4] Ready: shell

--> Proposed changes:
    Ungrouped:
    [172.16.100.4] Operations: 1 Change: 1 No change: 0   

--> Beginning operation run...
--> Starting operation: Server/Shell (hostnamectl set-hostname client0)
    [172.16.100.4] Success

--> Results:
    Ungrouped:
    [172.16.100.4] Changed: 1 No change: 0 Errors: 0   


root@minione:~# pyinfra 172.16.100.3 exec -- hostnamectl set-hostname client1

--> Loading config...

--> Loading inventory...

--> Connecting to hosts...
    No host key for 172.16.100.3 found in known_hosts
    [172.16.100.3] Connected
    [172.16.100.3] Ready: shell

--> Proposed changes:
    Ungrouped:
    [172.16.100.3] Operations: 1 Change: 1 No change: 0   

--> Beginning operation run...
--> Starting operation: Server/Shell (hostnamectl set-hostname client1)
    [172.16.100.3] Success

--> Results:
    Ungrouped:
    [172.16.100.3] Changed: 1 No change: 0 Errors: 0  

root@minione:~# pyinfra 172.16.100.5 exec -- hostnamectl set-hostname client2

--> Loading config...

--> Loading inventory...

--> Connecting to hosts...
    No host key for 172.16.100.5 found in known_hosts
    [172.16.100.5] Connected
    [172.16.100.5] Ready: shell

--> Proposed changes:
    Ungrouped:
    [172.16.100.5] Operations: 1 Change: 1 No change: 0   

--> Beginning operation run...
--> Starting operation: Server/Shell (hostnamectl set-hostname client2)
    [172.16.100.5] Success

--> Results:
    Ungrouped:
    [172.16.100.5] Changed: 1 No change: 0 Errors: 0  

This is then used to install the Docker engine on these virtual machines:

root@minione:~# cat inventory.py 

my_hosts = ["172.16.100.4", "172.16.100.3", "172.16.100.5"]

root@minione:~# pyinfra inventory.py exec -- "curl -fsSL https://get.docker.com | sh -"

--> Loading config...

--> Loading inventory...

--> Connecting to hosts...
    No host key for 172.16.100.5 found in known_hosts
    No host key for 172.16.100.3 found in known_hosts
    [172.16.100.5] Connected
    No host key for 172.16.100.4 found in known_hosts
    [172.16.100.4] Connected
    [172.16.100.3] Connected
    [172.16.100.4] Ready: shell
    [172.16.100.3] Ready: shell
    [172.16.100.5] Ready: shell

--> Proposed changes:
    Groups: inventory / my_hosts
    [172.16.100.4] Operations: 1 Change: 1 No change: 0   
    [172.16.100.3] Operations: 1 Change: 1 No change: 0   
    [172.16.100.5] Operations: 1 Change: 1 No change: 0   

--> Beginning operation run...
--> Starting operation: Server/Shell (curl -fsSL https://get.docker.com | sh -)
[172.16.100.5] # Executing docker install script, commit: 66474034547a96caa0a25be56051ff8b726a1b28
[172.16.100.5] + sh -c apt-get update -qq >/dev/null
[172.16.100.4] # Executing docker install script, commit: 66474034547a96caa0a25be56051ff8b726a1b28
[172.16.100.4] + sh -c apt-get update -qq >/dev/null
[172.16.100.3] # Executing docker install script, commit: 66474034547a96caa0a25be56051ff8b726a1b28
[172.16.100.3] + sh -c apt-get update -qq >/dev/null
[172.16.100.4] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
[172.16.100.5] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
[172.16.100.3] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq apt-transport-https ca-certificates curl >/dev/null
[172.16.100.5] + sh -c mkdir -p /etc/apt/keyrings && chmod -R 0755 /etc/apt/keyrings
[172.16.100.5] + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
[172.16.100.5] + sh -c chmod a+r /etc/apt/keyrings/docker.gpg
[172.16.100.5] + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
[172.16.100.5] + sh -c apt-get update -qq >/dev/null
[172.16.100.4] + sh -c mkdir -p /etc/apt/keyrings && chmod -R 0755 /etc/apt/keyrings
[172.16.100.3] + sh -c mkdir -p /etc/apt/keyrings && chmod -R 0755 /etc/apt/keyrings
[172.16.100.4] + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
[172.16.100.3] + sh -c curl -fsSL "https://download.docker.com/linux/ubuntu/gpg" | gpg --dearmor --yes -o /etc/apt/keyrings/docker.gpg
[172.16.100.3] + sh -c chmod a+r /etc/apt/keyrings/docker.gpg
[172.16.100.3] + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
[172.16.100.3] + sh -c apt-get update -qq >/dev/null
[172.16.100.4] + sh -c chmod a+r /etc/apt/keyrings/docker.gpg
[172.16.100.4] + sh -c echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable" > /etc/apt/sources.list.d/docker.list
[172.16.100.4] + sh -c apt-get update -qq >/dev/null
[172.16.100.3] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null
[172.16.100.4] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null
[172.16.100.5] + sh -c DEBIAN_FRONTEND=noninteractive apt-get install -y -qq docker-ce docker-ce-cli containerd.io docker-scan-plugin docker-compose-plugin docker-ce-rootless-extras docker-buildx-plugin >/dev/null
[172.16.100.5] + sh -c docker version
[172.16.100.4] + sh -c docker version
[172.16.100.3] + sh -c docker version
[172.16.100.5] Client: Docker Engine - Community
[172.16.100.5] Version: 23.0.1
[172.16.100.5] API version: 1.42
[172.16.100.5] Go version: go1.19.5
[172.16.100.5] Git commit: a5ee5b1
[172.16.100.5] Built: Thu Feb 9 19:47:01 2023
[172.16.100.5] OS/Arch: linux/amd64
[172.16.100.5] Context: default
[172.16.100.5] 
[172.16.100.5] Server: Docker Engine - Community
[172.16.100.5] Engine:
[172.16.100.5] Version: 23.0.1
[172.16.100.5] API version: 1.42 (minimum version 1.12)
[172.16.100.5] Go version: go1.19.5
[172.16.100.5] Git commit: bc3805a
[172.16.100.5] Built: Thu Feb 9 19:47:01 2023
[172.16.100.5] OS/Arch: linux/amd64
[172.16.100.5] Experimental: false
[172.16.100.5] containerd:
[172.16.100.5] Version: 1.6.18
[172.16.100.5] GitCommit: 2456e983eb9e37e47538f59ea18f2043c9a73640
[172.16.100.5] runc:
[172.16.100.5] Version: 1.1.4
[172.16.100.5] GitCommit: v1.1.4-0-g5fd4c4d
[172.16.100.5] docker-init:
[172.16.100.5] Version: 0.19.0
[172.16.100.5] GitCommit: de40ad0
[172.16.100.5] 
[172.16.100.5] ================================================================================
[172.16.100.5] 
[172.16.100.5] To run Docker as a non-privileged user, consider setting up the
[172.16.100.5] Docker daemon in rootless mode for your user:
[172.16.100.5] 
[172.16.100.5] dockerd-rootless-setuptool.sh install
[172.16.100.5] 
[172.16.100.5] Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
[172.16.100.5] 
[172.16.100.5] 
[172.16.100.5] To run the Docker daemon as a fully privileged service, but granting non-root
[172.16.100.5] users access, refer to https://docs.docker.com/go/daemon-access/
[172.16.100.5] 
[172.16.100.5] WARNING: Access to the remote API on a privileged Docker daemon is equivalent
[172.16.100.5] to root access on the host. Refer to the 'Docker daemon attack surface'
[172.16.100.5] documentation for details: https://docs.docker.com/go/attack-surface/
[172.16.100.5] 
[172.16.100.5] ================================================================================
[172.16.100.5] 
    [172.16.100.5] Success
[172.16.100.4] Client: Docker Engine - Community
[172.16.100.4] Version: 23.0.1
[172.16.100.4] API version: 1.42
[172.16.100.4] Go version: go1.19.5
[172.16.100.4] Git commit: a5ee5b1
[172.16.100.4] Built: Thu Feb 9 19:47:01 2023
[172.16.100.4] OS/Arch: linux/amd64
[172.16.100.4] Context: default
[172.16.100.4] 
[172.16.100.4] Server: Docker Engine - Community
[172.16.100.4] Engine:
[172.16.100.4] Version: 23.0.1
[172.16.100.4] API version: 1.42 (minimum version 1.12)
[172.16.100.4] Go version: go1.19.5
[172.16.100.4] Git commit: bc3805a
[172.16.100.4] Built: Thu Feb 9 19:47:01 2023
[172.16.100.4] OS/Arch: linux/amd64
[172.16.100.4] Experimental: false
[172.16.100.4] containerd:
[172.16.100.4] Version: 1.6.18
[172.16.100.4] GitCommit: 2456e983eb9e37e47538f59ea18f2043c9a73640
[172.16.100.4] runc:
[172.16.100.4] Version: 1.1.4
[172.16.100.4] GitCommit: v1.1.4-0-g5fd4c4d
[172.16.100.4] docker-init:
[172.16.100.4] Version: 0.19.0
[172.16.100.4] GitCommit: de40ad0
[172.16.100.4] 
[172.16.100.4] ================================================================================
[172.16.100.4] 
[172.16.100.4] To run Docker as a non-privileged user, consider setting up the
[172.16.100.4] Docker daemon in rootless mode for your user:
[172.16.100.4] 
[172.16.100.4] dockerd-rootless-setuptool.sh install
[172.16.100.4] 
[172.16.100.4] Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
[172.16.100.4] 
[172.16.100.4] 
[172.16.100.4] To run the Docker daemon as a fully privileged service, but granting non-root
[172.16.100.4] users access, refer to https://docs.docker.com/go/daemon-access/
[172.16.100.4] 
[172.16.100.4] WARNING: Access to the remote API on a privileged Docker daemon is equivalent
[172.16.100.4] to root access on the host. Refer to the 'Docker daemon attack surface'
[172.16.100.4] documentation for details: https://docs.docker.com/go/attack-surface/
[172.16.100.4] 
[172.16.100.4] ================================================================================
[172.16.100.4] 
    [172.16.100.4] Success
[172.16.100.3] Client: Docker Engine - Community
[172.16.100.3] Version: 23.0.1
[172.16.100.3] API version: 1.42
[172.16.100.3] Go version: go1.19.5
[172.16.100.3] Git commit: a5ee5b1
[172.16.100.3] Built: Thu Feb 9 19:47:01 2023
[172.16.100.3] OS/Arch: linux/amd64
[172.16.100.3] Context: default
[172.16.100.3] 
[172.16.100.3] Server: Docker Engine - Community
[172.16.100.3] Engine:
[172.16.100.3] Version: 23.0.1
[172.16.100.3] API version: 1.42 (minimum version 1.12)
[172.16.100.3] Go version: go1.19.5
[172.16.100.3] Git commit: bc3805a
[172.16.100.3] Built: Thu Feb 9 19:47:01 2023
[172.16.100.3] OS/Arch: linux/amd64
[172.16.100.3] Experimental: false
[172.16.100.3] containerd:
[172.16.100.3] Version: 1.6.18
[172.16.100.3] GitCommit: 2456e983eb9e37e47538f59ea18f2043c9a73640
[172.16.100.3] runc:
[172.16.100.3] Version: 1.1.4
[172.16.100.3] GitCommit: v1.1.4-0-g5fd4c4d
[172.16.100.3] docker-init:
[172.16.100.3] Version: 0.19.0
[172.16.100.3] GitCommit: de40ad0
[172.16.100.3] 
[172.16.100.3] ================================================================================
[172.16.100.3] 
[172.16.100.3] To run Docker as a non-privileged user, consider setting up the
[172.16.100.3] Docker daemon in rootless mode for your user:
[172.16.100.3] 
[172.16.100.3] dockerd-rootless-setuptool.sh install
[172.16.100.3] 
[172.16.100.3] Visit https://docs.docker.com/go/rootless/ to learn about rootless mode.
[172.16.100.3] 
[172.16.100.3] 
[172.16.100.3] To run the Docker daemon as a fully privileged service, but granting non-root
[172.16.100.3] users access, refer to https://docs.docker.com/go/daemon-access/
[172.16.100.3] 
[172.16.100.3] WARNING: Access to the remote API on a privileged Docker daemon is equivalent
[172.16.100.3] to root access on the host. Refer to the 'Docker daemon attack surface'
[172.16.100.3] documentation for details: https://docs.docker.com/go/attack-surface/
[172.16.100.3] 
[172.16.100.3] ================================================================================
[172.16.100.3] 
    [172.16.100.3] Success

--> Results:
    Groups: inventory / my_hosts
    [172.16.100.4] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.3] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.5] Changed: 1 No change: 0 Errors: 0 

root@minione:~# pyinfra inventory.py exec -- "docker ps -a"

--> Loading config...

--> Loading inventory...

--> Connecting to hosts...
    No host key for 172.16.100.5 found in known_hosts
    No host key for 172.16.100.3 found in known_hosts
    No host key for 172.16.100.4 found in known_hosts
    [172.16.100.5] Connected
    [172.16.100.3] Connected
    [172.16.100.4] Connected
    [172.16.100.5] Ready: shell
    [172.16.100.3] Ready: shell
    [172.16.100.4] Ready: shell

--> Proposed changes:
    Groups: inventory / my_hosts
    [172.16.100.5] Operations: 1 Change: 1 No change: 0   
    [172.16.100.3] Operations: 1 Change: 1 No change: 0   
    [172.16.100.4] Operations: 1 Change: 1 No change: 0   

--> Beginning operation run...
--> Starting operation: Server/Shell (docker ps -a)
[172.16.100.5] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [172.16.100.5] Success
[172.16.100.4] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [172.16.100.4] Success
[172.16.100.3] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
    [172.16.100.3] Success

--> Results:
    Groups: inventory / my_hosts
    [172.16.100.5] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.3] Changed: 1 No change: 0 Errors: 0   
    [172.16.100.4] Changed: 1 No change: 0 Errors: 0 

Then deploying the Consul and Nomad clients, still with hashi-up:

root@minione:~# export SERVER_IP=172.16.100.2
root@minione:~# export AGENT_1_IP=172.16.100.4
root@minione:~# export AGENT_2_IP=172.16.100.3
root@minione:~# export AGENT_3_IP=172.16.100.5

root@minione:~# hashi-up consul install --ssh-target-addr $AGENT_1_IP --ssh-target-user root --retry-join $SERVER_IP --ssh-target-key .ssh/id_rsa --version 1.15.1

[INFO] Uploading generated Consul configuration ...
[INFO] Installing Consul ...
[INFO] -> Creating user named 'consul'
[INFO] -> Copying configuration files
[INFO] -> Downloading consul_1.15.1_linux_amd64.zip
[INFO] -> Downloading consul_1.15.1_SHA256SUMS
[INFO] -> Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
[INFO] -> Unpacking consul_1.15.1_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/consul.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/consul.service → /etc/systemd/system/consul.service.
[INFO] -> Starting systemd service
[INFO] Done.

root@minione:~# hashi-up consul install --ssh-target-addr $AGENT_2_IP --ssh-target-user root --retry-join $SERVER_IP --ssh-target-key .ssh/id_rsa --version 1.15.1

[INFO] Uploading generated Consul configuration ...
[INFO] Installing Consul ...
[INFO] -> Creating user named 'consul'
[INFO] -> Copying configuration files
[INFO] -> Downloading consul_1.15.1_linux_amd64.zip
[INFO] -> Downloading consul_1.15.1_SHA256SUMS
[INFO] -> Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
[INFO] -> Unpacking consul_1.15.1_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/consul.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/consul.service → /etc/systemd/system/consul.service.
[INFO] -> Starting systemd service
[INFO] Done.

root@minione:~# hashi-up consul install --ssh-target-addr $AGENT_3_IP --ssh-target-user root --retry-join $SERVER_IP --ssh-target-key .ssh/id_rsa --version 1.15.1

[INFO] Uploading generated Consul configuration ...
[INFO] Installing Consul ...
[INFO] -> Creating user named 'consul'
[INFO] -> Copying configuration files
[INFO] -> Downloading consul_1.15.1_linux_amd64.zip
[INFO] -> Downloading consul_1.15.1_SHA256SUMS
[INFO] -> Verifying downloaded consul_1.15.1_linux_amd64.zip
consul_1.15.1_linux_amd64.zip: OK
[INFO] -> Unpacking consul_1.15.1_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/consul.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/consul.service → /etc/systemd/system/consul.service.
[INFO] -> Starting systemd service
[INFO] Done.

root@minione:~# hashi-up nomad install --ssh-target-addr $AGENT_1_IP --ssh-target-user root --client --retry-join $SERVER_IP --ssh-target-key .ssh/id_rsa --version 1.5.0

[INFO] Uploading generated Nomad configuration ...
[INFO] Installing Nomad ...
[INFO] -> Copying configuration files
[INFO] -> Downloading nomad_1.5.0_linux_amd64.zip
[INFO] -> Downloading nomad_1.5.0_SHA256SUMS
[INFO] -> Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
[INFO] -> Unpacking nomad_1.5.0_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/nomad.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
[INFO] -> Starting systemd service
[INFO] Done.

root@minione:~# hashi-up nomad install --ssh-target-addr $AGENT_2_IP --ssh-target-user root --client --retry-join $SERVER_IP --ssh-target-key .ssh/id_rsa --version 1.5.0

[INFO] Uploading generated Nomad configuration ...
[INFO] Installing Nomad ...
[INFO] -> Copying configuration files
[INFO] -> Downloading nomad_1.5.0_linux_amd64.zip
[INFO] -> Downloading nomad_1.5.0_SHA256SUMS
[INFO] -> Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
[INFO] -> Unpacking nomad_1.5.0_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/nomad.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
[INFO] -> Starting systemd service
[INFO] Done.

root@minione:~# hashi-up nomad install --ssh-target-addr $AGENT_3_IP --ssh-target-user root --client --retry-join $SERVER_IP --ssh-target-key .ssh/id_rsa --version 1.5.0

[INFO] Uploading generated Nomad configuration ...
[INFO] Installing Nomad ...
[INFO] -> Copying configuration files
[INFO] -> Downloading nomad_1.5.0_linux_amd64.zip
[INFO] -> Downloading nomad_1.5.0_SHA256SUMS
[INFO] -> Verifying downloaded nomad_1.5.0_linux_amd64.zip
nomad_1.5.0_linux_amd64.zip: OK
[INFO] -> Unpacking nomad_1.5.0_linux_amd64.zip
[INFO] -> Adding systemd service file /etc/systemd/system/nomad.service
[INFO] -> Enabling systemd service
Created symlink /etc/systemd/system/multi-user.target.wants/nomad.service → /etc/systemd/system/nomad.service.
[INFO] -> Starting systemd service
[INFO] Done.

The Nomad clients, along with Consul, are now present:
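A quick hedged way to confirm this from the front-end node, reusing the server address from above (it assumes the consul and nomad CLIs are available there):

root@minione:~# consul members -http-addr=http://172.16.100.2:8500     # the server plus the three client agents
root@minione:~# nomad node status -address=http://172.16.100.2:4646   # client0, client1 and client2 ready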

Launching Fabio, which integrates natively with Consul and provides an optional web interface for visualizing routing. Fabio's main use case is distributing incoming HTTP(S) and TCP requests from the Internet to the frontend services that can handle them.

Load Balancing with Fabio | Nomad | HashiCorp Developer

Here, I run Fabio as a system job so that it can route incoming traffic evenly to a group of servers, regardless of which client nodes it runs on. All the client nodes can therefore be placed behind a load balancer to give the end user a single point of access.

job "fabio" {
  datacenters = ["dc1"]
  type = "system"

  group "fabio" {
    network {
      port "lb" {
        static = 9999
      }
      port "ui" {
        static = 9998
      }
    }
    task "fabio" {
      driver = "docker"
      config {
        image = "fabiolb/fabio"
        network_mode = "host"
        ports = ["lb","ui"]
      }

      resources {
        cpu = 200
        memory = 128
      }
    }
  }
}
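Assuming the job definition above is saved as fabio.nomad (a hypothetical filename) and NOMAD_ADDR still points at the server as in the earlier sketch, the job can be submitted from the front-end node:

root@minione:~# nomad job run fabio.nomad
root@minione:~# nomad job status fabio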

Fabio's web interface is available on each of the three client nodes on TCP port 9998:

Example with this job, using a base Nginx image to create three associated web servers:

job "webserver" {
  datacenters = ["dc1"]
  type = "service"

  group "webserver" {
    count = 3
    network {
      port "http" {
        to = 8080
      }
    }

    service {
      name = "nginx-webserver"
      tags = ["urlprefix-/"]
      port = "http"
      check {
        name = "alive"
        type = "http"
        path = "/"
        interval = "10s"
        timeout = "2s"
      }
    }

    restart {
      attempts = 2
      interval = "30m"
      delay = "15s"
      mode = "fail"
    }

    task "nginx" {
      driver = "docker"
      config {
        image = "bitnami/nginx:latest"
        ports = ["http"]
      }
    }
  }
}
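Submitted the same way (nomad job run webserver.nomad, again a hypothetical filename), the three nginx-webserver instances register in Consul and Fabio routes the urlprefix-/ tag to them, so the site should answer through the load-balancer port of any client node, e.g.:

root@minione:~# nomad job run webserver.nomad
root@minione:~# curl -I http://172.16.100.4:9999/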

Fabio has detected the deployment of these three Nginx servers:

and provides access ports to them:

Testing an Alpine Linux instance with an OpenSSH server, which mimics a virtual machine inside Nomad, using this Docker image:

Docker

job "ssh" {
  datacenters = ["dc1"]
  type = "service"

  group "ssh" {
    count = 1
    network {
      port "ssh" {
        to = 22
      }
    }

    service {
      name = "sshd"
      port = "ssh"
    }

    task "openssh" {
      driver = "docker"
      config {
        image = "mcas/alpine-sshd:latest"
        ports = ["ssh"]
      }
    }
  }
}

Running the job from the Nomad server dashboard:

Consul sees all of the instances deployed in Nomad:

and gives me the access port information for this Alpine Linux instance:
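The same port information can also be retrieved from Consul's DNS interface on the server, for instance (a sketch; the sshd service name comes from the job above):

root@minione:~# dig @172.16.100.2 -p 8600 sshd.service.consul SRV +short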

Connecting locally to this instance over SSH using that access port:

root@minione:~# ssh -p 26053 alpine@172.16.100.5
Warning: Permanently added '[172.16.100.5]:26053' (ED25519) to the list of known hosts.
alpine@172.16.100.5's password: 
Welcome to Alpine!

The Alpine Wiki contains a large amount of how-to guides and general
information about administrating Alpine systems.
See <https://wiki.alpinelinux.org/>.

You can setup the system with the command: setup-alpine

You may change this message by editing /etc/motd.

1900f3e8b78a:~$ ps aux
PID USER TIME COMMAND
    1 root 0:00 sshd: /usr/sbin/sshd -D -e [listener] 0 of 10-100 startups
   10 root 0:00 sshd: alpine [priv]
   12 alpine 0:00 sshd: alpine@pts/0
   13 alpine 0:00 -bash
   14 alpine 0:00 ps aux
1900f3e8b78a:~$ free -m
              total used free shared buff/cache available
Mem: 5934 348 4982 1 604 5346
Swap: 0 0 0
1900f3e8b78a:~$ df -h
Filesystem Size Used Available Use% Mounted on
overlay 19.2G 2.3G 16.9G 12% /
tmpfs 64.0M 0 64.0M 0% /dev
shm 64.0M 0 64.0M 0% /dev/shm
/dev/vda1 19.2G 2.3G 16.9G 12% /alloc
/dev/vda1 19.2G 2.3G 16.9G 12% /local
tmpfs 1.0M 0 1.0M 0% /secrets
/dev/vda1 19.2G 2.3G 16.9G 12% /etc/resolv.conf
/dev/vda1 19.2G 2.3G 16.9G 12% /etc/hostname
/dev/vda1 19.2G 2.3G 16.9G 12% /etc/hosts
tmpfs 2.9G 0 2.9G 0% /proc/acpi
tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs 64.0M 0 64.0M 0% /proc/keys
tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs 2.9G 0 2.9G 0% /proc/scsi
tmpfs 2.9G 0 2.9G 0% /sys/firmware

The perennial FranceConnect (FC) demonstrator then runs locally on it as if it were a virtual machine:

GitHub - france-connect/service-provider-example: An implementation example of the FranceConnect button on a service provider's website.
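Inside the container, the demonstrator is set up with the usual Node.js steps (a hedged sketch: it assumes Node.js, npm and pm2 are already available in the container, and that start.sh is just a small wrapper around npm start; fcdemo3 matches the directory seen in the session below):

1900f3e8b78a:~$ git clone https://github.com/france-connect/service-provider-example.git fcdemo3
1900f3e8b78a:~$ cd fcdemo3 && npm install
1900f3e8b78a:~/fcdemo3$ printf 'npm start\n' > start.sh && chmod +x start.sh   # assumed wrapper, launched by pm2 below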

1900f3e8b78a:~/fcdemo3$ pm2 start start.sh 

                        -------------

__/\\\\\\\\\\\\\____ /\\\\ ____________/\\\\____ /\\\\\\\\\ _____
 _\/\\\/////////\\\_\/\\\\\\ ________/\\\\\\__ /\\\///////\\\___
  _\/\\\ _______\/\\\_\/\\\//\\\____ /\\\//\\\_\/// ______\//\\\__
   _\/\\\\\\\\\\\\\/ __\/\\\\///\\\/\\\/_\/\\\___________ /\\\/___
    _\/\\\///////// ____\/\\\__ \///\\\/ ___\/\\\________ /\\\// _____
     _\/\\\ _____________\/\\\____ \/// _____\/\\\_____ /\\\// ________
      _\/\\\ _____________\/\\\_____________ \/\\\ ___/\\\/___________
       _\/\\\ _____________\/\\\_____________ \/\\\__/\\\\\\\\\\\\\\\_
        _\/// ______________\///______________ \/// __\///////////////__

                          Runtime Edition

        PM2 is a Production Process Manager for Node.js applications
                     with a built-in Load Balancer.

                Start and Daemonize any application:
                $ pm2 start app.js

                Load Balance 4 instances of api.js:
                $ pm2 start api.js -i 4

                Monitor in production:
                $ pm2 monitor

                Make pm2 auto-boot at server restart:
                $ pm2 startup

                To go further checkout:
                http://pm2.io/

                        -------------

[PM2] Spawning PM2 daemon with pm2_home=/home/alpine/.pm2
[PM2] PM2 Successfully daemonized
[PM2] Starting /home/alpine/fcdemo3/start.sh in fork_mode (1 instance)
[PM2] Done.
┌─────┬──────────┬─────────────┬─────────┬─────────┬──────────┬────────┬──────┬───────────┬──────────┬──────────┬──────────┬──────────┐
│ id │ name │ namespace │ version │ mode │ pid │ uptime │ ↺ │ status │ cpu │ mem │ user │ watching │
├─────┼──────────┼─────────────┼─────────┼─────────┼──────────┼────────┼──────┼───────────┼──────────┼──────────┼──────────┼──────────┤
│ 0 │ start │ default │ 0.0.0 │ fork │ 466 │ 0s │ 0 │ online │ 0% │ 1.7mb │ alpine │ disabled │
└─────┴──────────┴─────────────┴─────────┴─────────┴──────────┴────────┴──────┴───────────┴──────────┴──────────┴──────────┴──────────┘

1900f3e8b78a:~/fcdemo3$ pm2 log start

[TAILING] Tailing last 15 lines for [start] process (change the value with --lines option)
/home/alpine/.pm2/logs/start-error.log last 15 lines:
/home/alpine/.pm2/logs/start-out.log last 15 lines:
0|start | 
0|start | > service-provider-mock@0.0.0 start
0|start | > node ./bin/www
0|start | 
0|start | Server listening on http://0.0.0.0:3000

1900f3e8b78a:~$ curl http://localhost:3000
<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css" integrity="sha256-zIG416V1ynj3Wgju/scU80KAEWOsO5rRLfVyRDuOv7Q=" crossorigin="anonymous" />
    <title>Démonstrateur Fournisseur de Service</title>
</head>

<body>
<nav class="navbar" role="navigation" aria-label="main navigation">
    <div class="navbar-start">
        <div class="navbar-brand">
            <a class="navbar-item" href="/">
                <img src="/img/fc_logo_v2.png" alt="Démonstrateur Fournisseur de Service" height="28">
            </a>
        </div>
        <a href="/" class="navbar-item">
            Home
        </a>
    </div>
    <div class="navbar-end">
        <div class="navbar-item">

                <div class="buttons">
                    <a class="button is-light" href="/login">Se connecter</a>
                </div>

        </div>
    </div>
</nav>

<section class="hero is-info is-medium">
    <div class="hero-body">
        <div class="container">
            <h1 class="title">
                Bienvenue sur le démonstrateur de fournisseur de service
            </h1>
            <h2 class="subtitle">
                Cliquez sur "se connecter" pour vous connecter via <strong>FranceConnect</strong>
            </h2>
        </div>
    </div>
</section>

As this article has shown, hashi-up can also be used to provision a cluster with Vault, or to run the Nomad and Consul cluster in HA mode:
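For the HA variant, the same flags simply target several server nodes (a sketch with assumed addresses in SERVER_1_IP, SERVER_2_IP and SERVER_3_IP; the command would be repeated for each server, and the equivalent done with hashi-up nomad install --server --bootstrap-expect 3):

root@minione:~# hashi-up consul install --ssh-target-addr $SERVER_1_IP --ssh-target-user root --ssh-target-key .ssh/id_rsa \
    --server --bootstrap-expect 3 --client-addr 0.0.0.0 \
    --retry-join $SERVER_1_IP --retry-join $SERVER_2_IP --retry-join $SERVER_3_IP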

This shared Nomad and Consul cluster could also have been used with Waypoint to provision its jobs, as in this example HCL file …

Waypoint | HashiCorp Developer

Deploy an Application to Nomad | Waypoint | HashiCorp Developer

# Copyright (c) HashiCorp, Inc.
# SPDX-License-Identifier: MPL-2.0

project = "nomad-nodejs"

app "nomad-nodejs-web" {

  build {
    use "pack" {}
    registry {
      use "docker" {
        image = "nomad-nodejs-web"
        tag = "1"
        local = true
      }
    }
  }

  deploy {
    use "nomad" {
      // these options both default to the values shown, but are left here to
      // show they are configurable
      datacenter = "dc1"
      namespace = "default"
    }
  }

}
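From a directory containing this waypoint.hcl and the application source, the usual workflow would then be (a sketch; it assumes a Waypoint server or runner is already installed and can reach the Nomad and Docker APIs):

$ waypoint init
$ waypoint up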

Or, going much further, with this Blue/Green deployment example …

To be continued!
