Setup a Docker Swarm cluster

Kris Lamoureux 2023-11-05 02:33:04 -05:00
commit 0ac27f42f1
Signed by: kris
GPG Key ID: 3EDA9C3441EDA925
9 changed files with 304 additions and 0 deletions

.gitignore vendored Normal file

@@ -0,0 +1,5 @@
*.log
nodes.rb
.settings.yml
.swarm
.vagrant

LICENSE Normal file

@@ -0,0 +1,12 @@
Copyright (C) 2023 by Kris Lamoureux <kris@lamoureux.io>
Permission to use, copy, modify, and/or distribute this software for any
purpose with or without fee is hereby granted.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH
REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,
INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM
LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR
OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.

Makefile Normal file

@@ -0,0 +1,10 @@
.PHONY: all vagrant clean

all: vagrant

vagrant:
	vagrant up --no-destroy-on-error --no-color | tee ./vagrantup.log

clean:
	vagrant destroy -f --no-color
	rm -rf .vagrant vagrantup.log .swarm

README.md Normal file

@@ -0,0 +1,120 @@
# Vagrant Docker Swarm Environment
**Warning: For development only, do not use for production**
This repository contains a Vagrantfile and the necessary configuration files
for automating the setup of a Docker Swarm cluster using Vagrant's shell
provisioning. You can easily override the default settings for the Vagrant
environment and the Swarm cluster to suit your needs. Customize global
settings via the `.settings.yml` file, and specify per-node overrides using the
`NODES` Ruby hash in `nodes.rb`.
By default, `make` will create three Debian Stable x86_64 Docker Swarm nodes.
Each node will have:
- 2 threads/cores (depending on architecture)
- 2 GB of RAM
- ~1 GB of storage
**Warning: Make sure your machine has enough resources or adjust override settings.**
# Quick Start
Get your Vagrant Docker Swarm cluster up and running with these simple steps:
- Set up the Cluster
Run the following command to clean any previous setup and start fresh:
```bash
make clean && make
```
- SSH into a Node
Access the first node (`node1`) in your cluster:
```bash
vagrant ssh node1
```
- Verify Cluster Setup
List all the nodes to ensure they've joined the cluster successfully:
```bash
docker node ls
```
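If you prefer a one-off check without opening an interactive session, `vagrant ssh` also accepts a command to run on the node:
```bash
# Run the same check non-interactively from the host
vagrant ssh node1 -c "docker node ls"
```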
# Global Overrides
If you wish to override the default settings on a global level,
you can do so by creating a `.settings.yml` file based on the provided
`example-.settings.yml` file:
```bash
cp example-.settings.yml .settings.yml
```
Once you have copied `example-.settings.yml` to `.settings.yml`, you can edit
it to override the default settings. The available settings are listed below,
followed by a minimal example:
## Vagrant Settings Overrides
- `VAGRANT_BOX`
- Default: `debian/bookworm64`
- Tested primarily on Debian Stable x86_64 (currently Bookworm)
- `VAGRANT_CPU`
- Default: `2`
- Two threads or cores per node, depending on CPU architecture
- `VAGRANT_MEM`
- Default: `2048`
- Two GB of RAM per node
- `SSH_FORWARD`
- Default: `false`
- Enable this if you need to forward SSH agents to the Vagrant machine(s)
## Docker Swarm Settings Overrides
- `SWARM_NODES`
- Default: `3`
- The total number of nodes in your Docker Swarm cluster
- `JOIN_TIMEOUT`
- Default: `60`
- Timeout in seconds for nodes to obtain a swarm join token
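As a rough sketch (the values here are purely illustrative), a minimal `.settings.yml` that only raises the per-node memory and the node count might be created like this; any key you omit keeps its default:
```bash
# Illustrative override file; omit any key to keep its default
cat > .settings.yml <<'EOF'
VAGRANT_MEM: 4096
SWARM_NODES: 5
EOF
```
Editing a copy of `example-.settings.yml` achieves the same result.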
# Per-Node Overrides
The naming convention for nodes follows a specific pattern: `nodeX`, where `X`
is a number corresponding to the node's position within the cluster. This
convention follows directly from the iteration logic in the `Vagrantfile`,
which loops over the range `Array(1..SWARM_NODES)`; each iteration defines one
node, and the loop counter appears in the node name (`nodeX`).
The overrides, if specified in `nodes.rb`, take the highest precedence,
followed by the overrides in `.settings.yml`, and lastly, the defaults hard
coded in the `Vagrantfile` itself. This hierarchy allows for a flexible
configuration where global overrides can be specified in `.settings.yml`, and
more granular, per-node overrides can be defined in `nodes.rb`. If a particular
setting is not overridden in either `.settings.yml` or `nodes.rb`, the default
value from the `Vagrantfile` is used.
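To confirm which values actually took effect on a given node, one quick sanity check from the host (using standard `vagrant ssh` and common Linux tools) is:
```bash
# Show the CPU count and memory (in MB) the node ended up with
vagrant ssh node2 -c "nproc && free -m"
```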
If you wish to override the default settings on a per-node level, you can do so
by creating a `nodes.rb` file based on the provided `example-nodes.rb` file:
```bash
cp example-nodes.rb nodes.rb
```
Once you have copied `example-nodes.rb` to `nodes.rb`, you can edit it to
override the default settings. The settings available per node are:
- `BOX`
- Default: `debian/bookworm64` (or as overridden in `.settings.yml`)
- Vagrant box or image to be used for the node.
- `CPU`
- Default: `2` (or as overridden in `.settings.yml`)
- Defines the number of CPU cores or threads (depending on architecture).
- `MEM`
- Default: `2048` (2 GB) (or as overridden in `.settings.yml`)
- Specifies the amount of memory (in MB) allocated to the node.
- `SSH`
- Default: `false` (or as overridden in `.settings.yml`)
- Enable this if you need to forward SSH agents to the Vagrant machine
All settings are optional; you can override as many or as few of them as you
like on any node. A minimal sketch follows.
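For example (values are illustrative; the committed `example-nodes.rb` is the fuller reference), a `nodes.rb` that only gives `node1` extra memory can be this small:
```bash
# Only node1 is overridden; the other nodes keep the global or default values
cat > nodes.rb <<'EOF'
NODES = {
  'node1' => { 'MEM' => 4096 }
}
EOF
```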

Vagrantfile vendored Normal file

@@ -0,0 +1,59 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Load override settings
require 'yaml'
settings_path = '.settings.yml'
settings = {}
if File.exist?(settings_path)
  settings = YAML.load_file(settings_path)
end
# Default Vagrant settings
VAGRANT_BOX = settings['VAGRANT_BOX'] || 'debian/bookworm64'
VAGRANT_CPU = settings['VAGRANT_CPU'] || 2
VAGRANT_MEM = settings['VAGRANT_MEM'] || 2048
SSH_FORWARD = settings['SSH_FORWARD'] || false
# Default Swarm settings
SWARM_NODES = settings['SWARM_NODES'] || 3
JOIN_TIMEOUT = settings['JOIN_TIMEOUT'] || 60
# Node settings overrides
if File.exist?('nodes.rb')
  require_relative 'nodes.rb'
else
  # Using all defaults
  NODES = {}
end
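# One Vagrant machine (nodeX) is defined per entry in this range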
HOSTS = Array(1..SWARM_NODES)
Vagrant.configure(2) do |vm_config|
  HOSTS.each do |count|
    vm_config.vm.define "node#{count}" do |config|
      config.vm.hostname = "node#{count}"
      config.vm.box = NODES.dig("node#{count}", 'BOX') || VAGRANT_BOX
      config.ssh.forward_agent =
        NODES.dig("node#{count}", 'SSH') || SSH_FORWARD

      # Libvirt
      config.vm.provider :libvirt do |virt|
        virt.memory = NODES.dig("node#{count}", 'MEM') || VAGRANT_MEM
        virt.cpus = NODES.dig("node#{count}", 'CPU') || VAGRANT_CPU
      end

      # VirtualBox
      config.vm.provider :virtualbox do |vbox|
        vbox.memory = NODES.dig("node#{count}", 'MEM') || VAGRANT_MEM
        vbox.cpus = NODES.dig("node#{count}", 'CPU') || VAGRANT_CPU
      end

      # Install and Setup Docker Swarm
      config.vm.provision "shell", inline: <<-SHELL
        export JOIN_TIMEOUT=#{JOIN_TIMEOUT}
        /bin/bash /vagrant/provision.sh
      SHELL
    end
  end
end

example-.settings.yml Normal file

@@ -0,0 +1,18 @@
########################
### Example settings ###
########################
# This configuration as-is will take 12 threads/cores and 12 GB of RAM
# Make sure you have enough resources before running something like this.
# Set per-node overrides in nodes.rb if your setup requires it
# Vagrant default global overrides
VAGRANT_BOX: debian/bookworm64
VAGRANT_CPU: 4
VAGRANT_MEM: 4096
SSH_FORWARD: true
# Swarm default overrides
SWARM_NODES: 3
JOIN_TIMEOUT: 60

example-nodes.rb Normal file

@@ -0,0 +1,34 @@
#########################
### Example overrides ###
#########################
# This configuration as-is will take 10 threads/cores and 10 GB of RAM assuming
# that .settings.yml isn't overriding the defaults. Make sure you have enough
# resources before running something like this.
# Don't forget to set SWARM_NODES in .settings.yml
# if you run more or fewer than 3 nodes
NODES = {
  # CPU/MEM heavy node
  'node1' => {
    #'BOX' => 'debian/bookworm64',
    'CPU' => 4,
    'MEM' => 4096,
    'SSH' => true
  },
  # Memory heavy node
  'node2' => {
    'BOX' => 'debian/bookworm64',
    #'CPU' => 4,
    'MEM' => 4096,
    #'SSH' => true
  },
  # CPU heavy node
  'node3' => {
    'BOX' => 'debian/bookworm64',
    'CPU' => 4,
    #'MEM' => 4096,
    #'SSH' => true
  }
}

provision.sh Executable file

@@ -0,0 +1,44 @@
#!/bin/bash
######################################
### Install and setup Docker Swarm ###
######################################
# Print commands and exit on error
set -xe
# Install Docker
which curl &>/dev/null || (apt-get update && apt-get install -y curl)
which docker &>/dev/null || curl -fsSL https://get.docker.com | sh
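# Add the provisioning user to the docker group if not already a member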
[ ! "$(id -nG "$USER" | grep -c docker)" -eq 1 ] && usermod -aG docker "$USER"
# Setup Docker Swarm
if [ ! "$(docker info | grep -c 'Swarm: active')" -eq 1 ]; then
    # Make node1 the leader; it writes the join token for the other nodes
    if [ "$(hostname)" == "node1" ]; then
        docker swarm init | grep -Eo 'docker swarm join .+:[0-9]+' > /vagrant/.swarm
    else
        # Wait up to JOIN_TIMEOUT seconds for the swarm join token before giving up
        START_TIME="$(date +%s)"

        # Initial wait
        sleep 5

        # Wait until .swarm can be found via Vagrant provider file sharing
        while [ ! -f /vagrant/.swarm ]; do
            CURRENT_TIME="$(date +%s)"
            DIFF_TIME="$((CURRENT_TIME - START_TIME))"

            # Timeout
            if [ "$DIFF_TIME" -ge "$JOIN_TIMEOUT" ]; then
                echo "[ERROR]: $(hostname) waited $DIFF_TIME/$JOIN_TIMEOUT seconds"
                exit 1
            fi

            # Waiting
            echo "Waiting ($DIFF_TIME/$JOIN_TIMEOUT seconds) for /vagrant/.swarm file"
            sleep 10
        done

        # /vagrant/.swarm file found, so join the swarm
        /bin/bash /vagrant/.swarm
    fi
fi

scratch/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
*
!.gitignore