33 Commits
kutt ... gitea

Author SHA1 Message Date
da3b0cb28b testing 2023-10-20 13:56:55 -04:00
c3b4321667 Add Gitea dev playbook and host_vars 2023-10-19 16:40:34 -04:00
d05c5d3086 Slight tweaks on Ansible output 2023-10-19 16:36:05 -04:00
ac412f16ef Simplify the "Import GPG keys" loop 2023-10-19 14:09:10 -04:00
2354a8fb8c Verify successful GPG imports 2023-10-19 13:37:35 -04:00
251a7c0dd5 Import PGP key and verify git commits 2023-10-19 02:56:36 -04:00
1d8ae8a0b6 Install ntpsec 2023-10-19 01:27:31 -04:00
6b2feaee5e Hide docker-compose secrets from diff output 2023-10-18 23:03:52 -04:00
31e0538b84 Add locale configuration tasks to base role 2023-10-18 16:32:09 -04:00
a65c4b9cf6 Handle Ansible undefined loop variable
- Default docker_compose_deploy to empty list if undefined
- Add conditional check to avoid looping through an empty list
2023-10-10 00:14:52 -04:00
7ee6e4810d Convert booleans to lowercase 2023-10-10 00:00:00 -04:00
87aa7ecf8b Add external compose support in the docker role
- Use ansible.posix.synchronize for compose.yml
- Set fact for compose service restarts
- Introduce plain Docker dev host
- Optionally verify repos via GPG before sync
- Hide docker_repos_path in .folder
- Tweak .env for conciseness
- Add --diff to Ansible in Vagrantfile
- Clean output with loop_control
- Embed GPG in base role
2023-10-09 23:47:49 -04:00
0377a5e642 Add option for private OCI registry auth 2023-09-29 22:18:59 -04:00
2e02efcbb7 Add Makefile, roles_path, and SSH tunnel variable 2023-09-26 21:14:06 -04:00
8fed63792b Ask permission for starting vagrant SSH tunnels 2023-09-16 00:04:58 -04:00
2c4fcbacc3 Introduce forward-ssh.sh method & reorganize
- Abandoned update-hosts.sh in favor of loopback SSH forwarding
- Adopted *.local.krislamo.org as a wildcard loopback domain
- Bound Traefik to ports 443/80 on Dockerbox dev
- Removed outdated Gitea config from Dockerbox
- Relocated production playbooks to a new directory
2023-09-15 23:46:45 -04:00
b81372c07a Fix the Vagrantfile for Github runners 2023-08-30 19:45:42 -04:00
9b5be29a1a Update Vagrantfile to use external settings 2023-08-21 18:46:47 -04:00
ef5aacdbbd No deploy keys without compose deploy variable 2023-07-21 23:52:18 -04:00
a635c7aa48 Add option to deploy external docker-compose stack 2023-07-20 03:51:44 -04:00
56aee460ad Limit Github actions to specific branches 2023-07-20 00:33:42 -04:00
027ba46f6b Add Github actions and remove old ansible stuff 2023-07-08 23:43:52 -04:00
48216db8f9 Updated Nextcloud settings and added cron job 2023-06-18 23:52:10 -04:00
fa1dc4acb7 Fix WireGuard firewall rule 2023-06-15 03:09:13 -04:00
228cd5795b Config adjustments for Jellyfin/Samba deployment
- Ignored .vscode
- Added firewall exclusion option
- Allowed guest access in Samba
2023-06-09 22:26:47 -04:00
74a559f1f6 Update mediaserver playbook and fix Wireguard task 2023-06-08 03:47:54 -04:00
4c2a1550c4 Adding samba and general user management 2023-06-07 02:12:17 -04:00
f02cf7b0cc Refactor docker playbook
- Removed copyright notice
- Variablize 'hosts' value in the playbook
- Install Jenkins agent before running Docker role
2023-05-08 16:26:16 -04:00
9142254a57 Improvements for ansible-linting 2023-05-04 01:44:18 -04:00
dfd93dd5f8 Updated Ansible tasks to FQCN format 2023-05-03 23:42:55 -04:00
81d2ea447a Add mediaserver, rm .gitignore, FQCN, Jellyfin
- Added development "mediaserver" playbook for testing
- rm .gitignore in roles dir since no external ansible roles are used
- Update a part of the base role to use FQCN for linting
- Added "jellyfin" role to install Jellyfin with docker-compose
- Updated Traefik to use the loopback for default web entry points
- Simplified Traefik docker-compose vars, Ansible sets defaults
2023-04-26 02:26:50 -04:00
9512212b84 Refactor Traefik deploy: docker-compose + systemd
- Replace docker_container ansible with new setup
- Add option to disable HTTPS for alternate reverse proxy use
2023-04-21 03:04:53 -04:00
c67a39982e Option to enable websockets for the noVNC console 2022-12-06 00:15:10 -05:00
93 changed files with 1127 additions and 623 deletions
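Orientation for the diffs below: the "Add external compose support in the docker role" commit above drives deployments from a per-host docker_compose_deploy list, which the docker role clones, syncs into /var/lib/compose, and wires into compose@<name> systemd units. A minimal sketch of one entry, reusing the keys from the dev host_vars further down (values are illustrative only):

docker_compose_deploy:
  - name: traefik
    url: https://github.com/krislamo/traefik
    version: 31ee724feebc1d5f91cb17ffd6892c352537f194
    enabled: true
    trusted_keys:
      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
    env:
      ENABLE: true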

39
.github/workflows/vagrant.yml vendored Normal file
View File

@@ -0,0 +1,39 @@
name: homelab-ci
on:
  push:
    branches:
      - main
      - testing
jobs:
  homelab-ci:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v3
      - name: Cache Vagrant boxes
        uses: actions/cache@v3
        with:
          path: ~/.vagrant.d/boxes
          key: ${{ runner.os }}-vagrant-${{ hashFiles('Vagrantfile') }}
          restore-keys: |
            ${{ runner.os }}-vagrant-
      - name: Install Ansible
        run: brew install ansible@7
      - name: Software Versions
        run: |
          printf "VirtualBox "
          vboxmanage --version
          vagrant --version
          export PATH="/usr/local/opt/ansible@7/bin:$PATH"
          ansible --version
      - name: Vagrant Up with Dockerbox Playbook
        run: |
          export PATH="/usr/local/opt/ansible@7/bin:$PATH"
          PLAYBOOK=dockerbox vagrant up
          vagrant ssh -c "docker ps"

15
.gitignore vendored
View File

@@ -1,13 +1,4 @@
.vagrant
.playbook
/*.yml
/*.yaml
!backup.yml
!moxie.yml
!docker.yml
!dockerbox.yml
!hypervisor.yml
!minecraft.yml
!proxy.yml
!unifi.yml
/environments/
.vagrant*
.vscode
/environments/

10
Makefile Normal file
View File

@@ -0,0 +1,10 @@
.PHONY: clean install

all: install

install:
	vagrant up --no-destroy-on-error
	sudo ./forward-ssh.sh

clean:
	vagrant destroy -f && rm -rf .vagrant

50
Vagrantfile vendored
View File

@@ -1,43 +1,41 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
SSH_FORWARD=ENV["SSH_FORWARD"]
if !(SSH_FORWARD == "true")
SSH_FORWARD = false
require 'yaml'
settings_path = '.vagrant.yml'
settings = {}
if File.exist?(settings_path)
settings = YAML.load_file(settings_path)
end
PLAYBOOK=ENV["PLAYBOOK"]
if !PLAYBOOK
if File.exist?('.playbook')
PLAYBOOK = IO.read('.playbook').split("\n")[0]
end
VAGRANT_BOX = settings['VAGRANT_BOX'] || 'debian/bookworm64'
VAGRANT_CPUS = settings['VAGRANT_CPUS'] || 2
VAGRANT_MEM = settings['VAGRANT_MEM'] || 2048
SSH_FORWARD = settings['SSH_FORWARD'] || false
if !PLAYBOOK || PLAYBOOK.empty?
PLAYBOOK = "\nERROR: Set env PLAYBOOK"
end
else
File.write(".playbook", PLAYBOOK)
# Default to shell environment variable: PLAYBOOK (priority #1)
PLAYBOOK=ENV["PLAYBOOK"]
if !PLAYBOOK || PLAYBOOK.empty?
# PLAYBOOK setting in .vagrant.yml (priority #2)
PLAYBOOK = settings['PLAYBOOK'] || 'default'
end
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.box = VAGRANT_BOX
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder "./scratch", "/vagrant/scratch"
config.ssh.forward_agent = SSH_FORWARD
# Machine Name
config.vm.define :moxie do |moxie| #
end
# Libvrit provider
config.vm.provider :libvirt do |libvirt|
libvirt.cpus = 2
libvirt.memory = 4096
libvirt.default_prefix = ""
libvirt.cpus = VAGRANT_CPUS
libvirt.memory = VAGRANT_MEM
end
config.vm.provider "virtualbox" do |vbox|
vbox.memory = 4096
# Virtualbox provider
config.vm.provider :virtualbox do |vbox|
vbox.cpus = VAGRANT_CPUS
vbox.memory = VAGRANT_MEM
end
# Provision with Ansible
@@ -45,6 +43,6 @@ Vagrant.configure("2") do |config|
ENV['ANSIBLE_ROLES_PATH'] = File.dirname(__FILE__) + "/roles"
ansible.compatibility_mode = "2.0"
ansible.playbook = "dev/" + PLAYBOOK + ".yml"
ansible.raw_arguments = ["--diff"]
end
end
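Note: no sample of the new settings file ships in this change set; a minimal .vagrant.yml that the updated Vagrantfile would read might look like this (the keys are exactly the ones the Ruby above looks up, the values are illustrative defaults):

VAGRANT_BOX: debian/bookworm64
VAGRANT_CPUS: 2
VAGRANT_MEM: 2048
SSH_FORWARD: false
PLAYBOOK: dockerbox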

View File

@@ -1,6 +1,7 @@
[defaults]
inventory = ./environments/development
interpreter_python = /usr/bin/python3
roles_path = ./roles
[connection]
pipelining = true

4
dev/default.yml Normal file
View File

@@ -0,0 +1,4 @@
- name: Install 'default' aka nothing
  hosts: all
  become: true
  tasks: []

8
dev/docker.yml Normal file
View File

@@ -0,0 +1,8 @@
- name: Install Docker Server
  hosts: all
  become: true
  vars_files:
    - host_vars/docker.yml
  roles:
    - base
    - docker

View File

@@ -1,4 +1,4 @@
- name: Install Docker Box Server
- name: Install Dockerbox Server
  hosts: all
  become: true
  vars_files:
@@ -8,7 +8,6 @@
    - docker
    - traefik
    - nextcloud
    - gitea
    - jenkins
    - prometheus
    - nginx

10
dev/gitea.yml Normal file
View File

@@ -0,0 +1,10 @@
- name: Install Gitea Server
  hosts: all
  become: true
  vars_files:
    - host_vars/gitea.yml
  roles:
    - base
    - docker
    - mariadb
    - gitea

View File

@@ -9,14 +9,14 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
# bitwarden
# Get Installation ID & Key at https://bitwarden.com/host/
bitwarden_domain: vault.vm.krislamo.org
bitwarden_domain: vault.local.krislamo.org
bitwarden_dbpass: password
bitwarden_install_id: 4ea840a3-532e-4cb6-a472-abd900728b23
bitwarden_install_key: 1yB3Z2gRI0KnnH90C6p

48
dev/host_vars/docker.yml Normal file
View File

@@ -0,0 +1,48 @@
# base
allow_reboot: false
manage_network: false

# Import my GPG key for git signature verification
root_gpgkeys:
  - name: kris@lamoureux.io
    id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925

# docker
docker_users:
  - vagrant
#docker_login_url: https://myregistry.example.com
#docker_login_user: myuser
#docker_login_pass: YOUR_PASSWD
docker_compose_env_nolog: false # dev only setting
docker_compose_deploy:
  # Traefik
  - name: traefik
    url: https://github.com/krislamo/traefik
    version: 31ee724feebc1d5f91cb17ffd6892c352537f194
    enabled: true
    accept_newhostkey: true # Consider verifying manually instead
    trusted_keys:
      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
    env:
      ENABLE: true
  # Traefik 2 (no other external compose to test currently)
  - name: traefik2
    url: https://github.com/krislamo/traefik
    version: 31ee724feebc1d5f91cb17ffd6892c352537f194
    enabled: true
    accept_newhostkey: true # Consider verifying manually instead
    trusted_keys:
      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
    env:
      ENABLE: true
      VERSION: "2.10"
      DOMAIN: traefik2.local.krislamo.org
      NAME: traefik2
      ROUTER: traefik2
      NETWORK: traefik2
      WEB_PORT: 127.0.0.1:8000:80
      WEBSECURE_PORT: 127.0.0.1:4443:443
      LOCAL_PORT: 127.0.0.1:8444:8443

View File

@@ -9,39 +9,36 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
traefik_web_entry: 0.0.0.0:80
traefik_websecure_entry: 0.0.0.0:443
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
#traefik_http_only: true # if behind reverse-proxy
# nextcloud
nextcloud_version: stable
nextcloud_admin: admin
nextcloud_pass: password
nextcloud_domain: cloud.vm.krislamo.org
nextcloud_domain: cloud.local.krislamo.org
nextcloud_dbversion: latest
nextcloud_dbpass: password
# gitea
gitea_domain: git.vm.krislamo.org
gitea_version: 1
gitea_dbversion: latest
gitea_dbpass: password
# jenkins
jenkins_version: lts
jenkins_domain: jenkins.vm.krislamo.org
jenkins_domain: jenkins.local.krislamo.org
# prometheus (includes grafana)
prom_version: latest
prom_domain: prom.vm.krislamo.org
prom_domain: prom.local.krislamo.org
grafana_version: latest
grafana_domain: grafana.vm.krislamo.org
grafana_domain: grafana.local.krislamo.org
prom_targets: "['10.0.2.15:9100']"
# nginx
nginx_domain: nginx.vm.krislamo.org
nginx_domain: nginx.local.krislamo.org
nginx_name: staticsite
nginx_repo_url: https://git.krislamo.org/kris/example-website/
nginx_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin

45
dev/host_vars/gitea.yml Normal file
View File

@@ -0,0 +1,45 @@
# base
allow_reboot: false
manage_network: false
users:
  git:
    uid: 1001
    gid: 1001
    home: true

# Import my GPG key for git signature verification
root_gpgkeys:
  - name: kris@lamoureux.io
    id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925

# docker
docker_users:
  - vagrant
docker_compose_env_nolog: false # dev only setting
docker_compose_deploy:
  # Traefik
  - name: traefik
    url: https://github.com/krislamo/traefik
    version: 398eb48d311db78b86abf783f903af4a1658d773
    enabled: true
    accept_newhostkey: true
    trusted_keys:
      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
    env:
      ENABLE: true
  # Gitea
  - name: gitea
    url: https://github.com/krislamo/gitea
    version: b0ce66f6a1ab074172eed79eeeb36d7e9011ef8f
    env:
      USER_UID: "{{ users.git.uid }}"
      USER_GID: "{{ users.git.gid }}"
      DB_PASSWD: "{{ gitea.DB_PASSWD }}"

# gitea
gitea:
  DB_NAME: gitea
  DB_USER: gitea
  DB_PASSWD: password

View File

@@ -0,0 +1,56 @@
base_domain: local.krislamo.org

# base
allow_reboot: false
manage_network: false
users:
  - name: jellyfin
samba:
  users:
    - name: jellyfin
      password: jellyfin
  shares:
    - name: jellyfin
      path: /srv/jellyfin
      owner: jellyfin
      group: jellyfin
      valid_users: jellyfin
  firewall:
    - 10.0.0.0/8
    - 172.16.0.0/12
    - 192.168.0.0/16

# proxy
proxy:
  #production: true
  dns_cloudflare:
    opts: --test-cert
    #email: realemail@example.com
    #api_token: CLOUDFLARE_DNS01_API_TOKEN
  wildcard_domains:
    - "{{ base_domain }}"
  servers:
    - domain: "{{ traefik_domain }}"
      proxy_pass: "http://127.0.0.1:8000"
    - domain: "{{ jellyfin_domain }}"
      proxy_pass: "http://127.0.0.1:8000"

# docker
docker_users:
  - vagrant

# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: "traefik.{{ base_domain }}"
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
traefik_http_only: true # if behind reverse-proxy

# jellyfin
jellyfin_domain: "jellyfin.{{ base_domain }}"
jellyfin_version: latest
jellyfin_media: /srv/jellyfin

View File

@@ -5,14 +5,14 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
# container settings
nextcloud_version: stable
nextcloud_admin: admin
nextcloud_pass: password
nextcloud_domain: cloud.vm.krislamo.org
nextcloud_domain: cloud.local.krislamo.org
# database settings
nextcloud_dbversion: latest

View File

@@ -9,13 +9,13 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
# nginx
nginx_domain: nginx.vm.krislamo.org
nginx_domain: nginx.local.krislamo.org
nginx_name: staticsite
nginx_repo_url: https://git.krislamo.org/kris/example-website/
nginx_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin

View File

@@ -1,4 +1,4 @@
base_domain: vm.krislamo.org
base_domain: local.krislamo.org
# base
allow_reboot: false
@@ -18,8 +18,6 @@ proxy:
proxy_pass: "http://127.0.0.1:8080"
- domain: "{{ gitea_domain }}"
proxy_pass: "http://127.0.0.1:3000"
- domain: "{{ kutt_domain }}"
proxy_pass: "http://127.0.0.1:3030"
# docker
docker_users:
@@ -37,16 +35,3 @@ bitwarden_install_key: 1yB3Z2gRI0KnnH90C6p
gitea_domain: "git.{{ base_domain }}"
gitea_version: 1
gitea_dbpass: password
# kutt
kutt_version: latest
kutt_redis_version: 6
kutt_postgres_version: 12
kutt_domain: "kutt.{{ base_domain }}"
kutt_dbpass: password
kutt_jwt_secret: long&random
kutt_mail_user: kutt-noreply@example.com
kutt_mail_host: smtp.example.com
kutt_mail_password: realpassword
kutt_report_email: realemail@example.com
kutt_admin_emails: realemail@example.com

View File

@@ -9,14 +9,14 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
# container settings
wordpress_version: latest
wordpress_domain: wordpress.vm.krislamo.org
wordpress_domain: wordpress.local.krislamo.org
wordpress_multisite: true
# database settings

11
dev/mediaserver.yml Normal file
View File

@@ -0,0 +1,11 @@
- name: Install Media Server
  hosts: all
  become: true
  vars_files:
    - host_vars/mediaserver.yml
  roles:
    - base
    - proxy
    - docker
    - traefik
    - jellyfin

View File

@@ -10,4 +10,3 @@
- docker
- gitea
- bitwarden
- kutt

View File

@@ -1,21 +0,0 @@
# Copyright (C) 2020 Kris Lamoureux
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
- name: Install Docker Server
  hosts: dockerhosts
  become: true
  roles:
    - base
    - docker
    - jenkins

View File

@@ -1,25 +0,0 @@
# Copyright (C) 2020 Kris Lamoureux
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
- name: Install Docker Box Server
  hosts: dockerhosts
  become: true
  roles:
    - base
    - docker
    - traefik
    - nextcloud
    - jenkins
    - prometheus
    - nginx

91
forward-ssh.sh Executable file
View File

@@ -0,0 +1,91 @@
#!/bin/bash
# Finds the SSH private key under ./.vagrant and connects to
# the Vagrant box, port forwarding localhost ports: 8443, 80, 443

# Root check
if [ "$EUID" -ne 0 ]; then
  echo "[ERROR]: Please run script as root"
  exit 1
fi

# Clean environment
unset PRIVATE_KEY
unset HOST_IP
unset MATCH_PATTERN
unset PKILL_ANSWER

# Function to create the SSH tunnel
function ssh_connect {
  read -rp "Start a new vagrant SSH tunnel? [y/N] " PSTART_ANSWER
  echo
  case "$PSTART_ANSWER" in
    [yY])
      printf "[INFO]: Starting new vagrant SSH tunnel on PID "
      sudo -u "$USER" ssh -fNT -i "$PRIVATE_KEY" \
        -L 22:localhost:22 \
        -L 80:localhost:80 \
        -L 443:localhost:443 \
        -L 8443:localhost:8443 \
        -o UserKnownHostsFile=/dev/null \
        -o StrictHostKeyChecking=no \
        vagrant@"$HOST_IP" 2>/dev/null
      sleep 2
      pgrep -f "$MATCH_PATTERN"
      ;;
    *)
      echo "[INFO]: Declined to start a new vagrant SSH tunnel"
      exit 0
      ;;
  esac
}

# Check for valid PRIVATE_KEY location
PRIVATE_KEY="$(find .vagrant -name "private_key" 2>/dev/null)"
if ! ssh-keygen -l -f "$PRIVATE_KEY" &>/dev/null; then
  echo "[ERROR]: The SSH key '$PRIVATE_KEY' is not valid. Is your virtual machine running?"
  exit 1
fi
echo "[CHECK]: Valid key at $PRIVATE_KEY"

# Grab first IP or use whatever HOST_IP_FIELD is set to and check that the guest is up
HOST_IP="$(vagrant ssh -c "hostname -I | cut -d' ' -f${HOST_IP_FIELD:-1}" 2>/dev/null)"
HOST_IP="${HOST_IP::-1}" # trim
if ! ping -c 1 "$HOST_IP" &>/dev/null; then
  echo "[ERROR]: Cannot ping the host IP '$HOST_IP'"
  exit 1
fi
echo "[CHECK]: Host at $HOST_IP is up"

# Pattern for matching processes running
MATCH_PATTERN="ssh -fNT -i ${PRIVATE_KEY}.*vagrant@"

# Check amount of processes that match the pattern
if [ "$(pgrep -afc "$MATCH_PATTERN")" -eq 0 ]; then
  ssh_connect
else
  # Processes found, so prompt to kill remaining ones then start tunnel
  printf "\n[WARNING]: Found processes running:\n"
  pgrep -fa "$MATCH_PATTERN"
  printf '\n'
  read -rp "Would you like to kill these processes? [y/N] " PKILL_ANSWER
  echo
  case "$PKILL_ANSWER" in
    [yY])
      echo "[WARNING]: Killing old vagrant SSH tunnel(s): "
      pgrep -f "$MATCH_PATTERN" | tee >(xargs kill -15)
      echo
      if [ "$(pgrep -afc "$MATCH_PATTERN")" -eq 0 ]; then
        ssh_connect
      else
        echo "[ERROR]: Unable to kill processes:"
        pgrep -f "$MATCH_PATTERN"
        exit 1
      fi
      ;;
    *)
      echo "[INFO]: Declined to kill existing processes"
      exit 0
      ;;
  esac
fi

7
playbooks/docker.yml Normal file
View File

@@ -0,0 +1,7 @@
- name: Install Docker Server
  hosts: "{{ PLAYBOOK_HOST | default('none') }}"
  become: true
  roles:
    - base
    - jenkins
    - docker

11
playbooks/dockerbox.yml Normal file
View File

@@ -0,0 +1,11 @@
- name: Install Dockerbox Server
  hosts: "{{ PLAYBOOK_HOST | default('none') }}"
  become: true
  roles:
    - base
    - docker
    - traefik
    - nextcloud
    - jenkins
    - prometheus
    - nginx

10
playbooks/mediaserver.yml Normal file
View File

@@ -0,0 +1,10 @@
- name: Install Media Server
  hosts: "{{ PLAYBOOK_HOST | default('none') }}"
  become: true
  roles:
    - base
    - jenkins
    - proxy
    - docker
    - traefik
    - jellyfin

23
roles/.gitignore vendored
View File

@@ -1,23 +0,0 @@
# Sort roles: tail -n +6 roles/.gitignore | sort
/*
!.gitignore
!requirements.yml
# roles
!base*/
!bitwarden*/
!docker*/
!gitea*/
!jenkins*/
!kutt*/
!libvirt*/
!mariadb*/
!minecraft*/
!nextcloud*/
!nginx*/
!postgresql*/
!prometheus*/
!proxy*/
!rsnapshot*/
!traefik*/
!unifi*/
!wordpress*/

View File

@@ -1,6 +1,8 @@
allow_reboot: true
manage_firewall: true
manage_network: false
network_type: static
allow_reboot: true
locale_default: en_US.UTF-8
packages:
- apache2-utils

View File

@@ -1,18 +1,30 @@
- name: Reboot host
reboot:
ansible.builtin.reboot:
msg: "Reboot initiated by Ansible"
connect_timeout: 5
listen: reboot_host
when: allow_reboot
- name: Restart WireGuard
service:
ansible.builtin.service:
name: wg-quick@wg0
state: restarted
listen: restart_wireguard
- name: Restart Fail2ban
service:
ansible.builtin.service:
name: fail2ban
state: restarted
listen: restart_fail2ban
- name: Restart ddclient
ansible.builtin.service:
name: ddclient
state: restarted
listen: restart_ddclient
- name: Restart Samba
ansible.builtin.service:
name: smbd
state: restarted
listen: restart_samba

View File

@@ -1,23 +1,5 @@
- name: 'Install Ansible dependency: python3-apt'
shell: 'apt-get update && apt-get install python3-apt -y'
args:
creates: /usr/lib/python3/dist-packages/apt
warn: false
- name: Install additional Ansible dependencies
apt:
name: "{{ item }}"
state: present
force_apt_get: true
update_cache: true
loop:
- aptitude
- python3-docker
- python3-pymysql
- python3-psycopg2
- name: Create Ansible's temporary remote directory
file:
ansible.builtin.file:
path: "~/.ansible/tmp"
state: directory
mode: 0700

View File

@@ -1,22 +1,17 @@
- name: Install ddclient
apt:
ansible.builtin.apt:
name: ddclient
state: present
- name: Install ddclient settings
template:
ansible.builtin.template:
src: ddclient.conf.j2
dest: /etc/ddclient.conf
mode: 0600
register: ddclient_settings
- name: Start ddclient and enable on boot
service:
ansible.builtin.service:
name: ddclient
state: started
enabled: true
- name: Restart ddclient
service:
name: ddclient
state: restarted
when: ddclient_settings.changed

View File

@@ -1,46 +1,48 @@
- name: Install the Uncomplicated Firewall
apt:
ansible.builtin.apt:
name: ufw
state: present
- name: Install Fail2ban
apt:
ansible.builtin.apt:
name: fail2ban
state: present
- name: Deny incoming traffic by default
ufw:
community.general.ufw:
default: deny
direction: incoming
- name: Allow outgoing traffic by default
ufw:
community.general.ufw:
default: allow
direction: outgoing
- name: Allow OpenSSH with rate limiting
ufw:
community.general.ufw:
name: ssh
rule: limit
- name: Remove Fail2ban defaults-debian.conf
file:
ansible.builtin.file:
path: /etc/fail2ban/jail.d/defaults-debian.conf
state: absent
- name: Install OpenSSH's Fail2ban jail
template:
ansible.builtin.template:
src: fail2ban-ssh.conf.j2
dest: /etc/fail2ban/jail.d/sshd.conf
mode: 0640
notify: restart_fail2ban
- name: Install Fail2ban IP allow list
template:
ansible.builtin.template:
src: fail2ban-allowlist.conf.j2
dest: /etc/fail2ban/jail.d/allowlist.conf
mode: 0640
when: fail2ban_ignoreip is defined
notify: restart_fail2ban
- name: Enable firewall
ufw:
community.general.ufw:
state: enabled

View File

@@ -1,5 +1,5 @@
- name: Install msmtp
apt:
ansible.builtin.apt:
name: "{{ item }}"
state: present
loop:
@@ -8,12 +8,13 @@
- mailutils
- name: Install msmtp configuration
template:
ansible.builtin.template:
src: msmtprc.j2
dest: /root/.msmtprc
mode: 0700
mode: 0600
- name: Install /etc/aliases
copy:
ansible.builtin.copy:
dest: /etc/aliases
content: "root: {{ mail.rootalias }}"
mode: 0644

View File

@@ -1,24 +1,37 @@
- import_tasks: ansible.yml
- name: Import Ansible tasks
ansible.builtin.import_tasks: ansible.yml
tags: ansible
- import_tasks: system.yml
- name: Import System tasks
ansible.builtin.import_tasks: system.yml
tags: system
- import_tasks: firewall.yml
- name: Import Firewall tasks
ansible.builtin.import_tasks: firewall.yml
tags: firewall
when: manage_firewall
- import_tasks: network.yml
- name: Import Network tasks
ansible.builtin.import_tasks: network.yml
tags: network
when: manage_network
- import_tasks: mail.yml
- name: Import Mail tasks
ansible.builtin.import_tasks: mail.yml
tags: mail
when: mail is defined
- import_tasks: ddclient.yml
- name: Import ddclient tasks
ansible.builtin.import_tasks: ddclient.yml
tags: ddclient
when: ddclient is defined
- import_tasks: wireguard.yml
- name: Import WireGuard tasks
ansible.builtin.import_tasks: wireguard.yml
tags: wireguard
when: wireguard is defined
- name: Import Samba tasks
ansible.builtin.import_tasks: samba.yml
tags: samba
when: samba is defined

View File

@@ -1,5 +1,5 @@
- name: Install network interfaces file
copy:
ansible.builtin.copy:
src: network-interfaces.cfg
dest: /etc/network/interfaces
owner: root
@@ -7,8 +7,9 @@
mode: '0644'
- name: Install network interfaces
template:
ansible.builtin.template:
src: "interface.j2"
dest: "/etc/network/interfaces.d/{{ item.name }}"
mode: 0400
loop: "{{ interfaces }}"
notify: reboot_host

View File

@@ -0,0 +1,53 @@
- name: Install Samba
  ansible.builtin.apt:
    name: samba
    state: present

- name: Create nologin shell accounts for Samba
  ansible.builtin.user:
    name: "{{ item.name }}"
    state: present
    shell: /usr/sbin/nologin
    createhome: false
    system: yes
  loop: "{{ samba.users }}"
  when: item.manage_user is defined and item.manage_user is true

- name: Create Samba users
  ansible.builtin.shell: "smbpasswd -a {{ item.name }}"
  args:
    stdin: "{{ item.password }}\n{{ item.password }}"
  loop: "{{ samba.users }}"
  register: samba_users
  changed_when: "'User added' in samba_users.stdout"

- name: Ensure share directories exist
  ansible.builtin.file:
    path: "{{ item.path }}"
    owner: "{{ item.owner }}"
    group: "{{ item.group }}"
    state: directory
    mode: 0755
  loop: "{{ samba.shares }}"

- name: Configure Samba shares
  ansible.builtin.template:
    src: smb.conf.j2
    dest: /etc/samba/smb.conf
  notify: restart_samba

- name: Start smbd and enable on boot
  ansible.builtin.service:
    name: smbd
    state: started
    enabled: true

- name: Allow SMB connections
  community.general.ufw:
    rule: allow
    port: 445
    proto: tcp
    from: "{{ item }}"
    state: enabled
  loop: "{{ samba.firewall }}"
  when: manage_firewall

View File

@@ -1,17 +1,106 @@
- name: Install useful software
apt:
ansible.builtin.apt:
name: "{{ packages }}"
state: present
update_cache: true
- name: Install GPG
ansible.builtin.apt:
name: gpg
state: present
- name: Check for existing GPG keys
command: "gpg --list-keys {{ item.id }} 2>/dev/null"
register: gpg_check
loop: "{{ root_gpgkeys }}"
failed_when: false
changed_when: false
when: root_gpgkeys is defined
- name: Import GPG keys
command: "gpg --keyserver {{ item.item.server | default('keys.openpgp.org') }} --recv-key {{ item.item.id }}"
register: gpg_check_import
loop: "{{ gpg_check.results }}"
loop_control:
label: "{{ item.item }}"
when: root_gpgkeys is defined and item.rc != 0
- name: Check GPG key imports
fail:
msg: "{{ item.stderr }}"
loop: "{{ gpg_check_import.results }}"
loop_control:
label: "{{ item.item.item }}"
when: (item.skipped | default(false) == false) and ('imported' not in item.stderr)
- name: Install NTPsec
ansible.builtin.apt:
name: ntpsec
state: present
- name: Install locales
ansible.builtin.apt:
name: locales
state: present
- name: Generate locale
community.general.locale_gen:
name: "{{ locale_default }}"
state: present
register: locale_gen_output
- name: Set the default locale
ansible.builtin.lineinfile:
path: /etc/default/locale
regexp: "^LANG="
line: "LANG={{ locale_default }}"
- name: Reconfigure locales
ansible.builtin.command: dpkg-reconfigure -f noninteractive locales
when: locale_gen_output.changed
- name: Manage root authorized_keys
template:
ansible.builtin.template:
src: authorized_keys.j2
dest: /root/.ssh/authorized_keys
mode: 0400
when: authorized_keys is defined
- name: Create system user groups
ansible.builtin.group:
name: "{{ item.key }}"
gid: "{{ item.value.gid }}"
state: present
loop: "{{ users | dict2items }}"
loop_control:
label: "{{ item.key }}"
when: users is defined
- name: Create system users
ansible.builtin.user:
name: "{{ item.key }}"
state: present
uid: "{{ item.value.uid }}"
group: "{{ item.value.gid }}"
shell: "{{ item.value.shell | default('/bin/bash') }}"
create_home: "{{ item.value.home | default(false) }}"
loop: "{{ users | dict2items }}"
loop_control:
label: "{{ item.key }}"
when: users is defined
- name: Set authorized_keys for system users
ansible.posix.authorized_key:
user: "{{ item.key }}"
key: "{{ item.value.key }}"
state: present
loop: "{{ users | dict2items }}"
loop_control:
label: "{{ item.key }}"
when: users is defined and item.value.key is defined
- name: Manage filesystem mounts
mount:
ansible.posix.mount:
path: "{{ item.path }}"
src: "UUID={{ item.uuid }}"
fstype: "{{ item.fstype }}"

View File

@@ -1,36 +1,39 @@
- name: Install WireGuard
apt:
ansible.builtin.apt:
name: wireguard
state: present
update_cache: true
- name: Generate WireGuard keys
shell: wg genkey | tee privatekey | wg pubkey > publickey
ansible.builtin.shell: |
set -o pipefail
wg genkey | tee privatekey | wg pubkey > publickey
args:
chdir: /etc/wireguard/
creates: /etc/wireguard/privatekey
executable: /usr/bin/bash
- name: Grab WireGuard private key for configuration
slurp:
ansible.builtin.slurp:
src: /etc/wireguard/privatekey
register: wgkey
- name: Install WireGuard configuration
template:
ansible.builtin.template:
src: wireguard.j2
dest: /etc/wireguard/wg0.conf
notify:
- restart_wireguard
mode: 0400
notify: restart_wireguard
- name: Start WireGuard interface
service:
ansible.builtin.service:
name: wg-quick@wg0
state: started
enabled: true
- name: Add WireGuard firewall rule
ufw:
community.general.ufw:
rule: allow
port: "{{ wireguard.listenport }}"
proto: tcp
proto: udp
when: wireguard.listenport is defined

View File

@@ -0,0 +1,28 @@
[global]
workgroup = WORKGROUP
server string = Samba Server %v
netbios name = {{ ansible_hostname }}
security = user
map to guest = bad user
dns proxy = no
{% for user in samba.users %}
smb encrypt = {{ 'mandatory' if user.encrypt | default(false) else 'disabled' }}
{% endfor %}
{% for share in samba.shares %}
[{{ share.name }}]
path = {{ share.path }}
browsable = yes
{% if share.guest_allow is defined and share.guest_allow %}
guest ok = yes
{% else %}
guest ok = no
{% endif %}
read only = {{ 'yes' if share.read_only | default(false) else 'no' }}
{% if share.valid_users is defined %}
valid users = {{ share.valid_users }}
{% endif %}
{% if share.force_user is defined %}
force user = {{ share.force_user }}
{% endif %}
{% endfor %}

View File

@@ -1,15 +1,15 @@
- name: Stop Bitwarden for rebuild
service:
ansible.builtin.service:
name: "{{ bitwarden_name }}"
state: stopped
listen: rebuild_bitwarden
- name: Rebuild Bitwarden
shell: "{{ bitwarden_root }}/bitwarden.sh rebuild"
ansible.builtin.shell: "{{ bitwarden_root }}/bitwarden.sh rebuild"
listen: rebuild_bitwarden
- name: Start Bitwarden after rebuild
service:
ansible.builtin.service:
name: "{{ bitwarden_name }}"
state: started
enabled: true

View File

@@ -1,40 +1,40 @@
- name: Install expect
apt:
ansible.builtin.apt:
name: expect
state: present
- name: Create Bitwarden directory
file:
ansible.builtin.file:
path: "{{ bitwarden_root }}"
state: directory
- name: Download Bitwarden script
get_url:
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/\
bitwarden/self-host/master/bitwarden.sh"
dest: "{{ bitwarden_root }}"
mode: u+x
- name: Install Bitwarden script wrapper
template:
ansible.builtin.template:
src: bw_wrapper.j2
dest: "{{ bitwarden_root }}/bw_wrapper"
mode: u+x
- name: Run Bitwarden installation script
shell: "{{ bitwarden_root }}/bw_wrapper"
ansible.builtin.shell: "{{ bitwarden_root }}/bw_wrapper"
args:
creates: "{{ bitwarden_root }}/bwdata/config.yml"
- name: Install docker-compose override
template:
ansible.builtin.template:
src: compose.override.yml.j2
dest: "{{ bitwarden_root }}/bwdata/docker/docker-compose.override.yml"
when: traefik_version is defined
notify: rebuild_bitwarden
- name: Disable bitwarden-nginx HTTP on 80
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^http_port: 80$"
replace: "http_port: 127.0.0.1:8080"
@@ -42,7 +42,7 @@
notify: rebuild_bitwarden
- name: Disable bitwarden-nginx HTTPS on 443
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^https_port: 443$"
replace: "https_port: 127.0.0.1:8443"
@@ -50,7 +50,7 @@
notify: rebuild_bitwarden
- name: Disable Bitwarden managed Lets Encrypt
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^ssl_managed_lets_encrypt: true$"
replace: "ssl_managed_lets_encrypt: false"
@@ -58,7 +58,7 @@
notify: rebuild_bitwarden
- name: Disable Bitwarden managed SSL
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^ssl: true$"
replace: "ssl: false"
@@ -66,39 +66,39 @@
notify: rebuild_bitwarden
- name: Define reverse proxy servers
lineinfile:
ansible.builtin.lineinfile:
path: "{{ bitwarden_root }}/bwdata/config.yml"
line: "- {{ bitwarden_realips }}"
insertafter: "^real_ips"
notify: rebuild_bitwarden
- name: Install Bitwarden systemd service
template:
ansible.builtin.template:
src: bitwarden.service.j2
dest: "/etc/systemd/system/{{ bitwarden_name }}.service"
register: bitwarden_systemd
notify: rebuild_bitwarden
- name: Create Bitwarden's initial logging directory
file:
ansible.builtin.file:
path: "{{ bitwarden_logs_identity }}"
state: directory
register: bitwarden_logs
- name: Create Bitwarden's initial log file
file:
ansible.builtin.file:
path: "{{ bitwarden_logs_identity }}/{{ bitwarden_logs_identity_date }}.txt"
state: touch
when: bitwarden_logs.changed
- name: Install Bitwarden's Fail2ban jail
template:
ansible.builtin.template:
src: fail2ban-jail.conf.j2
dest: /etc/fail2ban/jail.d/bitwarden.conf
notify: restart_fail2ban
- name: Reload systemd manager configuration
systemd:
ansible.builtin.systemd:
daemon_reload: true
when: bitwarden_systemd.changed
notify: rebuild_bitwarden

View File

@@ -1,3 +1,6 @@
docker_compose_root: /var/lib/compose
docker_compose: /usr/bin/docker-compose
docker_compose_service: compose
docker_compose: /usr/bin/docker-compose
docker_repos_keys: "{{ docker_repos_path }}/.keys"
docker_repos_keytype: rsa
docker_repos_path: /srv/.compose_repos

View File

@@ -0,0 +1,30 @@
- name: Reload systemd manager configuration
  ansible.builtin.systemd:
    daemon_reload: true
  listen: compose_systemd

- name: Find which services had a docker-compose.yml updated
  set_fact:
    compose_restart_list: "{{ (compose_restart_list | default([])) + [item.item.name] }}"
  loop: "{{ compose_update.results }}"
  loop_control:
    label: "{{ item.item.name }}"
  when: item.changed
  listen: compose_restart

- name: Find which services had their .env updated
  set_fact:
    compose_restart_list: "{{ (compose_restart_list | default([])) + [item.item.name] }}"
  loop: "{{ compose_env_update.results }}"
  loop_control:
    label: "{{ item.item.name }}"
  when: item.changed
  listen: compose_restart

- name: Restart {{ docker_compose_service }} services
  ansible.builtin.systemd:
    state: restarted
    name: "{{ docker_compose_service }}@{{ item }}"
  loop: "{{ compose_restart_list | unique }}"
  when: compose_restart_list is defined
  listen: compose_restart

View File

@@ -1,27 +1,99 @@
- name: Install Docker
apt:
ansible.builtin.apt:
name: ['docker.io', 'docker-compose']
state: present
update_cache: true
- name: Login to private registry
community.docker.docker_login:
registry_url: "{{ docker_login_url | default('') }}"
username: "{{ docker_login_user }}"
password: "{{ docker_login_pass }}"
when: docker_login_user is defined and docker_login_pass is defined
- name: Create docker-compose root
file:
ansible.builtin.file:
path: "{{ docker_compose_root }}"
state: directory
mode: 0500
- name: Install docker-compose systemd service
template:
ansible.builtin.template:
src: docker-compose.service.j2
dest: "/etc/systemd/system/{{ docker_compose_service }}@.service"
register: compose_systemd
mode: 0400
notify: compose_systemd
- name: Reload systemd manager configuration
systemd:
daemon_reload: true
when: compose_systemd.changed
- name: Create directories to clone docker-compose repositories
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: 0400
loop:
- "{{ docker_repos_path }}"
- "{{ docker_repos_keys }}"
when: docker_compose_deploy is defined
- name: Generate OpenSSH deploy keys for docker-compose clones
community.crypto.openssh_keypair:
path: "{{ docker_repos_keys }}/id_{{ docker_repos_keytype }}"
type: "{{ docker_repos_keytype }}"
comment: "{{ ansible_hostname }}-deploy-key"
mode: 0400
state: present
when: docker_compose_deploy is defined
- name: Clone external docker-compose projects
ansible.builtin.git:
repo: "{{ item.url }}"
dest: "{{ docker_repos_path }}/{{ item.name }}"
version: "{{ item.version }}"
accept_newhostkey: "{{ item.accept_newhostkey | default('false') }}"
gpg_whitelist: "{{ item.trusted_keys | default([]) }}"
verify_commit: "{{ true if (item.trusted_keys is defined and item.trusted_keys) else false }}"
key_file: "{{ docker_repos_keys }}/id_{{ docker_repos_keytype }}"
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ item.url }}"
when: docker_compose_deploy is defined
- name: Create directories for docker-compose projects using the systemd service
ansible.builtin.file:
path: "{{ docker_compose_root }}/{{ item.name }}"
state: directory
mode: 0400
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ item.name }}"
when: docker_compose_deploy is defined
- name: Synchronize docker-compose.yml
ansible.posix.synchronize:
src: "{{ docker_repos_path }}/{{ item.name }}/{{ item.path | default('docker-compose.yml') }}"
dest: "{{ docker_compose_root }}/{{ item.name }}/docker-compose.yml"
delegate_to: "{{ inventory_hostname }}"
register: compose_update
notify: compose_restart
loop: "{{ docker_compose_deploy | default([]) }}"
loop_control:
label: "{{ item.name }}"
when: docker_compose_deploy is defined and docker_compose_deploy | length > 0
- name: Set environment variables for docker-compose projects
ansible.builtin.template:
src: docker-compose-env.j2
dest: "{{ docker_compose_root }}/{{ item.name }}/.env"
mode: 0400
register: compose_env_update
notify: compose_restart
no_log: "{{ docker_compose_env_nolog | default('true') }}"
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ item.name }}"
when: docker_compose_deploy is defined and item.env is defined
- name: Add users to docker group
user:
ansible.builtin.user:
name: "{{ item }}"
groups: docker
append: true
@@ -29,7 +101,17 @@
when: docker_users is defined
- name: Start Docker and enable on boot
service:
ansible.builtin.service:
name: docker
state: started
enabled: true
- name: Start docker-compose services and enable on boot
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ item.name }}"
state: started
enabled: true
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ docker_compose_service }}@{{ item.name }}"
when: item.enabled is defined and item.enabled is true

View File

@@ -0,0 +1,10 @@
# {{ ansible_managed }}
{% if item.env is defined %}
{% for key, value in item.env.items() %}
{% if value is boolean %}
{{ key }}={{ value | lower }}
{% else %}
{{ key }}={{ value }}
{% endif %}
{% endfor %}
{% endif %}

View File

@@ -1,5 +1,5 @@
- name: Restart Gitea
service:
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ gitea_name }}"
state: restarted
listen: restart_gitea

View File

@@ -1,111 +1,79 @@
- name: Create Gitea directory
file:
path: "{{ gitea_root }}"
state: directory
- name: Install MySQL module for Ansible
ansible.builtin.apt:
name: python3-pymysql
state: present
- name: Create Gitea database
mysql_db:
name: "{{ gitea_dbname }}"
community.mysql.mysql_db:
name: "{{ gitea.DB_NAME }}"
state: present
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Create Gitea database user
mysql_user:
name: "{{ gitea_dbuser }}"
password: "{{ gitea_dbpass }}"
community.mysql.mysql_user:
name: "{{ gitea.DB_USER }}"
password: "{{ gitea.DB_PASSWD }}"
host: '%'
state: present
priv: "{{ gitea_dbname }}.*:ALL"
priv: "{{ gitea.DB_NAME }}.*:ALL"
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Create git user
user:
name: git
state: present
- name: Git user uid
getent:
database: passwd
key: git
- name: Git user gid
getent:
database: group
key: git
- name: Create git's .ssh directory
file:
ansible.builtin.file:
path: /home/git/.ssh
state: directory
- name: Generate git's SSH keys
openssh_keypair:
community.crypto.openssh_keypair:
path: /home/git/.ssh/id_rsa
- name: Find git's public SSH key
slurp:
ansible.builtin.slurp:
src: /home/git/.ssh/id_rsa.pub
register: git_rsapub
- name: Get stats on git's authorized_keys file
stat:
ansible.builtin.stat:
path: /home/git/.ssh/authorized_keys
register: git_authkeys
- name: Create git's authorized_keys file
file:
ansible.builtin.file:
path: /home/git/.ssh/authorized_keys
state: touch
when: not git_authkeys.stat.exists
- name: Add git's public SSH key to authorized_keys
lineinfile:
ansible.builtin.lineinfile:
path: /home/git/.ssh/authorized_keys
regex: "^ssh-rsa"
line: "{{ git_rsapub['content'] | b64decode }}"
- name: Create Gitea host script for SSH
template:
ansible.builtin.template:
src: gitea.sh.j2
dest: /usr/local/bin/gitea
mode: 0755
- name: Install Gitea's docker-compose file
template:
src: docker-compose.yml.j2
dest: "{{ gitea_root }}/docker-compose.yml"
notify: restart_gitea
- name: Install Gitea's docker-compose variables
template:
src: compose-env.j2
dest: "{{ gitea_root }}/.env"
notify: restart_gitea
- name: Create Gitea's logging directory
file:
ansible.builtin.file:
name: /var/log/gitea
state: directory
- name: Create Gitea's initial log file
file:
name: /var/log/gitea/gitea.log
state: touch
- name: Install Gitea's Fail2ban filter
template:
ansible.builtin.template:
src: fail2ban-filter.conf.j2
dest: /etc/fail2ban/filter.d/gitea.conf
notify: restart_fail2ban
- name: Install Gitea's Fail2ban jail
template:
ansible.builtin.template:
src: fail2ban-jail.conf.j2
dest: /etc/fail2ban/jail.d/gitea.conf
notify: restart_fail2ban
- name: Start and enable Gitea service
service:
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ gitea_name }}"
state: started
enabled: true

View File

@@ -0,0 +1,4 @@
jellyfin_name: jellyfin
jellyfin_router: "{{ jellyfin_name }}"
jellyfin_rooturl: "https://{{ jellyfin_domain }}"
jellyfin_root: "{{ docker_compose_root }}/{{ jellyfin_name }}"

View File

@@ -0,0 +1,5 @@
- name: Restart Jellyfin
  ansible.builtin.service:
    name: "{{ docker_compose_service }}@{{ jellyfin_name }}"
    state: restarted
  listen: restart_jellyfin

View File

@@ -0,0 +1,35 @@
- name: Create Jellyfin directory
  ansible.builtin.file:
    path: "{{ jellyfin_root }}"
    state: directory
    mode: 0500

- name: Get user jellyfin uid
  ansible.builtin.getent:
    database: passwd
    key: jellyfin

- name: Get user jellyfin gid
  ansible.builtin.getent:
    database: group
    key: jellyfin

- name: Install Jellyfin's docker-compose file
  ansible.builtin.template:
    src: docker-compose.yml.j2
    dest: "{{ jellyfin_root }}/docker-compose.yml"
    mode: 0400
  notify: restart_jellyfin

- name: Install Jellyfin's docker-compose variables
  ansible.builtin.template:
    src: compose-env.j2
    dest: "{{ jellyfin_root }}/.env"
    mode: 0400
  notify: restart_jellyfin

- name: Start and enable Jellyfin service
  ansible.builtin.service:
    name: "{{ docker_compose_service }}@{{ jellyfin_name }}"
    state: started
    enabled: true

View File

@@ -0,0 +1,5 @@
# {{ ansible_managed }}
jellyfin_version={{ jellyfin_version }}
jellyfin_name={{ jellyfin_name }}
jellyfin_domain={{ jellyfin_domain }}
jellyfin_rooturl={{ jellyfin_rooturl }}

View File

@@ -0,0 +1,30 @@
version: '3.7'

volumes:
  config:
  cache:

networks:
  traefik:
    external: true

services:
  jellyfin:
    image: "jellyfin/jellyfin:${jellyfin_version}"
    container_name: "${jellyfin_name}"
    networks:
      - traefik
    labels:
      - "traefik.http.routers.{{ jellyfin_router }}.rule=Host(`{{ jellyfin_domain }}`)"
{% if traefik_http_only %}
      - "traefik.http.routers.{{ jellyfin_router }}.entrypoints=web"
{% else %}
      - "traefik.http.routers.{{ jellyfin_router }}.entrypoints=websecure"
{% endif %}
      - "traefik.http.services.{{ jellyfin_router }}.loadbalancer.server.port=8096"
      - "traefik.docker.network=traefik"
      - "traefik.enable=true"
    volumes:
      - config:/config
      - cache:/cache
      - {{ jellyfin_media }}:/media

View File

@@ -1,5 +1,5 @@
- name: Create Jenkins user
user:
ansible.builtin.user:
name: "{{ jenkins_user }}"
state: present
shell: /bin/bash
@@ -7,25 +7,25 @@
generate_ssh_key: true
- name: Set Jenkins authorized key
authorized_key:
ansible.posix.authorized_key:
user: jenkins
state: present
exclusive: true
key: "{{ jenkins_sshkey }}"
- name: Give Jenkins user passwordless sudo
template:
ansible.builtin.template:
src: jenkins_sudoers.j2
dest: /etc/sudoers.d/{{ jenkins_user }}
validate: "visudo -cf %s"
mode: 0440
- name: Install Ansible
apt:
ansible.builtin.apt:
name: ansible
state: present
- name: Install Java
apt:
ansible.builtin.apt:
name: default-jre
state: present

View File

@@ -1,5 +1,5 @@
- import_tasks: agent.yml
- ansible.builtin.import_tasks: agent.yml
when: jenkins_sshkey is defined
- import_tasks: server.yml
- ansible.builtin.import_tasks: server.yml
when: jenkins_domain is defined

View File

@@ -1,12 +1,12 @@
- name: Create Jenkin's directory
file:
ansible.builtin.file:
path: "{{ jenkins_root }}"
state: directory
owner: "1000"
group: "1000"
- name: Start Jenkins Container
docker_container:
community.general.docker_container:
name: "{{ jenkins_name }}"
image: jenkins/jenkins:{{ jenkins_version }}
state: started

View File

@@ -1,16 +0,0 @@
# container settings
kutt_name: kutt
kutt_default_domain: "{{ kutt_domain }}"
kutt_webport: 3030
kutt_web: "127.0.0.1:{{ kutt_webport }}"
# database settings
kutt_dbname: "{{ kutt_name }}"
kutt_dbuser: "{{ kutt_name }}"
kutt_postgres_volume: postgres_data
# redis
kutt_redis_volume: redis_data
# host
kutt_root: "{{ docker_compose_root }}/{{ kutt_name }}"

View File

@@ -1,5 +0,0 @@
- name: Restart Kutt
  service:
    name: "{{ docker_compose_service }}@{{ kutt_name }}"
    state: restarted
  listen: restart_kutt

View File

@@ -1,22 +0,0 @@
- name: Create Kutt directory
  file:
    path: "{{ kutt_root }}"
    state: directory

- name: Install Kutt's docker-compose file
  template:
    src: docker-compose.yml.j2
    dest: "{{ kutt_root }}/docker-compose.yml"
  notify: restart_kutt

- name: Install Kutt's docker-compose variables
  template:
    src: compose-env.j2
    dest: "{{ kutt_root }}/.env"
  notify: restart_kutt

- name: Start and enable Gitea service
  service:
    name: "{{ docker_compose_service }}@{{ kutt_name }}"
    state: started
    enabled: true

View File

@@ -1,17 +0,0 @@
# {{ ansible_managed }}
kutt_version={{ kutt_version }}
kutt_web={{ kutt_web }}
kutt_domain={{ kutt_domain }}
kutt_default_domain={{ kutt_default_domain }}
kutt_jwt_secret={{ kutt_jwt_secret }}
kutt_dbname={{ kutt_dbname }}
kutt_dbuser={{ kutt_dbuser }}
kutt_dbpass={{ kutt_dbpass }}
kutt_mail_user={{ kutt_mail_user }}
kutt_mail_host={{ kutt_mail_host }}
kutt_mail_password={{ kutt_mail_password }}
kutt_report_email={{ kutt_report_email }}
kutt_admin_emails={{ kutt_admin_emails }}
kutt_redis_version={{ kutt_redis_version }}
kutt_postgres_version={{ kutt_postgres_version }}
kutt_postgres_volume={{ kutt_postgres_volume }}

View File

@@ -1,46 +0,0 @@
version: "3.7"
services:
kutt:
image: kutt/kutt:${kutt_version}
depends_on:
- postgres
- redis
command: ["./wait-for-it.sh", "postgres:5432", "--", "npm", "start"]
ports:
- ${kutt_web}:3000
environment:
SITE_NAME: ${kutt_domain}
DEFAULT_DOMAIN: ${kutt_default_domain}
JWT_SECRET: ${kutt_jwt_secret}
DB_HOST: postgres
DB_NAME: ${kutt_dbname}
DB_USER: ${kutt_dbuser}
DB_PASSWORD: ${kutt_dbpass}
REDIS_HOST: redis
MAIL_USER: ${kutt_mail_user}
MAIL_HOST: ${kutt_mail_host}
MAIL_PORT: ${kutt_mail_port}
MAIL_PASSWORD: ${kutt_mail_password}
REPORT_EMAIL: ${kutt_report_email}
ADMIN_EMAILS: ${kutt_admin_emails}
redis:
image: redis:${kutt_redis_version}
volumes:
- {{ kutt_redis_volume }}:/data
postgres:
image: postgres:${kutt_postgres_version}
environment:
POSTGRES_USER: ${kutt_dbuser}
POSTGRES_PASSWORD: ${kutt_dbpass}
POSTGRES_DB: ${kutt_dbname}
volumes:
- {{ kutt_postgres_volume }}:/var/lib/postgresql/data
volumes:
{{ kutt_redis_volume }}:
{{ kutt_postgres_volume }}:

View File

@@ -1,15 +1,15 @@
- name: Install QEMU/KVM
apt:
ansible.builtin.apt:
name: qemu-kvm
state: present
- name: Install Libvirt
apt:
ansible.builtin.apt:
name: ["libvirt-clients", "libvirt-daemon-system"]
state: present
- name: Add users to libvirt group
user:
ansible.builtin.user:
name: "{{ item }}"
groups: libvirt
append: yes
@@ -17,12 +17,12 @@
when: libvirt_users is defined
- name: Check for NODOWNLOAD file
stat:
ansible.builtin.stat:
path: /var/lib/libvirt/images/NODOWNLOAD
register: NODOWNLOAD
- name: Download GNU/Linux ISOs
get_url:
ansible.builtin.get_url:
url: "{{ item.url }}"
dest: /var/lib/libvirt/images
checksum: "{{ item.hash }}"
@@ -34,7 +34,7 @@
# Prevent downloaded ISOs from being rehashed every run
- name: Create NODOWNLOAD file
file:
ansible.builtin.file:
path: /var/lib/libvirt/images/NODOWNLOAD
state: touch
when: download_isos.changed

View File

@@ -1,3 +0,0 @@
mariadb_trust:
- "172.16.0.0/12"
- "192.168.0.0/16"

View File

@@ -0,0 +1,5 @@
- name: Restart MariaDB
  ansible.builtin.service:
    name: mariadb
    state: restarted
  listen: restart_mariadb

View File

@@ -1,25 +1,22 @@
- name: Install MariaDB
apt:
ansible.builtin.apt:
name: mariadb-server
state: present
- name: Change the bind-address to allow Docker
lineinfile:
- name: Regather facts for the potentially new docker0 interface
ansible.builtin.setup:
- name: Change the bind-address to allow from docker0
ansible.builtin.lineinfile:
path: /etc/mysql/mariadb.conf.d/50-server.cnf
regex: "^bind-address"
line: "bind-address = 0.0.0.0"
register: mariadb_conf
line: "bind-address = {{ ansible_facts.docker0.ipv4.address }}"
notify: restart_mariadb
- name: Restart MariaDB
service:
name: mariadb
state: restarted
when: mariadb_conf.changed
- name: Allow database connections
ufw:
- name: Allow database connections from Docker
community.general.ufw:
rule: allow
port: "3306"
proto: tcp
src: "{{ item }}"
loop: "{{ mariadb_trust }}"
loop: "{{ mariadb_trust | default(['172.16.0.0/12']) }}"

View File

@@ -1,28 +1,28 @@
- name: Install GPG
apt:
ansible.builtin.apt:
name: gpg
state: present
- name: Add AdoptOpenJDK's signing key
apt_key:
ansible.builtin.apt_key:
id: 8ED17AF5D7E675EB3EE3BCE98AC3B29174885C03
url: https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public
- name: Install AdoptOpenJDK repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/ buster main
mode: 0644
state: present
- name: Install Java
apt:
ansible.builtin.apt:
name: "adoptopenjdk-{{ item.java.version }}-hotspot"
state: present
when: item.java.version is defined
loop: "{{ minecraft }}"
- name: "Install default Java, version {{ minecraft_java }}"
apt:
ansible.builtin.apt:
name: "{{ minecraft_java_pkg }}"
state: present
when: item.java.version is not defined
@@ -30,7 +30,7 @@
register: minecraft_java_default
- name: "Activate default Java, version {{ minecraft_java }}"
alternatives:
community.general.alternatives:
name: java
path: "/usr/lib/jvm/{{ minecraft_java_pkg }}-amd64/bin/java"
when: minecraft_java_default.changed

View File

@@ -1,14 +1,14 @@
- import_tasks: system.yml
- ansible.builtin.import_tasks: system.yml
when: minecraft_eula
- import_tasks: java.yml
- ansible.builtin.import_tasks: java.yml
when: minecraft_eula
- import_tasks: vanilla.yml
- ansible.builtin.import_tasks: vanilla.yml
when: minecraft_eula
- import_tasks: modpacks.yml
- ansible.builtin.import_tasks: modpacks.yml
when: minecraft_eula
- import_tasks: service.yml
- ansible.builtin.import_tasks: service.yml
when: minecraft_eula

View File

@@ -1,5 +1,5 @@
- name: Download Minecraft modpack installer
get_url:
ansible.builtin.get_url:
url: "{{ minecraft_modpack_url }}"
dest: "{{ minecraft_home }}/{{ item.name }}/serverinstall_{{ item.modpack | replace ('/', '_') }}"
owner: "{{ minecraft_user }}"
@@ -9,7 +9,7 @@
when: item.modpack is defined and item.sha1 is not defined
- name: Run Minecraft modpack installer
command: "sudo -u {{ minecraft_user }} ./serverinstall_{{ item.modpack | replace ('/', '_') }} --auto"
ansible.builtin.command: "sudo -u {{ minecraft_user }} ./serverinstall_{{ item.modpack | replace ('/', '_') }} --auto"
args:
creates: "{{ minecraft_home }}/{{ item.name }}/mods"
chdir: "{{ minecraft_home }}/{{ item.name }}"
@@ -17,7 +17,7 @@
when: item.modpack is defined and item.sha1 is not defined
- name: Find Minecraft Forge
find:
ansible.builtin.find:
paths: "{{ minecraft_home }}/{{ item.name }}"
patterns: "forge*.jar"
register: minecraft_forge
@@ -25,7 +25,7 @@
when: item.modpack is defined and item.sha1 is not defined
- name: Link to Minecraft Forge
file:
ansible.builtin.file:
src: "{{ item.files[0].path }}"
dest: "{{ minecraft_home }}/{{ item.item.name }}/minecraft_server.jar"
owner: "{{ minecraft_user }}"

View File

@@ -1,11 +1,11 @@
- name: Deploy Minecraft systemd service
template:
ansible.builtin.template:
src: minecraft.service.j2
dest: "/etc/systemd/system/minecraft@.service"
register: minecraft_systemd
- name: Deploy service environmental variables
template:
ansible.builtin.template:
src: environment.conf.j2
dest: "{{ minecraft_home }}/{{ item.name }}/environment.conf"
owner: "{{ minecraft_user }}"
@@ -13,25 +13,25 @@
loop: "{{ minecraft }}"
- name: Reload systemd manager configuration
systemd:
ansible.builtin.systemd:
daemon_reload: true
when: minecraft_systemd.changed
- name: Disable non-default service instances
service:
ansible.builtin.service:
name: "minecraft@{{ item.name }}"
enabled: false
loop: "{{ minecraft }}"
when: item.name != minecraft_onboot
- name: Enable default service instance
service:
ansible.builtin.service:
name: "minecraft@{{ minecraft_onboot }}"
enabled: true
when: minecraft_eula and minecraft_onboot is defined
- name: Run default service instance
service:
ansible.builtin.service:
name: "minecraft@{{ minecraft_onboot }}"
state: started
when: minecraft_eula and minecraft_onboot is defined and minecraft_onboot_run

View File

@@ -1,16 +1,16 @@
- name: Install Screen
apt:
ansible.builtin.apt:
name: screen
state: present
- name: Create Minecraft user
user:
ansible.builtin.user:
name: "{{ minecraft_user }}"
state: present
shell: /bin/bash
ansible.builtin.shell: /bin/bash
- name: Create Minecraft directory
file:
ansible.builtin.file:
path: "{{ minecraft_home }}/{{ item.name }}"
state: directory
owner: "{{ minecraft_user }}"
@@ -18,7 +18,7 @@
loop: "{{ minecraft }}"
- name: Answer to Mojang's EULA
template:
ansible.builtin.template:
src: eula.txt.j2
dest: "{{ minecraft_home }}/{{ item.name }}/eula.txt"
owner: "{{ minecraft_user }}"

View File

@@ -1,5 +1,5 @@
- name: Download Minecraft
get_url:
ansible.builtin.get_url:
url: "{{ minecraft_url }}"
dest: "{{ minecraft_home }}/{{ item.name }}/minecraft_server.jar"
checksum: "sha1:{{ item.sha1 }}"

View File

@@ -1,9 +1,9 @@
- name: Create Nextcloud network
docker_network:
community.general.docker_network:
name: "{{ nextcloud_container }}"
- name: Start Nextcloud's database container
docker_container:
community.general.docker_container:
name: "{{ nextcloud_dbcontainer }}"
image: mariadb:{{ nextcloud_dbversion }}
state: started
@@ -19,7 +19,7 @@
MYSQL_PASSWORD: "{{ nextcloud_dbpass }}"
- name: Start Nextcloud container
docker_container:
community.general.docker_container:
name: "{{ nextcloud_container }}"
image: nextcloud:{{ nextcloud_version }}
state: started
@@ -29,6 +29,8 @@
networks:
- name: "{{ nextcloud_container }}"
- name: traefik
env:
PHP_MEMORY_LIMIT: 1024M
labels:
traefik.http.routers.nextcloud.rule: "Host(`{{ nextcloud_domain }}`)"
traefik.http.routers.nextcloud.entrypoints: websecure
@@ -41,34 +43,34 @@
traefik.enable: "true"
- name: Grab Nextcloud database container information
docker_container_info:
community.general.docker_container_info:
name: "{{ nextcloud_dbcontainer }}"
register: nextcloud_dbinfo
- name: Grab Nextcloud container information
docker_container_info:
community.general.docker_container_info:
name: "{{ nextcloud_container }}"
register: nextcloud_info
- name: Wait for Nextcloud to become available
wait_for:
ansible.builtin.wait_for:
host: "{{ nextcloud_info.container.NetworkSettings.Networks.traefik.IPAddress }}"
port: 80
- name: Check Nextcloud status
command: "docker exec --user www-data {{ nextcloud_container }}
ansible.builtin.command: "docker exec --user www-data {{ nextcloud_container }}
php occ status"
register: nextcloud_status
args:
removes: "{{ nextcloud_root }}/config/CAN_INSTALL"
- name: Wait for Nextcloud database to become available
wait_for:
ansible.builtin.wait_for:
host: "{{ nextcloud_dbinfo.container.NetworkSettings.Networks.nextcloud.IPAddress }}"
port: 3306
- name: Install Nextcloud
command: 'docker exec --user www-data {{ nextcloud_container }}
ansible.builtin.command: 'docker exec --user www-data {{ nextcloud_container }}
php occ maintenance:install
--database "mysql"
--database-host "{{ nextcloud_dbcontainer }}"
@@ -83,19 +85,19 @@
- nextcloud_domain is defined
- name: Set Nextcloud's Trusted Proxy
command: 'docker exec --user www-data {{ nextcloud_container }}
ansible.builtin.command: 'docker exec --user www-data {{ nextcloud_container }}
php occ config:system:set trusted_proxies 0
--value="{{ traefik_name }}"'
when: nextcloud_install.changed
- name: Set Nextcloud's Trusted Domain
command: 'docker exec --user www-data {{ nextcloud_container }}
ansible.builtin.command: 'docker exec --user www-data {{ nextcloud_container }}
php occ config:system:set trusted_domains 0
--value="{{ nextcloud_domain }}"'
when: nextcloud_install.changed
- name: Perform Nextcloud database maintenance
command: "docker exec --user www-data {{ nextcloud_container }} {{ item }}"
ansible.builtin.command: "docker exec --user www-data {{ nextcloud_container }} {{ item }}"
loop:
- "php occ maintenance:mode --on"
- "php occ db:add-missing-indices"
@@ -103,7 +105,14 @@
- "php occ maintenance:mode --off"
when: nextcloud_install.changed
- name: Install Nextcloud background jobs cron
ansible.builtin.cron:
name: Nextcloud background job
minute: "*/5"
job: "/usr/bin/docker exec -u www-data nextcloud /usr/local/bin/php -f /var/www/html/cron.php"
user: root
- name: Remove Nextcloud's CAN_INSTALL file
file:
ansible.builtin.file:
path: "{{ nextcloud_root }}/config/CAN_INSTALL"
state: absent
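The Nextcloud tasks above reference a handful of role variables; a hedged host_vars sketch using those names (all values illustrative, secrets belong in Vault):

nextcloud_container: nextcloud
nextcloud_dbcontainer: nextcloud-db
nextcloud_version: "27"                  # nextcloud image tag, example only
nextcloud_dbversion: "10.11"             # mariadb image tag, example only
nextcloud_domain: cloud.example.com
nextcloud_root: /opt/nextcloud           # bind mount holding config/CAN_INSTALL
nextcloud_dbpass: "{{ vault_nextcloud_dbpass }}"   # assumed vaulted variable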

View File

@@ -1,15 +1,15 @@
- name: Create nginx root
file:
ansible.builtin.file:
path: "{{ nginx_root }}"
state: directory
- name: Generate deploy keys
openssh_keypair:
community.crypto.openssh_keypair:
path: "{{ nginx_repo_key }}"
state: present
- name: Clone static website files
git:
ansible.builtin.git:
repo: "{{ nginx_repo_url }}"
dest: "{{ nginx_html }}"
version: "{{ nginx_repo_branch }}"
@@ -17,7 +17,7 @@
separate_git_dir: "{{ nginx_repo_dest }}"
- name: Start nginx container
docker_container:
community.general.docker_container:
name: "{{ nginx_name }}"
image: nginx:{{ nginx_version }}
state: started

View File

@@ -1,10 +1,10 @@
- name: Install PostgreSQL
apt:
ansible.builtin.apt:
name: postgresql
state: present
- name: Trust connections to PostgreSQL
postgresql_pg_hba:
community.general.postgresql_pg_hba:
dest: "{{ postgresql_config }}"
contype: host
databases: all
@@ -15,7 +15,7 @@
loop: "{{ postgresql_trust }}"
- name: Change PostgreSQL listen addresses
postgresql_set:
community.general.postgresql_set:
name: listen_addresses
value: "{{ postgresql_listen }}"
become: true
@@ -23,19 +23,19 @@
register: postgresql_config
- name: Reload PostgreSQL
service:
ansible.builtin.service:
name: postgresql
state: reloaded
when: postgresql_hba.changed and not postgresql_config.changed
- name: Restart PostgreSQL
service:
ansible.builtin.service:
name: postgresql
state: restarted
when: postgresql_config.changed
- name: Allow database connections
ufw:
community.general.ufw:
rule: allow
port: "5432"
proto: tcp
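The pg_hba and listen tasks rely on `postgresql_trust`, `postgresql_listen`, and `postgresql_config`; the shape of each trust entry is not visible in this hunk, so the `address` key below is a guess and all values are placeholders:

postgresql_config: /etc/postgresql/15/main/pg_hba.conf   # assumed path; match the installed major version
postgresql_listen: "*"                                    # or a specific interface address
postgresql_trust:
  - address: 10.0.0.0/24        # each entry becomes a trusted host line in pg_hba.conf
  - address: 192.168.1.50/32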

View File

@@ -1,35 +1,35 @@
- name: Install Prometheus node exporter
apt:
ansible.builtin.apt:
name: prometheus-node-exporter
state: present
- name: Run Prometheus node exporter
service:
ansible.builtin.service:
name: prometheus-node-exporter
state: started
- name: Create Prometheus data directory
file:
ansible.builtin.file:
path: "{{ prom_root }}/prometheus"
state: directory
owner: nobody
- name: Create Prometheus config directory
file:
ansible.builtin.file:
path: "{{ prom_root }}/config"
state: directory
- name: Install Prometheus configuration
template:
ansible.builtin.template:
src: prometheus.yml.j2
dest: "{{ prom_root }}/config/prometheus.yml"
- name: Create Prometheus network
docker_network:
community.general.docker_network:
name: "{{ prom_name }}"
- name: Start Prometheus container
docker_container:
community.general.docker_container:
name: "{{ prom_name }}"
image: prom/prometheus:{{ prom_version }}
state: started
@@ -51,7 +51,7 @@
traefik.enable: "true"
- name: Start Grafana container
docker_container:
community.general.docker_container:
name: "{{ grafana_name }}"
image: grafana/grafana:{{ grafana_version }}
state: started
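For reference, a rough host_vars sketch for this monitoring role; the variable names are taken from the tasks, the values are examples only:

prom_name: prometheus
prom_root: /opt/prometheus
prom_version: v2.48.0            # prom/prometheus tag, illustrative
grafana_name: grafana
grafana_version: "10.2.2"        # grafana/grafana tag, illustrative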

View File

@@ -1,5 +1,5 @@
- name: Reload nginx
service:
ansible.builtin.service:
name: nginx
state: reloaded
listen: reload_nginx

View File

@@ -1,47 +1,48 @@
- name: Install nginx
apt:
ansible.builtin.apt:
name: nginx
state: present
update_cache: true
- name: Start nginx and enable on boot
service:
ansible.builtin.service:
name: nginx
state: started
enabled: true
- name: Generate DH Parameters
openssl_dhparam:
community.crypto.openssl_dhparam:
path: /etc/ssl/dhparams.pem
size: 4096
- name: Install nginx base configuration
template:
ansible.builtin.template:
src: nginx.conf.j2
dest: /etc/nginx/nginx.conf
mode: '0644'
mode: 0644
notify: reload_nginx
- name: Install nginx sites configuration
template:
ansible.builtin.template:
src: server-nginx.conf.j2
dest: "/etc/nginx/sites-available/{{ item.domain }}.conf"
mode: '0644'
mode: 0400
loop: "{{ proxy.servers }}"
notify: reload_nginx
register: nginx_sites
- name: Enable nginx sites configuration
file:
ansible.builtin.file:
src: "/etc/nginx/sites-available/{{ item.item.domain }}.conf"
dest: "/etc/nginx/sites-enabled/{{ item.item.domain }}.conf"
state: link
mode: 0400
loop: "{{ nginx_sites.results }}"
when: item.changed
notify: reload_nginx
- name: Generate self-signed certificate
shell: 'openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes \
ansible.builtin.command: 'openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes \
-subj "/C=US/ST=Local/L=Local/O=Org/OU=IT/CN=example.com" \
-keyout /etc/ssl/private/nginx-selfsigned.key \
-out /etc/ssl/certs/nginx-selfsigned.crt'
@@ -51,33 +52,34 @@
notify: reload_nginx
- name: Install LE's certbot
apt:
ansible.builtin.apt:
name: ['certbot', 'python3-certbot-dns-cloudflare']
state: present
when: proxy.production is defined and proxy.production
- name: Install Cloudflare API token
template:
ansible.builtin.template:
src: cloudflare.ini.j2
dest: /root/.cloudflare.ini
mode: '0600'
mode: 0400
when: proxy.production is defined and proxy.production and proxy.dns_cloudflare is defined
- name: Create nginx post renewal hook directory
file:
ansible.builtin.file:
path: /etc/letsencrypt/renewal-hooks/post
state: directory
mode: 0500
when: proxy.production is defined and proxy.production
- name: Install nginx post renewal hook
copy:
ansible.builtin.copy:
src: reload-nginx.sh
dest: /etc/letsencrypt/renewal-hooks/post/reload-nginx.sh
mode: '0755'
when: proxy.production is defined and proxy.production
- name: Run Cloudflare DNS-01 challenges on wildcard domains
shell: '/usr/bin/certbot certonly \
ansible.builtin.shell: '/usr/bin/certbot certonly \
--non-interactive \
--agree-tos \
--email "{{ proxy.dns_cloudflare.email }}" \
@@ -93,7 +95,7 @@
notify: reload_nginx
- name: Add HTTP and HTTPS firewall rule
ufw:
community.general.ufw:
rule: allow
port: "{{ item }}"
proto: tcp

View File

@@ -46,6 +46,12 @@ server {
proxy_pass {{ item.proxy_pass }};
{% if item.proxy_ssl_verify is defined and item.proxy_ssl_verify is false %}
proxy_ssl_verify off;
{% endif %}
{% if item.websockets is defined and item.websockets %}
proxy_http_version 1.1;
proxy_set_header Connection $http_connection;
proxy_set_header Origin http://$host;
proxy_set_header Upgrade $http_upgrade;
{% endif %}
}
}
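Both the proxy tasks and this server template iterate over `proxy.servers`; a hedged sketch of the structure they imply (domains, upstreams, and flags are all illustrative):

proxy:
  production: false                   # gates the certbot/Cloudflare tasks
  # dns_cloudflare: { email: admin@example.com }   # only needed for the DNS-01 wildcard task
  servers:
    - domain: app.example.com
      proxy_pass: http://127.0.0.1:8080
      websockets: true                # emits the Upgrade/Connection headers shown above
    - domain: intranet.example.com
      proxy_pass: https://10.0.0.5
      proxy_ssl_verify: false         # turns off upstream certificate verification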

View File

@@ -13,12 +13,12 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
- name: Install rsnapshot
apt:
ansible.builtin.apt:
name: rsnapshot
state: present
- name: Create rsnapshot system directories
file:
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
@@ -26,19 +26,19 @@
- "{{ rsnapshot_logdir }}"
- name: Create snapshot_root directories
file:
ansible.builtin.file:
path: "{{ item.root | default(rsnapshot_root) }}"
state: directory
loop: "{{ rsnapshot }}"
- name: Install rsnapshot configuration
template:
ansible.builtin.template:
src: rsnapshot.conf.j2
dest: "{{ rsnapshot_confdir }}/{{ item.name }}.conf"
loop: "{{ rsnapshot }}"
- name: Install rsnapshot crons
cron:
ansible.builtin.cron:
name: "{{ item.1.interval }} rsnapshot of {{ item.0.name }}"
job: "/usr/bin/rsnapshot -c {{ rsnapshot_confdir }}/{{ item.0.name }}.conf {{ item.1.interval }} >/dev/null"
user: "root"
@@ -53,13 +53,13 @@
- cron
- name: Install rsnapshot report script
template:
ansible.builtin.template:
src: rsnapshot-report.sh.j2
dest: /usr/local/bin/rsnapshot-report
mode: '0750'
- name: Install rsnapshot report crons
cron:
ansible.builtin.cron:
name: "{{ item.name }} rsnapshot report email"
job: "/usr/local/bin/rsnapshot-report {{ rsnapshot_reportlog }}
| mail -s '{{ item.report.subject | default('Backup Report') }}' {{ item.report.to }}"
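The rsnapshot tasks loop over an `rsnapshot` list, and the cron task pairs each entry with its intervals (`item.0`/`item.1`); the key holding those intervals is not shown in this diff, so `intervals` below is an assumption and every value is illustrative:

rsnapshot_confdir: /etc/rsnapshot.d     # assumed config directory
rsnapshot_root: /srv/backups            # fallback snapshot_root when item.root is unset
rsnapshot:
  - name: nightly-host1
    root: /srv/backups/host1            # optional override
    intervals:                          # assumed subelements key
      - interval: daily
      - interval: weekly
    report:
      subject: "host1 backup report"
      to: admin@example.com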

View File

@@ -1,12 +1,18 @@
# Container settings
traefik_name: traefik
traefik_dashboard: false
traefik_root: "/opt/{{ traefik_name }}"
traefik_standalone: true
traefik_http_only: false
traefik_debug: false
traefik_web_entry: "127.0.0.1:8000"
traefik_websecure_entry: "127.0.0.1:8443"
traefik_localonly: "10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 127.0.0.0/8"
# HTTPS settings
traefik_production: false
traefik_hsts_enable: false
traefik_hsts_preload: false
traefik_hsts_seconds: 0
traefik_http_redirect: false
traefik_ports:
- "80:80"
- "443:443"
traefik_http_redirect: true
# Host settings
traefik_root: "{{ docker_compose_root }}/{{ traefik_name }}"
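A host that should expose Traefik directly (rather than via the loopback entrypoints) could override these defaults in host_vars; a hedged sketch with placeholder values:

traefik_version: "2.10"                  # pinned image tag, illustrative
traefik_domain: traefik.example.com
traefik_dashboard: true
traefik_web_entry: "0.0.0.0:80"          # bind to the host instead of 127.0.0.1:8000
traefik_websecure_entry: "0.0.0.0:443"
traefik_production: true
traefik_http_redirect: true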

View File

@@ -1,14 +1,12 @@
- name: Reload Traefik container
file:
ansible.builtin.file:
path: "{{ traefik_root }}/config/dynamic"
state: touch
mode: 0500
listen: reload_traefik
- name: Restart Traefik container
docker_container:
name: "{{ traefik_name }}"
image: traefik:{{ traefik_version }}
state: started
container_default_behavior: "no_defaults"
restart: yes
- name: Restart Traefik
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ traefik_name }}"
state: restarted
listen: restart_traefik

View File

@@ -1,56 +1,49 @@
- name: Create Traefik configuration directories
file:
- name: Create Traefik directories
ansible.builtin.file:
path: "{{ traefik_root }}/config/dynamic"
mode: 0500
state: directory
- name: Install static Traefik configuration
template:
src: traefik.yml.j2
dest: "{{ traefik_root }}/config/traefik.yml"
notify: restart_traefik
- name: Install dynamic security configuration
template:
ansible.builtin.template:
src: security.yml.j2
dest: "{{ traefik_root }}/config/dynamic/security.yml"
owner: root
group: root
mode: 0600
mode: 0400
notify: reload_traefik
- name: Install dynamic non-docker configuration
template:
ansible.builtin.template:
src: "external.yml.j2"
dest: "{{ traefik_root }}/config/dynamic/{{ item.name }}.yml"
mode: 0400
loop: "{{ traefik_external }}"
when: traefik_external is defined
- name: Create Traefik network
docker_network:
name: traefik
- name: Install Traefik's docker-compose file
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ traefik_root }}/docker-compose.yml"
mode: 0400
notify: restart_traefik
- name: Start Traefik container
docker_container:
name: "{{ traefik_name }}"
image: traefik:{{ traefik_version }}
- name: Install Traefik's docker-compose variables
ansible.builtin.template:
src: compose-env.j2
dest: "{{ traefik_root }}/.env"
mode: 0400
notify: restart_traefik
- name: Install static Traefik configuration
ansible.builtin.template:
src: traefik.yml.j2
dest: "{{ traefik_root }}/config/traefik.yml"
mode: 0400
notify: restart_traefik
- name: Start and enable Traefik service
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ traefik_name }}"
state: started
restart_policy: always
ports: "{{ traefik_ports }}"
container_default_behavior: "no_defaults"
networks_cli_compatible: "false"
networks:
- name: traefik
labels:
traefik.http.routers.traefik.rule: "Host(`{{ traefik_domain }}`)"
#traefik.http.middlewares.auth.basicauth.users: "{{ traefik_auth }}"
#traefik.http.middlewares.localonly.ipwhitelist.sourcerange: "{{ traefik_localonly }}"
#traefik.http.routers.traefik.tls.certresolver: letsencrypt
#traefik.http.routers.traefik.middlewares: "securehttps@file,auth@docker,localonly"
traefik.http.routers.traefik.service: "api@internal"
traefik.http.routers.traefik.entrypoints: websecure
traefik.http.routers.traefik.tls: "true"
traefik.docker.network: traefik
traefik.enable: "{{ traefik_dashboard | string }}"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- "{{ traefik_root }}/config:/etc/traefik"
enabled: true
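Container lifecycle is now delegated to a templated systemd unit named by `docker_compose_service`; assuming that variable is something like `docker-compose` (its real value lives in the docker role), the unit managed here would be an instance such as docker-compose@traefik:

# host_vars/group_vars sketch; the value is an assumption, not taken from the repository
docker_compose_service: docker-compose    # tasks and handlers act on {{ docker_compose_service }}@traefik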

View File

@@ -0,0 +1,8 @@
# {{ ansible_managed }}
traefik_version={{ traefik_version }}
traefik_name={{ traefik_name }}
traefik_domain={{ traefik_domain }}
traefik_dashboard={{ traefik_dashboard | string | lower }}
traefik_debug={{ traefik_debug | string | lower }}
traefik_web_entry={{ traefik_web_entry }}
traefik_websecure_entry={{ traefik_websecure_entry }}

View File

@@ -0,0 +1,25 @@
version: '3.7'
networks:
traefik:
name: traefik
services:
traefik:
image: "traefik:${traefik_version}"
container_name: "${traefik_name}"
ports:
- "${traefik_web_entry}:80"
{% if traefik_standalone and not traefik_http_only %}
- "${traefik_websecure_entry}:443"
{% endif %}
networks:
- traefik
labels:
- "traefik.http.routers.traefik.rule=Host(`{{ traefik_domain }}`)"
- "traefik.http.routers.traefik.service=api@internal"
- "traefik.docker.network=traefik"
- "traefik.enable=${traefik_dashboard}"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- "{{ traefik_root }}/config:/etc/traefik"

View File

@@ -10,7 +10,7 @@ providers:
entrypoints:
web:
address: ':80'
{% if traefik_http_redirect is defined and traefik_http_redirect %}
{% if traefik_http_redirect is defined and traefik_http_redirect and not traefik_http_only %}
http:
redirections:
entrypoint:
@@ -18,10 +18,12 @@ entrypoints:
scheme: https
permanent: true
{% endif %}
{% if not traefik_http_only is defined or not traefik_http_only %}
websecure:
address: ':443'
http:
tls: {}
{% endif %}
{% if traefik_acme_email is defined %}
certificatesResolvers:
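Assuming `traefik_http_only` and `traefik_http_redirect` both evaluate to false, the entrypoints block above renders roughly as follows (shown only to illustrate the conditionals):

entrypoints:
  web:
    address: ':80'
  websecure:
    address: ':443'
    http:
      tls: {}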

View File

@@ -1,52 +1,52 @@
- name: Install GnuPG
apt:
ansible.builtin.apt:
name: gnupg
state: present
- name: Add AdoptOpenJDK's signing key
apt_key:
ansible.builtin.apt_key:
id: 8ED17AF5D7E675EB3EE3BCE98AC3B29174885C03
url: https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public
- name: Add MongoDB 3.6's signing key
apt_key:
ansible.builtin.apt_key:
id: 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
url: https://www.mongodb.org/static/pgp/server-3.6.asc
- name: Add UniFi's signing key
apt_key:
ansible.builtin.apt_key:
id: 4A228B2D358A5094178285BE06E85760C0A52C50
keyserver: keyserver.ubuntu.com
- name: Install AdoptOpenJDK repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/ buster main
mode: 0644
state: present
- name: Install MongoDB 3.6 repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/3.6 main
mode: 0644
state: present
- name: Install UniFi repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb https://www.ui.com/downloads/unifi/debian stable ubiquiti
mode: 0644
state: present
- name: Install MongoDB 3.6
apt:
ansible.builtin.apt:
name: mongodb-org
state: present
- name: Install OpenJDK 8 LTS
apt:
ansible.builtin.apt:
name: adoptopenjdk-8-hotspot
state: present
- name: Install UniFi
apt:
ansible.builtin.apt:
name: unifi
state: present

View File

@@ -1,5 +1,5 @@
- name: Start WordPress database container
docker_container:
community.general.docker_container:
name: "{{ wordpress_dbcontainer }}"
image: mariadb:{{ wordpress_dbversion }}
restart_policy: always
@@ -11,7 +11,7 @@
MYSQL_PASSWORD: "{{ wordpress_dbpass }}"
- name: Start WordPress container
docker_container:
community.general.docker_container:
name: "{{ wordpress_container }}"
image: wordpress:{{ wordpress_version }}
restart_policy: always
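A hedged host_vars sketch for the WordPress role, using the variable names from the tasks above (values are examples; keep the password vaulted):

wordpress_container: wordpress
wordpress_dbcontainer: wordpress-db
wordpress_version: "6.4"          # wordpress image tag, illustrative
wordpress_dbversion: "10.11"      # mariadb image tag, illustrative
wordpress_dbpass: "{{ vault_wordpress_dbpass }}"   # assumed vaulted variable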

View File

@@ -1,42 +0,0 @@
#!/bin/bash
COMMENT="Project Moxie"
DOMAIN="vm.krislamo.org"
HOST[0]="traefik.${DOMAIN}"
HOST[1]="cloud.${DOMAIN}"
HOST[2]="git.${DOMAIN}"
HOST[3]="jenkins.${DOMAIN}"
HOST[4]="prom.${DOMAIN}"
HOST[5]="grafana.${DOMAIN}"
HOST[6]="nginx.${DOMAIN}"
HOST[7]="vault.${DOMAIN}"
HOST[8]="wordpress.${DOMAIN}"
HOST[9]="site1.wordpress.${DOMAIN}"
HOST[10]="site2.wordpress.${DOMAIN}"
HOST[11]="unifi.${DOMAIN}"
HOST[11]="kutt.${DOMAIN}"
# Get Vagrantbox guest IP
VAGRANT_OUTPUT=$(vagrant ssh -c "hostname -I | cut -d' ' -f2" 2>/dev/null)
# Remove ^M from the end
[ ${#VAGRANT_OUTPUT} -gt 1 ] && IP=${VAGRANT_OUTPUT::-1}
echo "Purging project addresses from /etc/hosts"
sudo sed -i "s/# $COMMENT//g" /etc/hosts
for address in "${HOST[@]}"; do
sudo sed -i "/$address/d" /etc/hosts
done
# Remove trailing newline
sudo sed -i '${/^$/d}' /etc/hosts
if [ -n "$IP" ]; then
echo -e "Adding new addresses...\n"
echo -e "# $COMMENT" | sudo tee -a /etc/hosts
for address in "${HOST[@]}"; do
echo -e "$IP\t$address" | sudo tee -a /etc/hosts
done
else
echo "Cannot find address. Is the Vagrant box running?"
fi