Compare commits


51 Commits
kutt...main

Author SHA1 Message Date
3102c621f0
Add optional IP restriction for nginx site configs 2024-10-19 21:08:15 -04:00
e3f03edf3f
Use file-based preshared keys for WireGuard
- Include proxy role in standard Docker playbook
2024-10-13 22:27:27 -04:00
f481a965dd
Update Samba and WireGuard configuration
- Adjust Samba config file permissions to 644
- Introduce PresharedKey option in WireGuard config template
2024-09-10 22:35:20 -04:00
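Taken together, the two WireGuard commits above let a peer's preshared key be set inline through a `presharedkey` variable or, with the newer file-based approach, read from a per-peer file at `/etc/wireguard/presharedkey-<peer name>` on the host. A minimal sketch of the host_vars consumed by the template shown later in this diff (values are illustrative):
```
wireguard:
  address: 10.100.0.1/24
  listenport: 51820
  peers:
    - name: laptop                      # also selects /etc/wireguard/presharedkey-laptop
      publickey: BASE64_PEER_PUBLIC_KEY
      endpoint: peer.example.org:51820  # optional
      # presharedkey: BASE64_PSK        # optional inline alternative
```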
a0aa289c05
Restrict GitHub Actions to a dedicated branch
- The Vagrant testing setup on macos-latest is broken
- Temporary measure until fixed or abandoned
2024-09-10 22:11:31 -04:00
324fe0b191
Upgrade Nextcloud setup to use compose files
- Integrated MariaDB role into Dockerbox configuration
- Moved proxy role to the end to avoid early endpoint activation
- Temporarily disabled select roles for future re-evaluation
- Introduced flush_handlers task for early MariaDB restart
- Moved a few Nextcloud tasks to handlers
- Configured Nextcloud to utilize the host's MariaDB instance
- Enhanced overall code linting quality
2024-04-21 22:27:48 -04:00
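The flush_handlers step mentioned above is Ansible's standard meta task; a minimal sketch of forcing the `restart_mariadb` handler (defined later in this diff) to run before dependent containers start:
```
- name: Apply pending handler notifications early (e.g. restart_mariadb)
  ansible.builtin.meta: flush_handlers
```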
6fbd3c53bb
Add Vagrant cache option for dhparams.pem 2024-03-26 21:51:39 -04:00
01e8e22c01
Prevent running 'vagrant ssh' as root
Resolve possible issues with 'vagrant ssh' when executed as root
2024-03-04 23:42:40 -05:00
a31bf233dc
Slight message tweaks in forward-ssh.sh script 2023-12-09 13:16:46 -05:00
60fafed9cd
Update forward-ssh.sh script for Swarm support
- Address limitations in Swarm with loopback binding
- Ensure compatibility with localhost DNS wildcard A record
- Enable port forwarding on 80 and 443 using VM IP for Swarm compatibility
- Retain 8443:localhost:8443 for non-Swarm setups
2023-12-09 13:04:07 -05:00
2c00858590
Update README.md 2023-11-18 17:37:27 -05:00
be80681485
Add multi-machine support to forward-ssh.sh
- Detects multiple private keys
- Adds validation for all discovered keys
- Defaults to "default" machine, with override via the first parameter
2023-11-05 21:37:33 -05:00
a2e60972c7
Comply with linting on proxy setup 2023-11-05 21:34:19 -05:00
598359854f
Update proxy role to comply with linting 2023-11-03 00:47:06 -04:00
ef812c1877
Add copyright notice on forward-ssh.sh 2023-11-03 00:12:12 -04:00
385e60aee5
Update proxy playbook 2023-11-02 23:29:54 -04:00
5633468f41
Fix linting issues on Docker role 2023-10-22 13:48:20 -04:00
7f91b24adb
Add Debian/Official Docker repo toggle
- Default docker_official toggle to false (for now)
- Preempt MariaDB restart before container restarts
- Start containers in a handler
2023-10-22 11:33:05 -04:00
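The toggle itself is a single host variable read by the docker role (see the roles/docker defaults and tasks later in this diff), for example:
```
# Debian repositories (docker.io / docker-compose), the current default
docker_official: false
# or Docker's own apt repository (docker-ce and the compose plugin)
#docker_official: true
```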
5b09029239
Update base role to pass linting 2023-10-20 21:30:25 -04:00
7adb5f10e9
Update Gitea role for docker_compose_deploy
- Add MariaDB to dev playbook
- Set Git user in "users:"
- Define Gitea external compose project
- Forward SSH port in forwarding script
- Create user groups with system users
- Install python3-pymysql for Ansible
- Strip old Gitea deployment methods
- Bind MariaDB to docker0 for Docker access
2023-10-20 15:41:44 -04:00
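The `users:` structure mentioned above is the dictionary the base role now iterates to create groups and accounts; the Gitea host_vars later in this diff define it roughly as:
```
users:
  git:
    uid: 1001
    gid: 1001
    home: true    # create a home directory
    system: true  # system account
```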
c3b4321667
Add Gitea dev playbook and host_vars 2023-10-19 16:40:34 -04:00
d05c5d3086
Slight tweaks on Ansible output 2023-10-19 16:36:05 -04:00
ac412f16ef
Simplify the "Import GPG keys" loop 2023-10-19 14:09:10 -04:00
2354a8fb8c
Verify successful GPG imports 2023-10-19 13:37:35 -04:00
251a7c0dd5
Import PGP key and verify git commits 2023-10-19 02:56:36 -04:00
1d8ae8a0b6
Install ntpsec 2023-10-19 01:27:31 -04:00
6b2feaee5e
Hide docker-compose secrets from diff output 2023-10-18 23:03:52 -04:00
31e0538b84
Add locale configuration tasks to base role 2023-10-18 16:32:09 -04:00
a65c4b9cf6
Handle Ansible undefined loop variable
- Default docker_compose_deploy to empty list if undefined
- Add conditional check to avoid looping through an empty list
2023-10-10 00:14:52 -04:00
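The fix is the usual default-filter-plus-guard pattern; the same idiom appears in the docker role later in this diff, roughly:
```
loop: "{{ docker_compose_deploy | default([]) }}"
when: docker_compose_deploy is defined and docker_compose_deploy | length > 0
```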
7ee6e4810d
Convert booleans to lowercase 2023-10-10 00:00:00 -04:00
87aa7ecf8b
Add external compose support in the docker role
- Use ansible.posix.synchronize for compose.yml
- Set fact for compose service restarts
- Introduce plain Docker dev host
- Optionally verify repos via GPG before sync
- Hide docker_repos_path in .folder
- Tweak .env for conciseness
- Add --diff to Ansible in Vagrantfile
- Clean output with loop_control
- Embed GPG in base role
2023-10-09 23:47:49 -04:00
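The new `docker_compose_deploy` variable drives all of the above; a representative entry, mirroring the dev host_vars later in this diff (the commit hash and key fingerprint are placeholders):
```
docker_compose_deploy:
  - name: traefik
    url: https://github.com/krislamo/traefik
    version: GIT_COMMIT_SHA       # revision to check out
    enabled: true                 # start and enable the compose systemd unit
    trusted_keys:
      - GPG_KEY_FINGERPRINT       # verify signed commits before syncing
    env:
      ENABLE: true                # rendered into the project's .env
```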
0377a5e642
Add option for private OCI registry auth 2023-09-29 22:18:59 -04:00
2e02efcbb7
Add Makefile, roles_path, and SSH tunnel variable 2023-09-26 21:14:06 -04:00
8fed63792b
Ask permission for starting vagrant SSH tunnels 2023-09-16 00:04:58 -04:00
2c4fcbacc3
Introduce forward-ssh.sh method & reorganize
- Abandoned update-hosts.sh in favor of loopback SSH forwarding
- Adopted *.local.krislamo.org as a wildcard loopback domain
- Bound Traefik to ports 443/80 on Dockerbox dev
- Removed outdated Gitea config from Dockerbox
- Relocated production playbooks to a new directory
2023-09-15 23:46:45 -04:00
b81372c07a
Fix the Vagrantfile for Github runners 2023-08-30 19:45:42 -04:00
9b5be29a1a
Update Vagrantfile to use external settings 2023-08-21 18:46:47 -04:00
ef5aacdbbd
No deploy keys without compose deploy variable 2023-07-21 23:52:18 -04:00
a635c7aa48
Add option to deploy external docker-compose stack 2023-07-20 03:51:44 -04:00
56aee460ad
Limit Github actions to specific branches 2023-07-20 00:33:42 -04:00
027ba46f6b
Add Github actions and remove old ansible stuff 2023-07-08 23:43:52 -04:00
48216db8f9
Updated Nextcloud settings and added cron job 2023-06-18 23:52:10 -04:00
fa1dc4acb7
Fix WireGuard firewall rule 2023-06-15 03:09:13 -04:00
228cd5795b
Config adjustments for Jellyfin/Samba deployment
- Ignored .vscode
- Added firewall exclusion option
- Allowed guest access in Samba
2023-06-09 22:26:47 -04:00
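In the Samba host_vars these adjustments show up as optional `guest_allow` and `firewall` keys; a minimal sketch based on the mediaserver host_vars and smb.conf template later in this diff:
```
samba:
  users:
    - name: jellyfin
      password: CHANGE_ME
  shares:
    - name: media
      path: /srv/jellyfin
      owner: jellyfin
      group: jellyfin
      guest_allow: true     # the new guest-access toggle
  firewall:                 # networks allowed to reach port 445
    - 192.168.0.0/16
```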
74a559f1f6
Update mediaserver playbook and fix Wireguard task 2023-06-08 03:47:54 -04:00
4c2a1550c4
Adding samba and general user management 2023-06-07 02:12:17 -04:00
f02cf7b0cc
Refactor docker playbook
- Removed copyright notice
- Variablize 'hosts' value in the playbook
- Install Jenkins agent before running Docker role
2023-05-08 16:26:16 -04:00
9142254a57
Improvements for ansible-linting 2023-05-04 01:44:18 -04:00
dfd93dd5f8
Updated Ansible tasks to FQCN format 2023-05-03 23:42:55 -04:00
81d2ea447a
Add mediaserver, rm .gitignore, FQCN, Jellyfin
- Added development "mediaserver" playbook for testing
- rm .gitignore in roles dir since no external ansible roles are used
- Update a part of the base role to use FQCN for linting
- Added "jellyfin" role to install Jellyfin with docker-compose
- Updated Traefik to use the loopback for default web entry points
- Simplified Traefik docker-compose vars, Ansible sets defaults
2023-04-26 02:26:50 -04:00
9512212b84
Refactor Traefik deploy: docker-compose + systemd
- Replace docker_container ansible with new setup
- Add option to disable HTTPS for alternate reverse proxy use
2023-04-21 03:04:53 -04:00
c67a39982e
Option to enable websockets for the noVNC console 2022-12-06 00:15:10 -05:00
101 changed files with 1528 additions and 808 deletions

40
.github/workflows/vagrant.yml vendored Normal file

@ -0,0 +1,40 @@
name: homelab-ci
on:
push:
branches:
- github_actions
# - main
# - testing
jobs:
homelab-ci:
runs-on: macos-latest
steps:
- uses: actions/checkout@v3
- name: Cache Vagrant boxes
uses: actions/cache@v3
with:
path: ~/.vagrant.d/boxes
key: ${{ runner.os }}-vagrant-${{ hashFiles('Vagrantfile') }}
restore-keys: |
${{ runner.os }}-vagrant-
- name: Install Ansible
run: brew install ansible@7
- name: Software Versions
run: |
printf "VirtualBox "
vboxmanage --version
vagrant --version
export PATH="/usr/local/opt/ansible@7/bin:$PATH"
ansible --version
- name: Vagrant Up with Dockerbox Playbook
run: |
export PATH="/usr/local/opt/ansible@7/bin:$PATH"
PLAYBOOK=dockerbox vagrant up
vagrant ssh -c "docker ps"

15
.gitignore vendored

@ -1,13 +1,4 @@
.vagrant
.playbook
/*.yml
/*.yaml
!backup.yml
!moxie.yml
!docker.yml
!dockerbox.yml
!hypervisor.yml
!minecraft.yml
!proxy.yml
!unifi.yml
/environments/
.vagrant*
.vscode
/environments/

10
Makefile Normal file

@ -0,0 +1,10 @@
.PHONY: clean install
all: install
install:
vagrant up --no-destroy-on-error
sudo ./forward-ssh.sh
clean:
vagrant destroy -f && rm -rf .vagrant

View File

@ -1,41 +1,76 @@
# Project Moxie
# Homelab
Project Moxie is a personal IT homelab project written in Ansible and executed by Jenkins. It is a growing collection of infrastructure as code (IaC) I write out of curiosity and for reference purposes, keeping a handful of beneficial projects managed and secured.
This project is my personal IT homelab initiative for self-hosting and
exploring Free and Open Source Software (FOSS) infrastructure. As a technology
enthusiast and professional, I use it primarily as a practical tool for
hosting services. It serves as a playground for engaging with systems
technology in functional, intriguing, and gratifying ways. Self-hosting
empowers individuals to govern their digital space, ensuring that their online
environments reflect personal ethics rather than centralized entities' opaque
policies.
Built on Debian Stable, this project utilizes Ansible and Vagrant, providing
relatively easy-to-use reproducible ephemeral environments to test
infrastructure automation before pushing to live systems.
## Quick Start
To configure a local virtual machine for testing, follow these simple steps.
### Prerequisites
Vagrant and VirtualBox are used to develop Project Moxie. You will need to install these before continuing.
### Installation
1. Clone this repository
```
git clone https://github.com/krislamo/moxie
git clone https://git.krislamo.org/kris/homelab
```
Optionally clone from the GitHub mirror instead:
```
git clone https://github.com/krislamo/homelab
```
2. Set the `PLAYBOOK` environment variable to a development playbook name in the `dev/` directory
The following `PLAYBOOK` names are available: `dockerbox`, `hypervisor`, `minecraft`, `bitwarden`, `nextcloud`, `nginx`
To list available options in the `dev/` directory and choose a suitable PLAYBOOK, run:
```
ls dev/*.yml | xargs -n 1 basename -s .yml
```
Export the `PLAYBOOK` variable
```
export PLAYBOOK=dockerbox
```
3. Bring the Vagrant box up
3. Clean up any previous provision and build the VM
```
vagrant up
make clean && make
```
#### Copyright and License
Copyright (C) 2020-2021 Kris Lamoureux
## Vagrant Settings
The Vagrantfile configures the environment based on settings from `.vagrant.yml`,
with default values including:
- PLAYBOOK: `default`
- Runs a `default` playbook that does nothing.
- You can also set this via an environment variable of the same name.
- VAGRANT_BOX: `debian/bookworm64`
- Current Debian Stable codename
- VAGRANT_CPUS: `2`
- Threads or cores per node, depending on CPU architecture
- VAGRANT_MEM: `2048`
- Specifies the amount of memory (in MB) allocated
- SSH_FORWARD: `false`
- Enable this if you need to forward SSH agents to the Vagrant machine
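An example `.vagrant.yml` overriding these defaults (illustrative values):
```
PLAYBOOK: dockerbox
VAGRANT_BOX: debian/bookworm64
VAGRANT_CPUS: 4
VAGRANT_MEM: 4096
SSH_FORWARD: false
```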
## Copyright and License
Copyright (C) 2019-2023 Kris Lamoureux
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
This program is free software: you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation, version 3 of the License.
This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 of the License.
This program is distributed in the hope that it will be useful, but WITHOUT ANY
WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
PARTICULAR PURPOSE. See the GNU General Public License for more details.
This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
You should have received a copy of the GNU General Public License along with
this program. If not, see <https://www.gnu.org/licenses/>.

50
Vagrantfile vendored

@ -1,43 +1,41 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :
SSH_FORWARD=ENV["SSH_FORWARD"]
if !(SSH_FORWARD == "true")
SSH_FORWARD = false
require 'yaml'
settings_path = '.vagrant.yml'
settings = {}
if File.exist?(settings_path)
settings = YAML.load_file(settings_path)
end
PLAYBOOK=ENV["PLAYBOOK"]
if !PLAYBOOK
if File.exist?('.playbook')
PLAYBOOK = IO.read('.playbook').split("\n")[0]
end
VAGRANT_BOX = settings['VAGRANT_BOX'] || 'debian/bookworm64'
VAGRANT_CPUS = settings['VAGRANT_CPUS'] || 2
VAGRANT_MEM = settings['VAGRANT_MEM'] || 2048
SSH_FORWARD = settings['SSH_FORWARD'] || false
if !PLAYBOOK || PLAYBOOK.empty?
PLAYBOOK = "\nERROR: Set env PLAYBOOK"
end
else
File.write(".playbook", PLAYBOOK)
# Default to shell environment variable: PLAYBOOK (priority #1)
PLAYBOOK=ENV["PLAYBOOK"]
if !PLAYBOOK || PLAYBOOK.empty?
# PLAYBOOK setting in .vagrant.yml (priority #2)
PLAYBOOK = settings['PLAYBOOK'] || 'default'
end
Vagrant.configure("2") do |config|
config.vm.box = "debian/bullseye64"
config.vm.box = VAGRANT_BOX
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder ".", "/vagrant", disabled: true
config.vm.synced_folder "./scratch", "/vagrant/scratch"
config.ssh.forward_agent = SSH_FORWARD
# Machine Name
config.vm.define :moxie do |moxie| #
end
# Libvirt provider
config.vm.provider :libvirt do |libvirt|
libvirt.cpus = 2
libvirt.memory = 4096
libvirt.default_prefix = ""
libvirt.cpus = VAGRANT_CPUS
libvirt.memory = VAGRANT_MEM
end
config.vm.provider "virtualbox" do |vbox|
vbox.memory = 4096
# Virtualbox provider
config.vm.provider :virtualbox do |vbox|
vbox.cpus = VAGRANT_CPUS
vbox.memory = VAGRANT_MEM
end
# Provision with Ansible
@ -45,6 +43,6 @@ Vagrant.configure("2") do |config|
ENV['ANSIBLE_ROLES_PATH'] = File.dirname(__FILE__) + "/roles"
ansible.compatibility_mode = "2.0"
ansible.playbook = "dev/" + PLAYBOOK + ".yml"
ansible.raw_arguments = ["--diff"]
end
end

View File

@ -1,6 +1,7 @@
[defaults]
inventory = ./environments/development
interpreter_python = /usr/bin/python3
roles_path = ./roles
[connection]
pipelining = true

4
dev/default.yml Normal file

@ -0,0 +1,4 @@
- name: Install 'default' aka nothing
hosts: all
become: true
tasks: []

8
dev/docker.yml Normal file

@ -0,0 +1,8 @@
- name: Install Docker Server
hosts: all
become: true
vars_files:
- host_vars/docker.yml
roles:
- base
- docker

View File

@ -1,4 +1,4 @@
- name: Install Docker Box Server
- name: Install Dockerbox Server
hosts: all
become: true
vars_files:
@ -6,9 +6,7 @@
roles:
- base
- docker
- mariadb
- traefik
- nextcloud
- gitea
- jenkins
- prometheus
- nginx
- proxy

10
dev/gitea.yml Normal file

@ -0,0 +1,10 @@
- name: Install Gitea Server
hosts: all
become: true
vars_files:
- host_vars/gitea.yml
roles:
- base
- docker
- mariadb
- gitea

View File

@ -9,14 +9,14 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
# bitwarden
# Get Installation ID & Key at https://bitwarden.com/host/
bitwarden_domain: vault.vm.krislamo.org
bitwarden_domain: vault.local.krislamo.org
bitwarden_dbpass: password
bitwarden_install_id: 4ea840a3-532e-4cb6-a472-abd900728b23
bitwarden_install_key: 1yB3Z2gRI0KnnH90C6p

48
dev/host_vars/docker.yml Normal file

@ -0,0 +1,48 @@
# base
allow_reboot: false
manage_network: false
# Import my GPG key for git signature verification
root_gpgkeys:
- name: kris@lamoureux.io
id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925
# docker
docker_users:
- vagrant
#docker_login_url: https://myregistry.example.com
#docker_login_user: myuser
#docker_login_pass: YOUR_PASSWD
docker_compose_env_nolog: false # dev only setting
docker_compose_deploy:
# Traefik
- name: traefik
url: https://github.com/krislamo/traefik
version: 31ee724feebc1d5f91cb17ffd6892c352537f194
enabled: true
accept_newhostkey: true # Consider verifying manually instead
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
ENABLE: true
# Traefik 2 (no other external compose to test currently)
- name: traefik2
url: https://github.com/krislamo/traefik
version: 31ee724feebc1d5f91cb17ffd6892c352537f194
enabled: true
accept_newhostkey: true # Consider verifying manually instead
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
ENABLE: true
VERSION: "2.10"
DOMAIN: traefik2.local.krislamo.org
NAME: traefik2
ROUTER: traefik2
NETWORK: traefik2
WEB_PORT: 127.0.0.1:8000:80
WEBSECURE_PORT: 127.0.0.1:4443:443
LOCAL_PORT: 127.0.0.1:8444:8443

View File

@ -2,47 +2,47 @@
allow_reboot: false
manage_network: false
# Import my GPG key for git signature verification
root_gpgkeys:
- name: kris@lamoureux.io
id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925
# proxy
proxy:
servers:
- domain: cloud.local.krislamo.org
proxy_pass: http://127.0.0.1:8000
# docker
docker_official: true # docker's apt repos
docker_users:
- vagrant
docker_compose_env_nolog: false # dev only setting
docker_compose_deploy:
# Traefik
- name: traefik
url: https://github.com/krislamo/traefik
version: d62bd06b37ecf0993962b0449a9d708373f9e381
enabled: true
accept_newhostkey: true # Consider verifying manually instead
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
DASHBOARD: true
# Nextcloud
- name: nextcloud
url: https://github.com/krislamo/nextcloud
version: fe6d349749f178e91ae7ff726d557f48ebf84356
env:
DATA: ./data
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
traefik:
ENABLE: true
# nextcloud
nextcloud_version: stable
nextcloud_admin: admin
nextcloud_pass: password
nextcloud_domain: cloud.vm.krislamo.org
nextcloud_dbversion: latest
nextcloud_dbpass: password
# gitea
gitea_domain: git.vm.krislamo.org
gitea_version: 1
gitea_dbversion: latest
gitea_dbpass: password
# jenkins
jenkins_version: lts
jenkins_domain: jenkins.vm.krislamo.org
# prometheus (includes grafana)
prom_version: latest
prom_domain: prom.vm.krislamo.org
grafana_version: latest
grafana_domain: grafana.vm.krislamo.org
prom_targets: "['10.0.2.15:9100']"
# nginx
nginx_domain: nginx.vm.krislamo.org
nginx_name: staticsite
nginx_repo_url: https://git.krislamo.org/kris/example-website/
nginx_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
nginx_version: latest
nextcloud:
DOMAIN: cloud.local.krislamo.org
DB_PASSWD: password
ADMIN_PASSWD: password

50
dev/host_vars/gitea.yml Normal file

@ -0,0 +1,50 @@
# base
allow_reboot: false
manage_network: false
users:
git:
uid: 1001
gid: 1001
home: true
system: true
# Import my GPG key for git signature verification
root_gpgkeys:
- name: kris@lamoureux.io
id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925
# docker
docker_official: true # docker's apt repos
docker_users:
- vagrant
docker_compose_env_nolog: false # dev only setting
docker_compose_deploy:
# Traefik
- name: traefik
url: https://github.com/krislamo/traefik
version: 398eb48d311db78b86abf783f903af4a1658d773
enabled: true
accept_newhostkey: true
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
ENABLE: true
# Gitea
- name: gitea
url: https://github.com/krislamo/gitea
version: b0ce66f6a1ab074172eed79eeeb36d7e9011ef8f
enabled: true
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
USER_UID: "{{ users.git.uid }}"
USER_GID: "{{ users.git.gid }}"
DB_PASSWD: "{{ gitea.DB_PASSWD }}"
# gitea
gitea:
DB_NAME: gitea
DB_USER: gitea
DB_PASSWD: password

View File

@ -0,0 +1,61 @@
base_domain: local.krislamo.org
# base
allow_reboot: false
manage_network: false
users:
jellyfin:
uid: 1001
gid: 1001
shell: /usr/sbin/nologin
home: false
system: true
samba:
users:
- name: jellyfin
password: jellyfin
shares:
- name: jellyfin
path: /srv/jellyfin
owner: jellyfin
group: jellyfin
valid_users: jellyfin
firewall:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
# proxy
proxy:
#production: true
dns_cloudflare:
opts: --test-cert
#email: realemail@example.com
#api_token: CLOUDFLARE_DNS01_API_TOKEN
wildcard_domains:
- "{{ base_domain }}"
servers:
- domain: "{{ traefik_domain }}"
proxy_pass: "http://127.0.0.1:8000"
- domain: "{{ jellyfin_domain }}"
proxy_pass: "http://127.0.0.1:8000"
# docker
docker_users:
- vagrant
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: "traefik.{{ base_domain }}"
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
traefik_http_only: true # if behind reverse-proxy
# jellyfin
jellyfin_domain: "jellyfin.{{ base_domain }}"
jellyfin_version: latest
jellyfin_media: /srv/jellyfin

View File

@ -5,14 +5,14 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
# container settings
nextcloud_version: stable
nextcloud_admin: admin
nextcloud_pass: password
nextcloud_domain: cloud.vm.krislamo.org
nextcloud_domain: cloud.local.krislamo.org
# database settings
nextcloud_dbversion: latest

View File

@ -9,13 +9,13 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
# nginx
nginx_domain: nginx.vm.krislamo.org
nginx_domain: nginx.local.krislamo.org
nginx_name: staticsite
nginx_repo_url: https://git.krislamo.org/kris/example-website/
nginx_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin

View File

@ -1,9 +1,21 @@
base_domain: vm.krislamo.org
base_domain: local.krislamo.org
# base
allow_reboot: false
manage_network: false
users:
git:
uid: 1001
gid: 1001
home: true
system: true
# Import my GPG key for git signature verification
root_gpgkeys:
- name: kris@lamoureux.io
id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925
# proxy
proxy:
#production: true
@ -15,16 +27,49 @@ proxy:
- "{{ base_domain }}"
servers:
- domain: "{{ bitwarden_domain }}"
proxy_pass: "http://127.0.0.1:8080"
proxy_pass: "http://127.0.0.1"
- domain: "{{ gitea_domain }}"
proxy_pass: "http://127.0.0.1:3000"
- domain: "{{ kutt_domain }}"
proxy_pass: "http://127.0.0.1:3030"
proxy_pass: "http://127.0.0.1"
# docker
docker_official: true # docker's apt repos
docker_users:
- vagrant
docker_compose_env_nolog: false # dev only setting
docker_compose_deploy:
# Traefik
- name: traefik
url: https://github.com/krislamo/traefik
version: e97db75e2e214582fac5f5e495687ab5cdf855ad
path: docker-compose.web.yml
enabled: true
accept_newhostkey: true
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
ENABLE: true
# Gitea
- name: gitea
url: https://github.com/krislamo/gitea
version: b0ce66f6a1ab074172eed79eeeb36d7e9011ef8f
enabled: true
trusted_keys:
- FBF673CEEC030F8AECA814E73EDA9C3441EDA925
env:
ENTRYPOINT: web
ENABLE_TLS: false
USER_UID: "{{ users.git.uid }}"
USER_GID: "{{ users.git.gid }}"
DB_PASSWD: "{{ gitea.DB_PASSWD }}"
# gitea
gitea_domain: "git.{{ base_domain }}"
gitea:
DB_NAME: gitea
DB_USER: gitea
DB_PASSWD: password
# bitwarden
# Get Installation ID & Key at https://bitwarden.com/host/
bitwarden_domain: "vault.{{ base_domain }}"
@ -32,21 +77,3 @@ bitwarden_dbpass: password
bitwarden_install_id: 4ea840a3-532e-4cb6-a472-abd900728b23
bitwarden_install_key: 1yB3Z2gRI0KnnH90C6p
#bitwarden_prodution: true
# gitea
gitea_domain: "git.{{ base_domain }}"
gitea_version: 1
gitea_dbpass: password
# kutt
kutt_version: latest
kutt_redis_version: 6
kutt_postgres_version: 12
kutt_domain: "kutt.{{ base_domain }}"
kutt_dbpass: password
kutt_jwt_secret: long&random
kutt_mail_user: kutt-noreply@example.com
kutt_mail_host: smtp.example.com
kutt_mail_password: realpassword
kutt_report_email: realemail@example.com
kutt_admin_emails: realemail@example.com

View File

@ -9,14 +9,14 @@ docker_users:
# traefik
traefik_version: latest
traefik_dashboard: true
traefik_domain: traefik.vm.krislamo.org
traefik_domain: traefik.local.krislamo.org
traefik_auth: admin:$apr1$T1l.BCFz$Jyg8msXYEAUi3LLH39I9d1 # admin:admin
#traefik_acme_email: realemail@example.com # Let's Encrypt settings
#traefik_production: true
# container settings
wordpress_version: latest
wordpress_domain: wordpress.vm.krislamo.org
wordpress_domain: wordpress.local.krislamo.org
wordpress_multisite: true
# database settings

11
dev/mediaserver.yml Normal file

@ -0,0 +1,11 @@
- name: Install Media Server
hosts: all
become: true
vars_files:
- host_vars/mediaserver.yml
roles:
- base
- proxy
- docker
- traefik
- jellyfin

View File

@ -5,9 +5,8 @@
- host_vars/proxy.yml
roles:
- base
- mariadb
- proxy
- docker
- mariadb
- gitea
- bitwarden
- kutt

View File

@ -1,21 +0,0 @@
# Copyright (C) 2020 Kris Lamoureux
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
- name: Install Docker Server
hosts: dockerhosts
become: true
roles:
- base
- docker
- jenkins

View File

@ -1,25 +0,0 @@
# Copyright (C) 2020 Kris Lamoureux
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
- name: Install Docker Box Server
hosts: dockerhosts
become: true
roles:
- base
- docker
- traefik
- nextcloud
- jenkins
- prometheus
- nginx

125
forward-ssh.sh Executable file

@ -0,0 +1,125 @@
#!/bin/bash
# Finds the SSH private key under ./.vagrant and connects to
# the Vagrant box, port forwarding localhost ports: 8443, 443, 80, 22
#
# Download the latest script:
# https://git.krislamo.org/kris/homelab/raw/branch/main/forward-ssh.sh
#
# Copyright (C) 2023 Kris Lamoureux
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
# Root check
if [ "$EUID" -ne 0 ]; then
echo "[ERROR]: Please run this script as root"
exit 1
fi
# Clean environment
unset PRIVATE_KEY
unset HOST_IP
unset MATCH_PATTERN
unset PKILL_ANSWER
# Function to create the SSH tunnel
function ssh_connect {
read -rp "Start a new vagrant SSH tunnel? [y/N] " PSTART_ANSWER
echo
case "$PSTART_ANSWER" in
[yY])
printf "[INFO]: Starting new vagrant SSH tunnel on PID "
sudo -u "$USER" ssh -fNT -i "$PRIVATE_KEY" \
-L 22:localhost:22 \
-L 80:"$HOST_IP":80 \
-L 443:"$HOST_IP":443 \
-L 8443:localhost:8443 \
-o UserKnownHostsFile=/dev/null \
-o StrictHostKeyChecking=no \
vagrant@"$HOST_IP" 2>/dev/null
sleep 2
pgrep -f "$MATCH_PATTERN"
;;
*)
echo "[INFO]: Declined to start a new vagrant SSH tunnel"
exit 0
;;
esac
}
# Check for valid PRIVATE_KEY location
PRIVATE_KEY="$(find .vagrant -name "private_key" 2>/dev/null | sort)"
# Single vagrant machine or multiple
if [ "$(echo "$PRIVATE_KEY" | wc -l)" -gt 1 ]; then
while IFS= read -r KEYFILE; do
if ! ssh-keygen -l -f "$KEYFILE" &>/dev/null; then
echo "[ERROR]: The SSH key '$KEYFILE' is not valid. Are your virtual machines running?"
exit 1
fi
echo "[CHECK]: Valid key at $KEYFILE"
done < <(echo "$PRIVATE_KEY")
PRIVATE_KEY="$(echo "$PRIVATE_KEY" | grep -m1 "${1:-default}")"
elif ! ssh-keygen -l -f "$PRIVATE_KEY" &>/dev/null; then
echo "[ERROR]: The SSH key '$PRIVATE_KEY' is not valid. Is your virtual machine running?"
exit 1
else
echo "[CHECK]: Valid key at $PRIVATE_KEY"
fi
# Grab first IP or use whatever HOST_IP_FIELD is set to and check that the guest is up
HOST_IP="$(sudo -u "$SUDO_USER" vagrant ssh -c "hostname -I | cut -d' ' -f${HOST_IP_FIELD:-1}" "${1:-default}" 2>/dev/null)"
if [ -z "$HOST_IP" ]; then
echo "[ERROR]: Failed to find ${1:-default}'s IP"
exit 1
fi
HOST_IP="${HOST_IP::-1}" # trim
if ! ping -c 1 "$HOST_IP" &>/dev/null; then
echo "[ERROR]: Cannot ping the host IP '$HOST_IP'"
exit 1
fi
echo "[CHECK]: Host at $HOST_IP (${1:-default}) is up"
# Pattern for matching processes running
MATCH_PATTERN="ssh -fNT -i ${PRIVATE_KEY}.*vagrant@"
# Check amount of processes that match the pattern
if [ "$(pgrep -afc "$MATCH_PATTERN")" -eq 0 ]; then
ssh_connect
else
# Processes found, so prompt to kill remaining ones then start tunnel
printf "\n[WARNING]: Found processes running:\n"
pgrep -fa "$MATCH_PATTERN"
printf '\n'
read -rp "Would you like to kill these processes? [y/N] " PKILL_ANSWER
echo
case "$PKILL_ANSWER" in
[yY])
echo "[WARNING]: Killing old vagrant SSH tunnel(s): "
pgrep -f "$MATCH_PATTERN" | tee >(xargs kill -15)
echo
if [ "$(pgrep -afc "$MATCH_PATTERN")" -eq 0 ]; then
ssh_connect
else
echo "[ERROR]: Unable to kill processes:"
pgrep -f "$MATCH_PATTERN"
exit 1
fi
;;
*)
echo "[INFO]: Declined to kill existing processes"
exit 0
;;
esac
fi

8
playbooks/docker.yml Normal file

@ -0,0 +1,8 @@
- name: Install Docker Server
hosts: "{{ PLAYBOOK_HOST | default('none') }}"
become: true
roles:
- base
- jenkins
- proxy
- docker

11
playbooks/dockerbox.yml Normal file

@ -0,0 +1,11 @@
- name: Install Dockerbox Server
hosts: "{{ PLAYBOOK_HOST | default('none') }}"
become: true
roles:
- base
- docker
- traefik
- nextcloud
- jenkins
- prometheus
- nginx

10
playbooks/mediaserver.yml Normal file

@ -0,0 +1,10 @@
- name: Install Media Server
hosts: "{{ PLAYBOOK_HOST | default('none') }}"
become: true
roles:
- base
- jenkins
- proxy
- docker
- traefik
- jellyfin

23
roles/.gitignore vendored

@ -1,23 +0,0 @@
# Sort roles: tail -n +6 roles/.gitignore | sort
/*
!.gitignore
!requirements.yml
# roles
!base*/
!bitwarden*/
!docker*/
!gitea*/
!jenkins*/
!kutt*/
!libvirt*/
!mariadb*/
!minecraft*/
!nextcloud*/
!nginx*/
!postgresql*/
!prometheus*/
!proxy*/
!rsnapshot*/
!traefik*/
!unifi*/
!wordpress*/

View File

@ -1,6 +1,8 @@
allow_reboot: true
manage_firewall: true
manage_network: false
network_type: static
allow_reboot: true
locale_default: en_US.UTF-8
packages:
- apache2-utils

View File

@ -1,18 +1,34 @@
- name: Reboot host
reboot:
ansible.builtin.reboot:
msg: "Reboot initiated by Ansible"
connect_timeout: 5
listen: reboot_host
when: allow_reboot
- name: Reconfigure locales
ansible.builtin.command: dpkg-reconfigure -f noninteractive locales
listen: reconfigure_locales
- name: Restart WireGuard
service:
ansible.builtin.service:
name: wg-quick@wg0
state: restarted
listen: restart_wireguard
- name: Restart Fail2ban
service:
ansible.builtin.service:
name: fail2ban
state: restarted
listen: restart_fail2ban
- name: Restart ddclient
ansible.builtin.service:
name: ddclient
state: restarted
listen: restart_ddclient
- name: Restart Samba
ansible.builtin.service:
name: smbd
state: restarted
listen: restart_samba

View File

@ -1,23 +1,5 @@
- name: 'Install Ansible dependency: python3-apt'
shell: 'apt-get update && apt-get install python3-apt -y'
args:
creates: /usr/lib/python3/dist-packages/apt
warn: false
- name: Install additional Ansible dependencies
apt:
name: "{{ item }}"
state: present
force_apt_get: true
update_cache: true
loop:
- aptitude
- python3-docker
- python3-pymysql
- python3-psycopg2
- name: Create Ansible's temporary remote directory
file:
ansible.builtin.file:
path: "~/.ansible/tmp"
state: directory
mode: 0700
mode: "700"

View File

@ -1,22 +1,17 @@
- name: Install ddclient
apt:
ansible.builtin.apt:
name: ddclient
state: present
- name: Install ddclient settings
template:
ansible.builtin.template:
src: ddclient.conf.j2
dest: /etc/ddclient.conf
mode: "600"
register: ddclient_settings
- name: Start ddclient and enable on boot
service:
ansible.builtin.service:
name: ddclient
state: started
enabled: true
- name: Restart ddclient
service:
name: ddclient
state: restarted
when: ddclient_settings.changed

View File

@ -1,46 +1,48 @@
- name: Install the Uncomplicated Firewall
apt:
ansible.builtin.apt:
name: ufw
state: present
- name: Install Fail2ban
apt:
ansible.builtin.apt:
name: fail2ban
state: present
- name: Deny incoming traffic by default
ufw:
community.general.ufw:
default: deny
direction: incoming
- name: Allow outgoing traffic by default
ufw:
community.general.ufw:
default: allow
direction: outgoing
- name: Allow OpenSSH with rate limiting
ufw:
community.general.ufw:
name: ssh
rule: limit
- name: Remove Fail2ban defaults-debian.conf
file:
ansible.builtin.file:
path: /etc/fail2ban/jail.d/defaults-debian.conf
state: absent
- name: Install OpenSSH's Fail2ban jail
template:
ansible.builtin.template:
src: fail2ban-ssh.conf.j2
dest: /etc/fail2ban/jail.d/sshd.conf
mode: "640"
notify: restart_fail2ban
- name: Install Fail2ban IP allow list
template:
ansible.builtin.template:
src: fail2ban-allowlist.conf.j2
dest: /etc/fail2ban/jail.d/allowlist.conf
mode: "640"
when: fail2ban_ignoreip is defined
notify: restart_fail2ban
- name: Enable firewall
ufw:
community.general.ufw:
state: enabled

View File

@ -1,5 +1,5 @@
- name: Install msmtp
apt:
ansible.builtin.apt:
name: "{{ item }}"
state: present
loop:
@ -8,12 +8,13 @@
- mailutils
- name: Install msmtp configuration
template:
ansible.builtin.template:
src: msmtprc.j2
dest: /root/.msmtprc
mode: 0700
mode: "600"
- name: Install /etc/aliases
copy:
ansible.builtin.copy:
dest: /etc/aliases
content: "root: {{ mail.rootalias }}"
mode: "644"

View File

@ -1,24 +1,37 @@
- import_tasks: ansible.yml
- name: Import Ansible tasks
ansible.builtin.import_tasks: ansible.yml
tags: ansible
- import_tasks: system.yml
- name: Import System tasks
ansible.builtin.import_tasks: system.yml
tags: system
- import_tasks: firewall.yml
- name: Import Firewall tasks
ansible.builtin.import_tasks: firewall.yml
tags: firewall
when: manage_firewall
- import_tasks: network.yml
- name: Import Network tasks
ansible.builtin.import_tasks: network.yml
tags: network
when: manage_network
- import_tasks: mail.yml
- name: Import Mail tasks
ansible.builtin.import_tasks: mail.yml
tags: mail
when: mail is defined
- import_tasks: ddclient.yml
- name: Import ddclient tasks
ansible.builtin.import_tasks: ddclient.yml
tags: ddclient
when: ddclient is defined
- import_tasks: wireguard.yml
- name: Import WireGuard tasks
ansible.builtin.import_tasks: wireguard.yml
tags: wireguard
when: wireguard is defined
- name: Import Samba tasks
ansible.builtin.import_tasks: samba.yml
tags: samba
when: samba is defined

View File

@ -1,5 +1,5 @@
- name: Install network interfaces file
copy:
ansible.builtin.copy:
src: network-interfaces.cfg
dest: /etc/network/interfaces
owner: root
@ -7,8 +7,9 @@
mode: '0644'
- name: Install network interfaces
template:
ansible.builtin.template:
src: "interface.j2"
dest: "/etc/network/interfaces.d/{{ item.name }}"
mode: "400"
loop: "{{ interfaces }}"
notify: reboot_host

View File

@ -0,0 +1,46 @@
- name: Install Samba
ansible.builtin.apt:
name: samba
state: present
- name: Create Samba users
ansible.builtin.command: "smbpasswd -a {{ item.name }}"
args:
stdin: "{{ item.password }}\n{{ item.password }}"
loop: "{{ samba.users }}"
loop_control:
label: "{{ item.name }}"
register: samba_users
changed_when: "'Added user' in samba_users.stdout"
- name: Ensure share directories exist
ansible.builtin.file:
path: "{{ item.path }}"
owner: "{{ item.owner }}"
group: "{{ item.group }}"
state: directory
mode: "755"
loop: "{{ samba.shares }}"
- name: Configure Samba shares
ansible.builtin.template:
src: smb.conf.j2
dest: /etc/samba/smb.conf
mode: "644"
notify: restart_samba
- name: Start smbd and enable on boot
ansible.builtin.service:
name: smbd
state: started
enabled: true
- name: Allow SMB connections
community.general.ufw:
rule: allow
port: 445
proto: tcp
from: "{{ item }}"
state: enabled
loop: "{{ samba.firewall }}"
when: manage_firewall

View File

@ -1,17 +1,105 @@
- name: Install useful software
apt:
ansible.builtin.apt:
name: "{{ packages }}"
state: present
update_cache: true
- name: Install GPG
ansible.builtin.apt:
name: gpg
state: present
- name: Check for existing GPG keys
ansible.builtin.command: "gpg --list-keys {{ item.id }} 2>/dev/null"
register: gpg_check
loop: "{{ root_gpgkeys }}"
failed_when: false
changed_when: false
when: root_gpgkeys is defined
- name: Import GPG keys
ansible.builtin.command:
"gpg --keyserver {{ item.item.server | default('keys.openpgp.org') }} --recv-key {{ item.item.id }}"
register: gpg_check_import
loop: "{{ gpg_check.results }}"
loop_control:
label: "{{ item.item }}"
changed_when: false
when: root_gpgkeys is defined and item.rc != 0
- name: Check GPG key imports
ansible.builtin.fail:
msg: "{{ item.stderr }}"
loop: "{{ gpg_check_import.results }}"
loop_control:
label: "{{ item.item.item }}"
when: root_gpgkeys is defined and (not item.skipped | default(false)) and ('imported' not in item.stderr)
- name: Install NTPsec
ansible.builtin.apt:
name: ntpsec
state: present
- name: Install locales
ansible.builtin.apt:
name: locales
state: present
- name: Generate locale
community.general.locale_gen:
name: "{{ locale_default }}"
state: present
notify: reconfigure_locales
- name: Set the default locale
ansible.builtin.lineinfile:
path: /etc/default/locale
regexp: "^LANG="
line: "LANG={{ locale_default }}"
- name: Manage root authorized_keys
template:
ansible.builtin.template:
src: authorized_keys.j2
dest: /root/.ssh/authorized_keys
mode: "400"
when: authorized_keys is defined
- name: Create system user groups
ansible.builtin.group:
name: "{{ item.key }}"
gid: "{{ item.value.gid }}"
state: present
loop: "{{ users | dict2items }}"
loop_control:
label: "{{ item.key }}"
when: users is defined
- name: Create system users
ansible.builtin.user:
name: "{{ item.key }}"
state: present
uid: "{{ item.value.uid }}"
group: "{{ item.value.gid }}"
shell: "{{ item.value.shell | default('/bin/bash') }}"
create_home: "{{ item.value.home | default(false) }}"
system: "{{ item.value.system | default(false) }}"
loop: "{{ users | dict2items }}"
loop_control:
label: "{{ item.key }}"
when: users is defined
- name: Set authorized_keys for system users
ansible.posix.authorized_key:
user: "{{ item.key }}"
key: "{{ item.value.key }}"
state: present
loop: "{{ users | dict2items }}"
loop_control:
label: "{{ item.key }}"
when: users is defined and item.value.key is defined
- name: Manage filesystem mounts
mount:
ansible.posix.mount:
path: "{{ item.path }}"
src: "UUID={{ item.uuid }}"
fstype: "{{ item.fstype }}"

View File

@ -1,36 +1,61 @@
- name: Install WireGuard
apt:
ansible.builtin.apt:
name: wireguard
state: present
update_cache: true
- name: Generate WireGuard keys
shell: wg genkey | tee privatekey | wg pubkey > publickey
ansible.builtin.shell: |
set -o pipefail
wg genkey | tee privatekey | wg pubkey > publickey
args:
chdir: /etc/wireguard/
creates: /etc/wireguard/privatekey
executable: /usr/bin/bash
- name: Grab WireGuard private key for configuration
slurp:
ansible.builtin.slurp:
src: /etc/wireguard/privatekey
register: wgkey
- name: Check if WireGuard preshared key file exists
ansible.builtin.stat:
path: /etc/wireguard/presharedkey-{{ item.name }}
loop: "{{ wireguard.peers }}"
loop_control:
label: "{{ item.name }}"
register: presharedkey_files
- name: Grab WireGuard preshared key for configuration
ansible.builtin.slurp:
src: /etc/wireguard/presharedkey-{{ item.item.name }}
register: wgshared
loop: "{{ presharedkey_files.results }}"
loop_control:
label: "{{ item.item.name }}"
when: item.stat.exists
- name: Grab WireGuard private key for configuration
ansible.builtin.slurp:
src: /etc/wireguard/privatekey
register: wgkey
- name: Install WireGuard configuration
template:
ansible.builtin.template:
src: wireguard.j2
dest: /etc/wireguard/wg0.conf
notify:
- restart_wireguard
mode: "400"
notify: restart_wireguard
- name: Start WireGuard interface
service:
ansible.builtin.service:
name: wg-quick@wg0
state: started
enabled: true
- name: Add WireGuard firewall rule
ufw:
community.general.ufw:
rule: allow
port: "{{ wireguard.listenport }}"
proto: tcp
proto: udp
when: wireguard.listenport is defined

View File

@ -0,0 +1,28 @@
[global]
workgroup = WORKGROUP
server string = Samba Server %v
netbios name = {{ ansible_hostname }}
security = user
map to guest = bad user
dns proxy = no
{% for user in samba.users %}
smb encrypt = {{ 'mandatory' if user.encrypt | default(false) else 'disabled' }}
{% endfor %}
{% for share in samba.shares %}
[{{ share.name }}]
path = {{ share.path }}
browsable = yes
{% if share.guest_allow is defined and share.guest_allow %}
guest ok = yes
{% else %}
guest ok = no
{% endif %}
read only = {{ 'yes' if share.read_only | default(false) else 'no' }}
{% if share.valid_users is defined %}
valid users = {{ share.valid_users }}
{% endif %}
{% if share.force_user is defined %}
force user = {{ share.force_user }}
{% endif %}
{% endfor %}

View File

@ -1,4 +1,6 @@
[Interface]
# {{ ansible_managed }}
[Interface] # {{ ansible_hostname }}
PrivateKey = {{ wgkey['content'] | b64decode | trim }}
Address = {{ wireguard.address }}
{% if wireguard.listenport is defined %}
@ -6,8 +8,26 @@ ListenPort = {{ wireguard.listenport }}
{% endif %}
{% for peer in wireguard.peers %}
{% if peer.name is defined %}
[Peer] # {{ peer.name }}
{% else %}
[Peer]
{% endif %}
PublicKey = {{ peer.publickey }}
{% if peer.presharedkey is defined %}
PresharedKey = {{ peer.presharedkey }}
{% else %}
{% set preshared_key = (
wgshared.results
| selectattr('item.item.name', 'equalto', peer.name)
| first
).content
| default(none)
%}
{% if preshared_key is not none %}
PresharedKey = {{ preshared_key | b64decode | trim }}
{% endif %}
{% endif %}
{% if peer.endpoint is defined %}
Endpoint = {{ peer.endpoint }}
{% endif %}

View File

@ -1,16 +1,28 @@
- name: Stop Bitwarden for rebuild
service:
ansible.builtin.service:
name: "{{ bitwarden_name }}"
state: stopped
listen: rebuild_bitwarden
- name: Rebuild Bitwarden
shell: "{{ bitwarden_root }}/bitwarden.sh rebuild"
ansible.builtin.command: "{{ bitwarden_root }}/bitwarden.sh rebuild"
listen: rebuild_bitwarden
- name: Reload systemd manager configuration
ansible.builtin.systemd:
daemon_reload: true
listen: rebuild_bitwarden
- name: Start Bitwarden after rebuild
service:
ansible.builtin.service:
name: "{{ bitwarden_name }}"
state: started
enabled: true
listen: rebuild_bitwarden
- name: Create Bitwarden's initial log file
ansible.builtin.file:
path: "{{ bitwarden_logs_identity }}/{{ bitwarden_logs_identity_date }}.txt"
state: touch
mode: "644"
listen: touch_bitwarden

View File

@ -1,56 +1,58 @@
- name: Install expect
apt:
ansible.builtin.apt:
name: expect
state: present
- name: Create Bitwarden directory
file:
ansible.builtin.file:
path: "{{ bitwarden_root }}"
state: directory
mode: "755"
- name: Download Bitwarden script
get_url:
ansible.builtin.get_url:
url: "https://raw.githubusercontent.com/\
bitwarden/self-host/master/bitwarden.sh"
dest: "{{ bitwarden_root }}"
mode: u+x
- name: Install Bitwarden script wrapper
template:
ansible.builtin.template:
src: bw_wrapper.j2
dest: "{{ bitwarden_root }}/bw_wrapper"
mode: u+x
- name: Run Bitwarden installation script
shell: "{{ bitwarden_root }}/bw_wrapper"
ansible.builtin.command: "{{ bitwarden_root }}/bw_wrapper"
args:
creates: "{{ bitwarden_root }}/bwdata/config.yml"
- name: Install docker-compose override
template:
- name: Install compose override
ansible.builtin.template:
src: compose.override.yml.j2
dest: "{{ bitwarden_root }}/bwdata/docker/docker-compose.override.yml"
when: traefik_version is defined
mode: "644"
when: bitwarden_override | default(true)
notify: rebuild_bitwarden
- name: Disable bitwarden-nginx HTTP on 80
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^http_port: 80$"
replace: "http_port: 127.0.0.1:8080"
replace: "http_port: {{ bitwarden_http_port | default('127.0.0.1:9080') }}"
when: not bitwarden_standalone
notify: rebuild_bitwarden
- name: Disable bitwarden-nginx HTTPS on 443
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^https_port: 443$"
replace: "https_port: 127.0.0.1:8443"
replace: "https_port: {{ bitwarden_https_port | default('127.0.0.1:9443') }}"
when: not bitwarden_standalone
notify: rebuild_bitwarden
- name: Disable Bitwarden managed Lets Encrypt
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^ssl_managed_lets_encrypt: true$"
replace: "ssl_managed_lets_encrypt: false"
@ -58,7 +60,7 @@
notify: rebuild_bitwarden
- name: Disable Bitwarden managed SSL
replace:
ansible.builtin.replace:
path: "{{ bitwarden_root }}/bwdata/config.yml"
regexp: "^ssl: true$"
replace: "ssl: false"
@ -66,39 +68,30 @@
notify: rebuild_bitwarden
- name: Define reverse proxy servers
lineinfile:
ansible.builtin.lineinfile:
path: "{{ bitwarden_root }}/bwdata/config.yml"
line: "- {{ bitwarden_realips }}"
insertafter: "^real_ips"
notify: rebuild_bitwarden
- name: Install Bitwarden systemd service
template:
ansible.builtin.template:
src: bitwarden.service.j2
dest: "/etc/systemd/system/{{ bitwarden_name }}.service"
mode: "644"
register: bitwarden_systemd
notify: rebuild_bitwarden
- name: Create Bitwarden's initial logging directory
file:
ansible.builtin.file:
path: "{{ bitwarden_logs_identity }}"
state: directory
register: bitwarden_logs
- name: Create Bitwarden's initial log file
file:
path: "{{ bitwarden_logs_identity }}/{{ bitwarden_logs_identity_date }}.txt"
state: touch
when: bitwarden_logs.changed
mode: "755"
notify: touch_bitwarden
- name: Install Bitwarden's Fail2ban jail
template:
ansible.builtin.template:
src: fail2ban-jail.conf.j2
dest: /etc/fail2ban/jail.d/bitwarden.conf
mode: "640"
notify: restart_fail2ban
- name: Reload systemd manager configuration
systemd:
daemon_reload: true
when: bitwarden_systemd.changed
notify: rebuild_bitwarden

View File

@ -23,10 +23,13 @@ send "{{ bitwarden_install_id }}\r"
expect "Enter your installation key:"
send "{{ bitwarden_install_key }}\r"
expect "Do you have a SSL certificate to use? (y/n):"
expect "Enter your region (US/EU) \\\[US\\\]:"
send "US\r"
expect "Do you have a SSL certificate to use? (y/N):"
send "n\r"
expect "Do you want to generate a self-signed SSL certificate? (y/n):"
expect "Do you want to generate a self-signed SSL certificate? (y/N):"
{% if bitwarden_standalone and not bitwarden_production %}
send "y\r"
{% else %}

View File

@ -6,13 +6,11 @@ services:
- traefik
labels:
traefik.http.routers.bitwarden.rule: "Host(`{{ bitwarden_domain }}`)"
traefik.http.routers.bitwarden.entrypoints: websecure
traefik.http.routers.bitwarden.tls.certresolver: letsencrypt
traefik.http.routers.bitwarden.middlewares: "securehttps@file"
traefik.http.routers.bitwarden.entrypoints: {{ bitwarden_entrypoint | default('web') }}
traefik.http.routers.bitwarden.tls: {{ bitwarden_traefik_tls | default('false') }}
traefik.http.services.bitwarden.loadbalancer.server.port: 8080
traefik.docker.network: traefik
traefik.enable: "true"
networks:
traefik:
external: true

View File

@ -1,3 +1,11 @@
docker_apt_keyring: /etc/apt/keyrings/docker.asc
docker_apt_keyring_hash: 1500c1f56fa9e26b9b8f42452a553675796ade0807cdce11975eb98170b3a570
docker_apt_keyring_url: https://download.docker.com/linux/debian/gpg
docker_apt_repo: https://download.docker.com/linux/debian
docker_compose_root: /var/lib/compose
docker_compose: /usr/bin/docker-compose
docker_compose_service: compose
docker_compose: "{{ (docker_official | bool) | ternary('/usr/bin/docker compose', '/usr/bin/docker-compose') }}"
docker_official: false
docker_repos_keys: "{{ docker_repos_path }}/.keys"
docker_repos_keytype: rsa
docker_repos_path: /srv/.compose_repos

View File

@ -0,0 +1,54 @@
- name: Reload systemd manager configuration
ansible.builtin.systemd:
daemon_reload: true
listen: compose_systemd
- name: Find which services had a docker-compose.yml updated
ansible.builtin.set_fact:
compose_restart_list: "{{ (compose_restart_list | default([])) + [item.item.name] }}"
loop: "{{ compose_update.results }}"
loop_control:
label: "{{ item.item.name }}"
when: item.changed
listen: compose_restart
- name: Find which services had their .env updated
ansible.builtin.set_fact:
compose_restart_list: "{{ (compose_restart_list | default([])) + [item.item.name] }}"
loop: "{{ compose_env_update.results }}"
loop_control:
label: "{{ item.item.name }}"
when: item.changed
listen: compose_restart
- name: Restart MariaDB
ansible.builtin.service:
name: mariadb
state: restarted
when: not mariadb_restarted
listen: restart_mariadb # hijack handler for early restart
- name: Set MariaDB as restarted
ansible.builtin.set_fact:
mariadb_restarted: true
when: not mariadb_restarted
listen: restart_mariadb
- name: Restart compose services
ansible.builtin.systemd:
state: restarted
name: "{{ docker_compose_service }}@{{ item }}"
loop: "{{ compose_restart_list | default([]) | unique }}"
when: compose_restart_list is defined
listen: compose_restart
- name: Start compose services and enable on boot
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ item.name }}"
state: started
enabled: true
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ docker_compose_service }}@{{ item.name }}"
when: item.enabled is defined and item.enabled is true
listen: compose_enable

View File

@ -1,27 +1,142 @@
- name: Install Docker
apt:
name: ['docker.io', 'docker-compose']
state: present
- name: Add official Docker APT key
ansible.builtin.get_url:
url: "{{ docker_apt_keyring_url }}"
dest: "{{ docker_apt_keyring }}"
checksum: "sha256:{{ docker_apt_keyring_hash }}"
mode: "644"
owner: root
group: root
when: docker_official
- name: Remove official Docker APT key
ansible.builtin.file:
path: "{{ docker_apt_keyring }}"
state: absent
when: not docker_official
- name: Add/remove official Docker APT repository
ansible.builtin.apt_repository:
repo: >
deb [arch=amd64 signed-by={{ docker_apt_keyring }}]
{{ docker_apt_repo }} {{ ansible_distribution_release }} stable
state: "{{ 'present' if docker_official else 'absent' }}"
filename: "{{ docker_apt_keyring | regex_replace('^.*/', '') }}"
- name: Install/uninstall Docker from Debian repositories
ansible.builtin.apt:
name: ['docker.io', 'docker-compose', 'containerd', 'runc']
state: "{{ 'absent' if docker_official else 'present' }}"
autoremove: true
update_cache: true
- name: Install/uninstall Docker from Docker repositories
ansible.builtin.apt:
name: ['docker-ce', 'docker-ce-cli', 'containerd.io',
'docker-buildx-plugin', 'docker-compose-plugin']
state: "{{ 'present' if docker_official else 'absent' }}"
autoremove: true
update_cache: true
- name: Login to private registry
community.docker.docker_login:
registry_url: "{{ docker_login_url | default('') }}"
username: "{{ docker_login_user }}"
password: "{{ docker_login_pass }}"
when: docker_login_user is defined and docker_login_pass is defined
- name: Create docker-compose root
file:
ansible.builtin.file:
path: "{{ docker_compose_root }}"
state: directory
mode: "500"
- name: Install docker-compose systemd service
template:
ansible.builtin.template:
src: docker-compose.service.j2
dest: "/etc/systemd/system/{{ docker_compose_service }}@.service"
register: compose_systemd
mode: "400"
notify: compose_systemd
- name: Reload systemd manager configuration
systemd:
daemon_reload: true
when: compose_systemd.changed
- name: Create directories to clone docker-compose repositories
ansible.builtin.file:
path: "{{ item }}"
state: directory
mode: "400"
loop:
- "{{ docker_repos_path }}"
- "{{ docker_repos_keys }}"
when: docker_compose_deploy is defined
- name: Generate OpenSSH deploy keys for docker-compose clones
community.crypto.openssh_keypair:
path: "{{ docker_repos_keys }}/id_{{ docker_repos_keytype }}"
type: "{{ docker_repos_keytype }}"
comment: "{{ ansible_hostname }}-deploy-key"
mode: "400"
state: present
when: docker_compose_deploy is defined
- name: Check for git installation
ansible.builtin.apt:
name: git
state: present
when: docker_compose_deploy is defined
- name: Clone external docker-compose projects
ansible.builtin.git:
repo: "{{ item.url }}"
dest: "{{ docker_repos_path }}/{{ item.name }}"
version: "{{ item.version }}"
accept_newhostkey: "{{ item.accept_newhostkey | default(false) }}"
gpg_whitelist: "{{ item.trusted_keys | default([]) }}"
verify_commit: "{{ true if (item.trusted_keys is defined and item.trusted_keys) else false }}"
key_file: "{{ docker_repos_keys }}/id_{{ docker_repos_keytype }}"
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ item.url }}"
when: docker_compose_deploy is defined
- name: Create directories for docker-compose projects using the systemd service
ansible.builtin.file:
path: "{{ docker_compose_root }}/{{ item.name }}"
state: directory
mode: "400"
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ item.name }}"
when: docker_compose_deploy is defined
- name: Synchronize docker-compose.yml
ansible.posix.synchronize:
src: "{{ docker_repos_path }}/{{ item.name }}/{{ item.path | default('docker-compose.yml') }}"
dest: "{{ docker_compose_root }}/{{ item.name }}/docker-compose.yml"
delegate_to: "{{ inventory_hostname }}"
register: compose_update
notify:
- compose_restart
- compose_enable
loop: "{{ docker_compose_deploy | default([]) }}"
loop_control:
label: "{{ item.name }}"
when: docker_compose_deploy is defined and docker_compose_deploy | length > 0
- name: Set environment variables for docker-compose projects
ansible.builtin.template:
src: docker-compose-env.j2
dest: "{{ docker_compose_root }}/{{ item.name }}/.env"
mode: "400"
register: compose_env_update
notify:
- compose_restart
- compose_enable
no_log: "{{ docker_compose_env_nolog | default(true) }}"
loop: "{{ docker_compose_deploy }}"
loop_control:
label: "{{ item.name }}"
when: docker_compose_deploy is defined and item.env is defined
- name: Add users to docker group
user:
ansible.builtin.user:
name: "{{ item }}"
groups: docker
append: true
@ -29,7 +144,8 @@
when: docker_users is defined
- name: Start Docker and enable on boot
service:
ansible.builtin.service:
name: docker
state: started
enabled: true
when: docker_managed | default(true)

View File

@ -0,0 +1,10 @@
# {{ ansible_managed }}
{% if item.env is defined %}
{% for key, value in item.env.items() %}
{% if value is boolean %}
{{ key }}={{ value | lower }}
{% else %}
{{ key }}={{ value }}
{% endif %}
{% endfor %}
{% endif %}

View File

@ -1,5 +1,5 @@
[Unit]
Description=%i docker-compose service
Description=%i {{ docker_compose_service }} service
PartOf=docker.service
After=docker.service

View File

@ -1,5 +1,5 @@
- name: Restart Gitea
service:
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ gitea_name }}"
state: restarted
listen: restart_gitea

View File

@ -1,111 +1,78 @@
- name: Create Gitea directory
file:
path: "{{ gitea_root }}"
state: directory
- name: Install MySQL module for Ansible
ansible.builtin.apt:
name: python3-pymysql
state: present
- name: Create Gitea database
mysql_db:
name: "{{ gitea_dbname }}"
community.mysql.mysql_db:
name: "{{ gitea.DB_NAME }}"
state: present
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Create Gitea database user
mysql_user:
name: "{{ gitea_dbuser }}"
password: "{{ gitea_dbpass }}"
community.mysql.mysql_user:
name: "{{ gitea.DB_USER }}"
password: "{{ gitea.DB_PASSWD }}"
host: '%'
state: present
priv: "{{ gitea_dbname }}.*:ALL"
priv: "{{ gitea.DB_NAME }}.*:ALL"
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Create git user
user:
name: git
state: present
- name: Git user uid
getent:
database: passwd
key: git
- name: Git user gid
getent:
database: group
key: git
- name: Create git's .ssh directory
file:
ansible.builtin.file:
path: /home/git/.ssh
mode: "700"
state: directory
- name: Generate git's SSH keys
openssh_keypair:
community.crypto.openssh_keypair:
path: /home/git/.ssh/id_rsa
- name: Find git's public SSH key
slurp:
ansible.builtin.slurp:
src: /home/git/.ssh/id_rsa.pub
register: git_rsapub
- name: Get stats on git's authorized_keys file
stat:
ansible.builtin.stat:
path: /home/git/.ssh/authorized_keys
register: git_authkeys
- name: Create git's authorized_keys file
file:
ansible.builtin.file:
path: /home/git/.ssh/authorized_keys
mode: "600"
state: touch
when: not git_authkeys.stat.exists
- name: Add git's public SSH key to authorized_keys
lineinfile:
ansible.builtin.lineinfile:
path: /home/git/.ssh/authorized_keys
regex: "^ssh-rsa"
line: "{{ git_rsapub['content'] | b64decode }}"
- name: Create Gitea host script for SSH
template:
ansible.builtin.template:
src: gitea.sh.j2
dest: /usr/local/bin/gitea
mode: 0755
- name: Install Gitea's docker-compose file
template:
src: docker-compose.yml.j2
dest: "{{ gitea_root }}/docker-compose.yml"
notify: restart_gitea
- name: Install Gitea's docker-compose variables
template:
src: compose-env.j2
dest: "{{ gitea_root }}/.env"
notify: restart_gitea
mode: "755"
- name: Create Gitea's logging directory
file:
ansible.builtin.file:
name: /var/log/gitea
state: directory
- name: Create Gitea's initial log file
file:
name: /var/log/gitea/gitea.log
state: touch
mode: "755"
- name: Install Gitea's Fail2ban filter
template:
ansible.builtin.template:
src: fail2ban-filter.conf.j2
dest: /etc/fail2ban/filter.d/gitea.conf
mode: "644"
notify: restart_fail2ban
- name: Install Gitea's Fail2ban jail
template:
ansible.builtin.template:
src: fail2ban-jail.conf.j2
dest: /etc/fail2ban/jail.d/gitea.conf
mode: "640"
notify: restart_fail2ban
- name: Start and enable Gitea service
service:
name: "{{ docker_compose_service }}@{{ gitea_name }}"
state: started
enabled: true
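Note that the role now reads database settings from a single gitea dictionary instead of the old gitea_dbname/gitea_dbuser/gitea_dbpass variables. A hypothetical host_vars sketch covering the keys used above:

gitea:
  DB_NAME: gitea
  DB_USER: gitea
  DB_PASSWD: change-me   # placeholder; a vaulted value would normally go here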

View File

@ -0,0 +1,4 @@
jellyfin_name: jellyfin
jellyfin_router: "{{ jellyfin_name }}"
jellyfin_rooturl: "https://{{ jellyfin_domain }}"
jellyfin_root: "{{ docker_compose_root }}/{{ jellyfin_name }}"

View File

@ -0,0 +1,5 @@
- name: Restart Jellyfin
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ jellyfin_name }}"
state: restarted
listen: restart_jellyfin

View File

@ -0,0 +1,35 @@
- name: Create Jellyfin directory
ansible.builtin.file:
path: "{{ jellyfin_root }}"
state: directory
mode: 0500
- name: Get user jellyfin uid
ansible.builtin.getent:
database: passwd
key: jellyfin
- name: Get user jellyfin gid
ansible.builtin.getent:
database: group
key: jellyfin
- name: Install Jellyfin's docker-compose file
ansible.builtin.template:
src: docker-compose.yml.j2
dest: "{{ jellyfin_root }}/docker-compose.yml"
mode: 0400
notify: restart_jellyfin
- name: Install Jellyfin's docker-compose variables
ansible.builtin.template:
src: compose-env.j2
dest: "{{ jellyfin_root }}/.env"
mode: 0400
notify: restart_jellyfin
- name: Start and enable Jellyfin service
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ jellyfin_name }}"
state: started
enabled: true

View File

@ -0,0 +1,5 @@
# {{ ansible_managed }}
jellyfin_version={{ jellyfin_version }}
jellyfin_name={{ jellyfin_name }}
jellyfin_domain={{ jellyfin_domain }}
jellyfin_rooturl={{ jellyfin_rooturl }}

View File

@ -0,0 +1,30 @@
version: '3.7'
volumes:
config:
cache:
networks:
traefik:
external: true
services:
jellyfin:
image: "jellyfin/jellyfin:${jellyfin_version}"
container_name: "${jellyfin_name}"
networks:
- traefik
labels:
- "traefik.http.routers.{{ jellyfin_router }}.rule=Host(`{{ jellyfin_domain }}`)"
{% if traefik_http_only %}
- "traefik.http.routers.{{ jellyfin_router }}.entrypoints=web"
{% else %}
- "traefik.http.routers.{{ jellyfin_router }}.entrypoints=websecure"
{% endif %}
- "traefik.http.services.{{ jellyfin_router }}.loadbalancer.server.port=8096"
- "traefik.docker.network=traefik"
- "traefik.enable=true"
volumes:
- config:/config
- cache:/cache
- {{ jellyfin_media }}:/media
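The role defaults only define jellyfin_name, jellyfin_router, jellyfin_rooturl, and jellyfin_root; the remaining variables referenced by these templates would be supplied per host. Hypothetical values only:

jellyfin_version: "10.8.13"          # image tag; placeholder
jellyfin_domain: jellyfin.example.com
jellyfin_media: /srv/media           # host path mounted into the container at /media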

View File

@ -1,5 +1,5 @@
- name: Create Jenkins user
user:
ansible.builtin.user:
name: "{{ jenkins_user }}"
state: present
shell: /bin/bash
@ -7,25 +7,25 @@
generate_ssh_key: true
- name: Set Jenkins authorized key
authorized_key:
ansible.posix.authorized_key:
user: jenkins
state: present
exclusive: true
key: "{{ jenkins_sshkey }}"
- name: Give Jenkins user passwordless sudo
template:
ansible.builtin.template:
src: jenkins_sudoers.j2
dest: /etc/sudoers.d/{{ jenkins_user }}
validate: "visudo -cf %s"
mode: 0440
- name: Install Ansible
apt:
ansible.builtin.apt:
name: ansible
state: present
- name: Install Java
apt:
ansible.builtin.apt:
name: default-jre
state: present

View File

@ -1,5 +1,5 @@
- import_tasks: agent.yml
- ansible.builtin.import_tasks: agent.yml
when: jenkins_sshkey is defined
- import_tasks: server.yml
- ansible.builtin.import_tasks: server.yml
when: jenkins_domain is defined

View File

@ -1,12 +1,12 @@
- name: Create Jenkins' directory
file:
ansible.builtin.file:
path: "{{ jenkins_root }}"
state: directory
owner: "1000"
group: "1000"
- name: Start Jenkins Container
docker_container:
community.general.docker_container:
name: "{{ jenkins_name }}"
image: jenkins/jenkins:{{ jenkins_version }}
state: started

View File

@ -1,16 +0,0 @@
# container settings
kutt_name: kutt
kutt_default_domain: "{{ kutt_domain }}"
kutt_webport: 3030
kutt_web: "127.0.0.1:{{ kutt_webport }}"
# database settings
kutt_dbname: "{{ kutt_name }}"
kutt_dbuser: "{{ kutt_name }}"
kutt_postgres_volume: postgres_data
# redis
kutt_redis_volume: redis_data
# host
kutt_root: "{{ docker_compose_root }}/{{ kutt_name }}"

View File

@ -1,5 +0,0 @@
- name: Restart Kutt
service:
name: "{{ docker_compose_service }}@{{ kutt_name }}"
state: restarted
listen: restart_kutt

View File

@ -1,22 +0,0 @@
- name: Create Kutt directory
file:
path: "{{ kutt_root }}"
state: directory
- name: Install Kutt's docker-compose file
template:
src: docker-compose.yml.j2
dest: "{{ kutt_root }}/docker-compose.yml"
notify: restart_kutt
- name: Install Kutt's docker-compose variables
template:
src: compose-env.j2
dest: "{{ kutt_root }}/.env"
notify: restart_kutt
- name: Start and enable Gitea service
service:
name: "{{ docker_compose_service }}@{{ kutt_name }}"
state: started
enabled: true

View File

@ -1,17 +0,0 @@
# {{ ansible_managed }}
kutt_version={{ kutt_version }}
kutt_web={{ kutt_web }}
kutt_domain={{ kutt_domain }}
kutt_default_domain={{ kutt_default_domain }}
kutt_jwt_secret={{ kutt_jwt_secret }}
kutt_dbname={{ kutt_dbname }}
kutt_dbuser={{ kutt_dbuser }}
kutt_dbpass={{ kutt_dbpass }}
kutt_mail_user={{ kutt_mail_user }}
kutt_mail_host={{ kutt_mail_host }}
kutt_mail_password={{ kutt_mail_password }}
kutt_report_email={{ kutt_report_email }}
kutt_admin_emails={{ kutt_admin_emails }}
kutt_redis_version={{ kutt_redis_version }}
kutt_postgres_version={{ kutt_postgres_version }}
kutt_postgres_volume={{ kutt_postgres_volume }}

View File

@ -1,46 +0,0 @@
version: "3.7"
services:
kutt:
image: kutt/kutt:${kutt_version}
depends_on:
- postgres
- redis
command: ["./wait-for-it.sh", "postgres:5432", "--", "npm", "start"]
ports:
- ${kutt_web}:3000
environment:
SITE_NAME: ${kutt_domain}
DEFAULT_DOMAIN: ${kutt_default_domain}
JWT_SECRET: ${kutt_jwt_secret}
DB_HOST: postgres
DB_NAME: ${kutt_dbname}
DB_USER: ${kutt_dbuser}
DB_PASSWORD: ${kutt_dbpass}
REDIS_HOST: redis
MAIL_USER: ${kutt_mail_user}
MAIL_HOST: ${kutt_mail_host}
MAIL_PORT: ${kutt_mail_port}
MAIL_PASSWORD: ${kutt_mail_password}
REPORT_EMAIL: ${kutt_report_email}
ADMIN_EMAILS: ${kutt_admin_emails}
redis:
image: redis:${kutt_redis_version}
volumes:
- {{ kutt_redis_volume }}:/data
postgres:
image: postgres:${kutt_postgres_version}
environment:
POSTGRES_USER: ${kutt_dbuser}
POSTGRES_PASSWORD: ${kutt_dbpass}
POSTGRES_DB: ${kutt_dbname}
volumes:
- {{ kutt_postgres_volume }}:/var/lib/postgresql/data
volumes:
{{ kutt_redis_volume }}:
{{ kutt_postgres_volume }}:

View File

@ -1,15 +1,15 @@
- name: Install QEMU/KVM
apt:
ansible.builtin.apt:
name: qemu-kvm
state: present
- name: Install Libvirt
apt:
ansible.builtin.apt:
name: ["libvirt-clients", "libvirt-daemon-system"]
state: present
- name: Add users to libvirt group
user:
ansible.builtin.user:
name: "{{ item }}"
groups: libvirt
append: yes
@ -17,12 +17,12 @@
when: libvirt_users is defined
- name: Check for NODOWNLOAD file
stat:
ansible.builtin.stat:
path: /var/lib/libvirt/images/NODOWNLOAD
register: NODOWNLOAD
- name: Download GNU/Linux ISOs
get_url:
ansible.builtin.get_url:
url: "{{ item.url }}"
dest: /var/lib/libvirt/images
checksum: "{{ item.hash }}"
@ -34,7 +34,7 @@
# Prevent downloaded ISOs from being rehashed every run
- name: Create NODOWNLOAD file
file:
ansible.builtin.file:
path: /var/lib/libvirt/images/NODOWNLOAD
state: touch
when: download_isos.changed

View File

@ -1,3 +0,0 @@
mariadb_trust:
- "172.16.0.0/12"
- "192.168.0.0/16"

View File

@ -0,0 +1,12 @@
- name: Restart MariaDB
ansible.builtin.service:
name: mariadb
state: restarted
when: not mariadb_restarted
listen: restart_mariadb
- name: Set MariaDB as restarted
ansible.builtin.set_fact:
mariadb_restarted: true
when: not mariadb_restarted
listen: restart_mariadb

View File

@ -1,25 +1,30 @@
- name: Install MariaDB
apt:
ansible.builtin.apt:
name: mariadb-server
state: present
- name: Change the bind-address to allow Docker
lineinfile:
- name: Set MariaDB restarted fact
ansible.builtin.set_fact:
mariadb_restarted: false
- name: Regather facts for the potentially new docker0 interface
ansible.builtin.setup:
- name: Change the bind-address to allow from docker0
ansible.builtin.lineinfile:
path: /etc/mysql/mariadb.conf.d/50-server.cnf
regex: "^bind-address"
line: "bind-address = 0.0.0.0"
register: mariadb_conf
line: "bind-address = {{ ansible_facts.docker0.ipv4.address }}"
notify: restart_mariadb
- name: Restart MariaDB
service:
name: mariadb
state: restarted
when: mariadb_conf.changed
- name: Flush handlers to ensure MariaDB restarts immediately
ansible.builtin.meta: flush_handlers
tags: restart_mariadb
- name: Allow database connections
ufw:
- name: Allow database connections from Docker
community.general.ufw:
rule: allow
port: "3306"
proto: tcp
src: "{{ item }}"
loop: "{{ mariadb_trust }}"
loop: "{{ mariadb_trust | default(['172.16.0.0/12']) }}"

View File

@ -1,28 +1,28 @@
- name: Install GPG
apt:
ansible.builtin.apt:
name: gpg
state: present
- name: Add AdoptOpenJDK's signing key
apt_key:
ansible.builtin.apt_key:
id: 8ED17AF5D7E675EB3EE3BCE98AC3B29174885C03
url: https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public
- name: Install AdoptOpenJDK repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/ buster main
mode: 0644
state: present
- name: Install Java
apt:
ansible.builtin.apt:
name: "adoptopenjdk-{{ item.java.version }}-hotspot"
state: present
when: item.java.version is defined
loop: "{{ minecraft }}"
- name: "Install default Java, version {{ minecraft_java }}"
apt:
ansible.builtin.apt:
name: "{{ minecraft_java_pkg }}"
state: present
when: item.java.version is not defined
@ -30,7 +30,7 @@
register: minecraft_java_default
- name: "Activate default Java, version {{ minecraft_java }}"
alternatives:
community.general.alternatives:
name: java
path: "/usr/lib/jvm/{{ minecraft_java_pkg }}-amd64/bin/java"
when: minecraft_java_default.changed

View File

@ -1,14 +1,14 @@
- import_tasks: system.yml
- ansible.builtin.import_tasks: system.yml
when: minecraft_eula
- import_tasks: java.yml
- ansible.builtin.import_tasks: java.yml
when: minecraft_eula
- import_tasks: vanilla.yml
- ansible.builtin.import_tasks: vanilla.yml
when: minecraft_eula
- import_tasks: modpacks.yml
- ansible.builtin.import_tasks: modpacks.yml
when: minecraft_eula
- import_tasks: service.yml
- ansible.builtin.import_tasks: service.yml
when: minecraft_eula

View File

@ -1,5 +1,5 @@
- name: Download Minecraft modpack installer
get_url:
ansible.builtin.get_url:
url: "{{ minecraft_modpack_url }}"
dest: "{{ minecraft_home }}/{{ item.name }}/serverinstall_{{ item.modpack | replace ('/', '_') }}"
owner: "{{ minecraft_user }}"
@ -9,7 +9,7 @@
when: item.modpack is defined and item.sha1 is not defined
- name: Run Minecraft modpack installer
command: "sudo -u {{ minecraft_user }} ./serverinstall_{{ item.modpack | replace ('/', '_') }} --auto"
ansible.builtin.command: "sudo -u {{ minecraft_user }} ./serverinstall_{{ item.modpack | replace ('/', '_') }} --auto"
args:
creates: "{{ minecraft_home }}/{{ item.name }}/mods"
chdir: "{{ minecraft_home }}/{{ item.name }}"
@ -17,7 +17,7 @@
when: item.modpack is defined and item.sha1 is not defined
- name: Find Minecraft Forge
find:
ansible.builtin.find:
paths: "{{ minecraft_home }}/{{ item.name }}"
patterns: "forge*.jar"
register: minecraft_forge
@ -25,7 +25,7 @@
when: item.modpack is defined and item.sha1 is not defined
- name: Link to Minecraft Forge
file:
ansible.builtin.file:
src: "{{ item.files[0].path }}"
dest: "{{ minecraft_home }}/{{ item.item.name }}/minecraft_server.jar"
owner: "{{ minecraft_user }}"

View File

@ -1,11 +1,11 @@
- name: Deploy Minecraft systemd service
template:
ansible.builtin.template:
src: minecraft.service.j2
dest: "/etc/systemd/system/minecraft@.service"
register: minecraft_systemd
- name: Deploy service environment variables
template:
ansible.builtin.template:
src: environment.conf.j2
dest: "{{ minecraft_home }}/{{ item.name }}/environment.conf"
owner: "{{ minecraft_user }}"
@ -13,25 +13,25 @@
loop: "{{ minecraft }}"
- name: Reload systemd manager configuration
systemd:
ansible.builtin.systemd:
daemon_reload: true
when: minecraft_systemd.changed
- name: Disable non-default service instances
service:
ansible.builtin.service:
name: "minecraft@{{ item.name }}"
enabled: false
loop: "{{ minecraft }}"
when: item.name != minecraft_onboot
- name: Enable default service instance
service:
ansible.builtin.service:
name: "minecraft@{{ minecraft_onboot }}"
enabled: true
when: minecraft_eula and minecraft_onboot is defined
- name: Run default service instance
service:
ansible.builtin.service:
name: "minecraft@{{ minecraft_onboot }}"
state: started
when: minecraft_eula and minecraft_onboot is defined and minecraft_onboot_run

View File

@ -1,16 +1,16 @@
- name: Install Screen
apt:
ansible.builtin.apt:
name: screen
state: present
- name: Create Minecraft user
user:
ansible.builtin.user:
name: "{{ minecraft_user }}"
state: present
shell: /bin/bash
- name: Create Minecraft directory
file:
ansible.builtin.file:
path: "{{ minecraft_home }}/{{ item.name }}"
state: directory
owner: "{{ minecraft_user }}"
@ -18,7 +18,7 @@
loop: "{{ minecraft }}"
- name: Answer to Mojang's EULA
template:
ansible.builtin.template:
src: eula.txt.j2
dest: "{{ minecraft_home }}/{{ item.name }}/eula.txt"
owner: "{{ minecraft_user }}"

View File

@ -1,5 +1,5 @@
- name: Download Minecraft
get_url:
ansible.builtin.get_url:
url: "{{ minecraft_url }}"
dest: "{{ minecraft_home }}/{{ item.name }}/minecraft_server.jar"
checksum: "sha1:{{ item.sha1 }}"

View File

@ -1,11 +1 @@
# container names
nextcloud_container: nextcloud
nextcloud_dbcontainer: "{{ nextcloud_container }}-db"
# database settings
nextcloud_dbname: "{{ nextcloud_container }}"
nextcloud_dbuser: "{{ nextcloud_dbname }}"
# host mounts
nextcloud_root: "/opt/{{ nextcloud_container }}/public_html"
nextcloud_dbroot: "/opt/{{ nextcloud_container }}/database"
nextcloud_name: nextcloud

View File

@ -0,0 +1,25 @@
- name: Set Nextcloud's Trusted Proxy
ansible.builtin.command: >
docker exec --user www-data "{{ nextcloud_name }}"
php occ config:system:set trusted_proxies 0 --value="{{ traefik_name }}"
register: nextcloud_trusted_proxy
changed_when: "nextcloud_trusted_proxy.stdout == 'System config value trusted_proxies => 0 set to string ' ~ traefik_name"
listen: install_nextcloud
- name: Set Nextcloud's Trusted Domain
ansible.builtin.command: >
docker exec --user www-data "{{ nextcloud_name }}"
php occ config:system:set trusted_domains 0 --value="{{ nextcloud.DOMAIN }}"
register: nextcloud_trusted_domains
changed_when: "nextcloud_trusted_domains.stdout == 'System config value trusted_domains => 0 set to string ' ~ nextcloud.DOMAIN"
listen: install_nextcloud
- name: Perform Nextcloud database maintenance
ansible.builtin.command: >
docker exec --user www-data "{{ nextcloud_name }}" {{ item }}
loop:
- "php occ maintenance:mode --on"
- "php occ db:add-missing-indices"
- "php occ db:convert-filecache-bigint"
- "php occ maintenance:mode --off"
listen: install_nextcloud

View File

@ -1,109 +1,66 @@
- name: Create Nextcloud network
docker_network:
name: "{{ nextcloud_container }}"
- name: Install MySQL module for Ansible
ansible.builtin.apt:
name: python3-pymysql
state: present
- name: Start Nextcloud's database container
docker_container:
name: "{{ nextcloud_dbcontainer }}"
image: mariadb:{{ nextcloud_dbversion }}
- name: Create Nextcloud database
community.mysql.mysql_db:
name: "{{ nextcloud.DB_NAME | default('nextcloud') }}"
state: present
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Create Nextcloud database user
community.mysql.mysql_user:
name: "{{ nextcloud.DB_USER | default('nextcloud') }}"
password: "{{ nextcloud.DB_PASSWD }}"
host: '%'
state: present
priv: "{{ nextcloud.DB_NAME | default('nextcloud') }}.*:ALL"
login_unix_socket: /var/run/mysqld/mysqld.sock
- name: Start Nextcloud service and enable on boot
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ nextcloud_name }}"
state: started
restart_policy: always
volumes: "{{ nextcloud_dbroot }}:/var/lib/mysql"
networks_cli_compatible: true
networks:
- name: "{{ nextcloud_container }}"
env:
MYSQL_RANDOM_ROOT_PASSWORD: "true"
MYSQL_DATABASE: "{{ nextcloud_dbname }}"
MYSQL_USER: "{{ nextcloud_dbuser }}"
MYSQL_PASSWORD: "{{ nextcloud_dbpass }}"
- name: Start Nextcloud container
docker_container:
name: "{{ nextcloud_container }}"
image: nextcloud:{{ nextcloud_version }}
state: started
restart_policy: always
volumes: "{{ nextcloud_root }}:/var/www/html"
networks_cli_compatible: true
networks:
- name: "{{ nextcloud_container }}"
- name: traefik
labels:
traefik.http.routers.nextcloud.rule: "Host(`{{ nextcloud_domain }}`)"
traefik.http.routers.nextcloud.entrypoints: websecure
traefik.http.routers.nextcloud.tls.certresolver: letsencrypt
traefik.http.routers.nextcloud.middlewares: "securehttps@file,nextcloud-webdav"
traefik.http.middlewares.nextcloud-webdav.redirectregex.regex: "https://(.*)/.well-known/(card|cal)dav"
traefik.http.middlewares.nextcloud-webdav.redirectregex.replacement: "https://${1}/remote.php/dav/"
traefik.http.middlewares.nextcloud-webdav.redirectregex.permanent: "true"
traefik.docker.network: traefik
traefik.enable: "true"
- name: Grab Nextcloud database container information
docker_container_info:
name: "{{ nextcloud_dbcontainer }}"
register: nextcloud_dbinfo
enabled: true
when: nextcloud.ENABLE | default(false)
- name: Grab Nextcloud container information
docker_container_info:
name: "{{ nextcloud_container }}"
community.general.docker_container_info:
name: "{{ nextcloud_name }}"
register: nextcloud_info
- name: Wait for Nextcloud to become available
wait_for:
ansible.builtin.wait_for:
host: "{{ nextcloud_info.container.NetworkSettings.Networks.traefik.IPAddress }}"
delay: 10
port: 80
- name: Check Nextcloud status
command: "docker exec --user www-data {{ nextcloud_container }}
php occ status"
ansible.builtin.command: >
docker exec --user www-data "{{ nextcloud_name }}" php occ status
register: nextcloud_status
args:
removes: "{{ nextcloud_root }}/config/CAN_INSTALL"
- name: Wait for Nextcloud database to become available
wait_for:
host: "{{ nextcloud_dbinfo.container.NetworkSettings.Networks.nextcloud.IPAddress }}"
port: 3306
changed_when: false
- name: Install Nextcloud
command: 'docker exec --user www-data {{ nextcloud_container }}
php occ maintenance:install
--database "mysql"
--database-host "{{ nextcloud_dbcontainer }}"
--database-name "{{ nextcloud_dbname }}"
--database-user "{{ nextcloud_dbuser }}"
--database-pass "{{ nextcloud_dbpass }}"
--admin-user "{{ nextcloud_admin }}"
--admin-pass "{{ nextcloud_pass }}"'
ansible.builtin.command: >
docker exec --user www-data {{ nextcloud_name }}
php occ maintenance:install
--database "mysql"
--database-host "{{ nextcloud.DB_HOST | default('host.docker.internal') }}"
--database-name "{{ nextcloud.DB_NAME | default('nextcloud') }}"
--database-user "{{ nextcloud.DB_USER | default('nextcloud') }}"
--database-pass "{{ nextcloud.DB_PASSWD }}"
--admin-user "{{ nextcloud.ADMIN_USER | default('admin') }}"
--admin-pass "{{ nextcloud.ADMIN_PASSWD }}"
register: nextcloud_install
when:
- nextcloud_status.stdout[:26] == "Nextcloud is not installed"
- nextcloud_domain is defined
when: nextcloud_status.stderr[:26] == "Nextcloud is not installed"
changed_when: nextcloud_install.stdout == "Nextcloud was successfully installed"
notify: install_nextcloud
- name: Set Nextcloud's Trusted Proxy
command: 'docker exec --user www-data {{ nextcloud_container }}
php occ config:system:set trusted_proxies 0
--value="{{ traefik_name }}"'
when: nextcloud_install.changed
- name: Set Nextcloud's Trusted Domain
command: 'docker exec --user www-data {{ nextcloud_container }}
php occ config:system:set trusted_domains 0
--value="{{ nextcloud_domain }}"'
when: nextcloud_install.changed
- name: Perform Nextcloud database maintenance
command: "docker exec --user www-data {{ nextcloud_container }} {{ item }}"
loop:
- "php occ maintenance:mode --on"
- "php occ db:add-missing-indices"
- "php occ db:convert-filecache-bigint"
- "php occ maintenance:mode --off"
when: nextcloud_install.changed
- name: Remove Nextcloud's CAN_INSTALL file
file:
path: "{{ nextcloud_root }}/config/CAN_INSTALL"
state: absent
- name: Install Nextcloud background jobs cron
ansible.builtin.cron:
name: Nextcloud background job
minute: "*/5"
job: "/usr/bin/docker exec -u www-data nextcloud /usr/local/bin/php -f /var/www/html/cron.php"
user: root
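The nextcloud dictionary consumed by these tasks and the install_nextcloud handlers lives in host_vars and is not shown here; a hypothetical sketch covering the keys referenced:

nextcloud:
  ENABLE: true
  DOMAIN: cloud.example.com
  DB_HOST: host.docker.internal   # default used by the occ maintenance:install command
  DB_NAME: nextcloud
  DB_USER: nextcloud
  DB_PASSWD: change-me            # placeholder
  ADMIN_USER: admin
  ADMIN_PASSWD: change-me         # placeholder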

View File

@ -1,15 +1,15 @@
- name: Create nginx root
file:
ansible.builtin.file:
path: "{{ nginx_root }}"
state: directory
- name: Generate deploy keys
openssh_keypair:
community.crypto.openssh_keypair:
path: "{{ nginx_repo_key }}"
state: present
- name: Clone static website files
git:
ansible.builtin.git:
repo: "{{ nginx_repo_url }}"
dest: "{{ nginx_html }}"
version: "{{ nginx_repo_branch }}"
@ -17,7 +17,7 @@
separate_git_dir: "{{ nginx_repo_dest }}"
- name: Start nginx container
docker_container:
community.general.docker_container:
name: "{{ nginx_name }}"
image: nginx:{{ nginx_version }}
state: started

View File

@ -1,10 +1,10 @@
- name: Install PostgreSQL
apt:
ansible.builtin.apt:
name: postgresql
state: present
- name: Trust connections to PostgreSQL
postgresql_pg_hba:
community.general.postgresql_pg_hba:
dest: "{{ postgresql_config }}"
contype: host
databases: all
@ -15,7 +15,7 @@
loop: "{{ postgresql_trust }}"
- name: Change PostgreSQL listen addresses
postgresql_set:
community.general.postgresql_set:
name: listen_addresses
value: "{{ postgresql_listen }}"
become: true
@ -23,19 +23,19 @@
register: postgresql_config
- name: Reload PostgreSQL
service:
ansible.builtin.service:
name: postgresql
state: reloaded
when: postgresql_hba.changed and not postgresql_config.changed
- name: Restart PostgreSQL
service:
ansible.builtin.service:
name: postgresql
state: restarted
when: postgresql_config.changed
- name: Allow database connections
ufw:
community.general.ufw:
rule: allow
port: "5432"
proto: tcp

View File

@ -1,35 +1,35 @@
- name: Install Prometheus node exporter
apt:
ansible.builtin.apt:
name: prometheus-node-exporter
state: present
- name: Run Prometheus node exporter
service:
ansible.builtin.service:
name: prometheus-node-exporter
state: started
- name: Create Prometheus data directory
file:
ansible.builtin.file:
path: "{{ prom_root }}/prometheus"
state: directory
owner: nobody
- name: Create Prometheus config directory
file:
ansible.builtin.file:
path: "{{ prom_root }}/config"
state: directory
- name: Install Prometheus configuration
template:
ansible.builtin.template:
src: prometheus.yml.j2
dest: "{{ prom_root }}/config/prometheus.yml"
- name: Create Prometheus network
docker_network:
community.general.docker_network:
name: "{{ prom_name }}"
- name: Start Prometheus container
docker_container:
community.general.docker_container:
name: "{{ prom_name }}"
image: prom/prometheus:{{ prom_version }}
state: started
@ -51,7 +51,7 @@
traefik.enable: "true"
- name: Start Grafana container
docker_container:
community.general.docker_container:
name: "{{ grafana_name }}"
image: grafana/grafana:{{ grafana_version }}
state: started

View File

@ -0,0 +1 @@
cached_dhparams_pem: /vagrant/scratch/dhparams.pem

View File

@ -1,5 +1,15 @@
- name: Enable nginx sites configuration
ansible.builtin.file:
src: "/etc/nginx/sites-available/{{ item.item.domain }}.conf"
dest: "/etc/nginx/sites-enabled/{{ item.item.domain }}.conf"
state: link
mode: "400"
loop: "{{ nginx_sites.results }}"
when: item.changed
listen: reload_nginx
- name: Reload nginx
service:
ansible.builtin.service:
name: nginx
state: reloaded
listen: reload_nginx

View File

@ -1,47 +1,51 @@
- name: Install nginx
apt:
ansible.builtin.apt:
name: nginx
state: present
update_cache: true
- name: Start nginx and enable on boot
service:
ansible.builtin.service:
name: nginx
state: started
enabled: true
- name: Check for cached dhparams.pem file
ansible.builtin.stat:
path: "{{ cached_dhparams_pem }}"
register: dhparams_file
- name: Copy cached dhparams.pem to /etc/ssl/
ansible.builtin.copy:
src: "{{ cached_dhparams_pem }}"
dest: /etc/ssl/dhparams.pem
mode: "600"
remote_src: true
when: dhparams_file.stat.exists
- name: Generate DH Parameters
openssl_dhparam:
community.crypto.openssl_dhparam:
path: /etc/ssl/dhparams.pem
size: 4096
- name: Install nginx base configuration
template:
ansible.builtin.template:
src: nginx.conf.j2
dest: /etc/nginx/nginx.conf
mode: '0644'
mode: "644"
notify: reload_nginx
- name: Install nginx sites configuration
template:
ansible.builtin.template:
src: server-nginx.conf.j2
dest: "/etc/nginx/sites-available/{{ item.domain }}.conf"
mode: '0644'
mode: "400"
loop: "{{ proxy.servers }}"
notify: reload_nginx
register: nginx_sites
- name: Enable nginx sites configuration
file:
src: "/etc/nginx/sites-available/{{ item.item.domain }}.conf"
dest: "/etc/nginx/sites-enabled/{{ item.item.domain }}.conf"
state: link
loop: "{{ nginx_sites.results }}"
when: item.changed
notify: reload_nginx
- name: Generate self-signed certificate
shell: 'openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes \
ansible.builtin.command: 'openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes \
-subj "/C=US/ST=Local/L=Local/O=Org/OU=IT/CN=example.com" \
-keyout /etc/ssl/private/nginx-selfsigned.key \
-out /etc/ssl/certs/nginx-selfsigned.crt'
@ -51,33 +55,34 @@
notify: reload_nginx
- name: Install LE's certbot
apt:
ansible.builtin.apt:
name: ['certbot', 'python3-certbot-dns-cloudflare']
state: present
when: proxy.production is defined and proxy.production
- name: Install Cloudflare API token
template:
ansible.builtin.template:
src: cloudflare.ini.j2
dest: /root/.cloudflare.ini
mode: '0600'
mode: "400"
when: proxy.production is defined and proxy.production and proxy.dns_cloudflare is defined
- name: Create nginx post renewal hook directory
file:
ansible.builtin.file:
path: /etc/letsencrypt/renewal-hooks/post
state: directory
mode: "500"
when: proxy.production is defined and proxy.production
- name: Install nginx post renewal hook
copy:
ansible.builtin.copy:
src: reload-nginx.sh
dest: /etc/letsencrypt/renewal-hooks/post/reload-nginx.sh
mode: '0755'
when: proxy.production is defined and proxy.production
- name: Run Cloudflare DNS-01 challenges on wildcard domains
shell: '/usr/bin/certbot certonly \
ansible.builtin.shell: '/usr/bin/certbot certonly \
--non-interactive \
--agree-tos \
--email "{{ proxy.dns_cloudflare.email }}" \
@ -93,7 +98,7 @@
notify: reload_nginx
- name: Add HTTP and HTTPS firewall rule
ufw:
community.general.ufw:
rule: allow
port: "{{ item }}"
proto: tcp

View File

@ -35,7 +35,13 @@ server {
client_max_body_size {{ item.client_max_body_size }};
{% endif %}
location / {
{% if item.restrict is defined and item.restrict %}
{% if item.allowedips is defined %}
{% for ip in item.allowedips %}
allow {{ ip }};
{% endfor %}
deny all;
{% endif %}
{% if item.restrict is defined and item.restrict %}
auth_basic "{{ item.restrict_name | default('Restricted Access') }}";
auth_basic_user_file {{ item.restrict_file | default('/etc/nginx/.htpasswd') }};
proxy_set_header Authorization "";
@ -46,6 +52,12 @@ server {
proxy_pass {{ item.proxy_pass }};
{% if item.proxy_ssl_verify is defined and item.proxy_ssl_verify is false %}
proxy_ssl_verify off;
{% endif %}
{% if item.websockets is defined and item.websockets %}
proxy_http_version 1.1;
proxy_set_header Connection $http_connection;
proxy_set_header Origin http://$host;
proxy_set_header Upgrade $http_upgrade;
{% endif %}
}
}
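For reference, a hypothetical proxy.servers entry exercising the new allowedips and websockets options (all values are placeholders):

proxy:
  servers:
    - domain: app.example.com
      proxy_pass: http://127.0.0.1:8096
      allowedips:                 # optional allow-list; everything else gets 'deny all'
        - 192.168.1.0/24
      websockets: true            # emits the Upgrade/Connection/Origin headers
      restrict: false             # basic-auth block stays off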

View File

@ -13,12 +13,12 @@
# along with this program. If not, see <https://www.gnu.org/licenses/>.
- name: Install rsnapshot
apt:
ansible.builtin.apt:
name: rsnapshot
state: present
- name: Create rsnapshot system directories
file:
ansible.builtin.file:
path: "{{ item }}"
state: directory
loop:
@ -26,19 +26,19 @@
- "{{ rsnapshot_logdir }}"
- name: Create snapshot_root directories
file:
ansible.builtin.file:
path: "{{ item.root | default(rsnapshot_root) }}"
state: directory
loop: "{{ rsnapshot }}"
- name: Install rsnapshot configuration
template:
ansible.builtin.template:
src: rsnapshot.conf.j2
dest: "{{ rsnapshot_confdir }}/{{ item.name }}.conf"
loop: "{{ rsnapshot }}"
- name: Install rsnapshot crons
cron:
ansible.builtin.cron:
name: "{{ item.1.interval }} rsnapshot of {{ item.0.name }}"
job: "/usr/bin/rsnapshot -c {{ rsnapshot_confdir }}/{{ item.0.name }}.conf {{ item.1.interval }} >/dev/null"
user: "root"
@ -53,13 +53,13 @@
- cron
- name: Install rsnapshot report script
template:
ansible.builtin.template:
src: rsnapshot-report.sh.j2
dest: /usr/local/bin/rsnapshot-report
mode: '0750'
- name: Install rsnapshot report crons
cron:
ansible.builtin.cron:
name: "{{ item.name }} rsnapshot report email"
job: "/usr/local/bin/rsnapshot-report {{ rsnapshot_reportlog }}
| mail -s '{{ item.report.subject | default('Backup Report') }}' {{ item.report.to }}"

View File

@ -1,12 +1,18 @@
# Container settings
traefik_name: traefik
traefik_dashboard: false
traefik_root: "/opt/{{ traefik_name }}"
traefik_standalone: true
traefik_http_only: false
traefik_debug: false
traefik_web_entry: "127.0.0.1:8000"
traefik_websecure_entry: "127.0.0.1:8443"
traefik_localonly: "10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 127.0.0.0/8"
# HTTPS settings
traefik_production: false
traefik_hsts_enable: false
traefik_hsts_preload: false
traefik_hsts_seconds: 0
traefik_http_redirect: false
traefik_ports:
- "80:80"
- "443:443"
traefik_http_redirect: true
# Host settings
traefik_root: "{{ docker_compose_root }}/{{ traefik_name }}"

View File

@ -1,14 +1,12 @@
- name: Reload Traefik container
file:
ansible.builtin.file:
path: "{{ traefik_root }}/config/dynamic"
state: touch
mode: 0500
listen: reload_traefik
- name: Restart Traefik container
docker_container:
name: "{{ traefik_name }}"
image: traefik:{{ traefik_version }}
state: started
container_default_behavior: "no_defaults"
restart: yes
- name: Restart Traefik
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ traefik_name }}"
state: restarted
listen: restart_traefik

View File

@ -1,56 +1,36 @@
- name: Create Traefik configuration directories
file:
- name: Create Traefik directories
ansible.builtin.file:
path: "{{ traefik_root }}/config/dynamic"
mode: 0500
state: directory
- name: Install static Traefik configuration
template:
src: traefik.yml.j2
dest: "{{ traefik_root }}/config/traefik.yml"
notify: restart_traefik
- name: Install dynamic security configuration
template:
ansible.builtin.template:
src: security.yml.j2
dest: "{{ traefik_root }}/config/dynamic/security.yml"
owner: root
group: root
mode: 0600
mode: 0400
notify: reload_traefik
- name: Install dynamic non-docker configuration
template:
ansible.builtin.template:
src: "external.yml.j2"
dest: "{{ traefik_root }}/config/dynamic/{{ item.name }}.yml"
mode: 0400
loop: "{{ traefik_external }}"
when: traefik_external is defined
- name: Create Traefik network
docker_network:
name: traefik
- name: Install static Traefik configuration
ansible.builtin.template:
src: traefik.yml.j2
dest: "{{ traefik_root }}/config/traefik.yml"
mode: 0400
notify: restart_traefik
- name: Start Traefik container
docker_container:
name: "{{ traefik_name }}"
image: traefik:{{ traefik_version }}
- name: Start Traefik service and enable on boot
ansible.builtin.service:
name: "{{ docker_compose_service }}@{{ traefik_name }}"
state: started
restart_policy: always
ports: "{{ traefik_ports }}"
container_default_behavior: "no_defaults"
networks_cli_compatible: "false"
networks:
- name: traefik
labels:
traefik.http.routers.traefik.rule: "Host(`{{ traefik_domain }}`)"
#traefik.http.middlewares.auth.basicauth.users: "{{ traefik_auth }}"
#traefik.http.middlewares.localonly.ipwhitelist.sourcerange: "{{ traefik_localonly }}"
#traefik.http.routers.traefik.tls.certresolver: letsencrypt
#traefik.http.routers.traefik.middlewares: "securehttps@file,auth@docker,localonly"
traefik.http.routers.traefik.service: "api@internal"
traefik.http.routers.traefik.entrypoints: websecure
traefik.http.routers.traefik.tls: "true"
traefik.docker.network: traefik
traefik.enable: "{{ traefik_dashboard | string }}"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- "{{ traefik_root }}/config:/etc/traefik"
enabled: true
when: traefik.ENABLED | default(false)

View File

@ -0,0 +1,8 @@
# {{ ansible_managed }}
traefik_version={{ traefik_version }}
traefik_name={{ traefik_name }}
traefik_domain={{ traefik_domain }}
traefik_dashboard={{ traefik_dashboard | string | lower }}
traefik_debug={{ traefik_debug | string | lower }}
traefik_web_entry={{ traefik_web_entry }}
traefik_websecure_entry={{ traefik_websecure_entry }}

View File

@ -0,0 +1,25 @@
version: '3.7'
networks:
traefik:
name: traefik
services:
traefik:
image: "traefik:${traefik_version}"
container_name: "${traefik_name}"
ports:
- "${traefik_web_entry}:80"
{% if traefik_standalone and not traefik_http_only %}
- "${traefik_websecure_entry}:443"
{% endif %}
networks:
- traefik
labels:
- "traefik.http.routers.traefik.rule=Host(`{{ traefik_domain }}`)"
- "traefik.http.routers.traefik.service=api@internal"
- "traefik.docker.network=traefik"
- "traefik.enable=${traefik_dashboard}"
volumes:
- /var/run/docker.sock:/var/run/docker.sock
- "{{ traefik_root }}/config:/etc/traefik"

View File

@ -10,7 +10,7 @@ providers:
entrypoints:
web:
address: ':80'
{% if traefik_http_redirect is defined and traefik_http_redirect %}
{% if traefik_http_redirect is defined and traefik_http_redirect and not traefik_http_only %}
http:
redirections:
entrypoint:
@ -18,10 +18,12 @@ entrypoints:
scheme: https
permanent: true
{% endif %}
{% if not traefik_http_only is defined or not traefik_http_only %}
websecure:
address: ':443'
http:
tls: {}
{% endif %}
{% if traefik_acme_email is defined %}
certificatesResolvers:

View File

@ -1,52 +1,52 @@
- name: Install GnuPG
apt:
ansible.builtin.apt:
name: gnupg
state: present
- name: Add AdoptOpenJDK's signing key
apt_key:
ansible.builtin.apt_key:
id: 8ED17AF5D7E675EB3EE3BCE98AC3B29174885C03
url: https://adoptopenjdk.jfrog.io/adoptopenjdk/api/gpg/key/public
- name: Add MongoDB 3.6's signing key
apt_key:
ansible.builtin.apt_key:
id: 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
url: https://www.mongodb.org/static/pgp/server-3.6.asc
- name: Add UniFi's signing key
apt_key:
ansible.builtin.apt_key:
id: 4A228B2D358A5094178285BE06E85760C0A52C50
keyserver: keyserver.ubuntu.com
- name: Install AdoptOpenJDK repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb https://adoptopenjdk.jfrog.io/adoptopenjdk/deb/ buster main
mode: 0644
state: present
- name: Install MongoDB 3.6 repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb http://repo.mongodb.org/apt/debian stretch/mongodb-org/3.6 main
mode: 0644
state: present
- name: Install UniFi repository
apt_repository:
ansible.builtin.apt_repository:
repo: deb https://www.ui.com/downloads/unifi/debian stable ubiquiti
mode: 0644
state: present
- name: Install MongoDB 3.6
apt:
ansible.builtin.apt:
name: mongodb-org
state: present
- name: Install OpenJDK 8 LTS
apt:
ansible.builtin.apt:
name: adoptopenjdk-8-hotspot
state: present
- name: Install UniFi
apt:
ansible.builtin.apt:
name: unifi
state: present

View File

@ -1,5 +1,5 @@
- name: Start WordPress database container
docker_container:
community.general.docker_container:
name: "{{ wordpress_dbcontainer }}"
image: mariadb:{{ wordpress_dbversion }}
restart_policy: always
@ -11,7 +11,7 @@
MYSQL_PASSWORD: "{{ wordpress_dbpass }}"
- name: Start WordPress container
docker_container:
community.general.docker_container:
name: "{{ wordpress_container }}"
image: wordpress:{{ wordpress_version }}
restart_policy: always

Some files were not shown because too many files have changed in this diff.