11 Commits

Author SHA1 Message Date
d8eba3b7be testing 2024-03-05 00:00:53 -05:00
01e8e22c01 Prevent running 'vagrant ssh' as root
Resolve possible issues with 'vagrant ssh' when executed as root
2024-03-04 23:42:40 -05:00
a31bf233dc Slight message tweaks in forward-ssh.sh script 2023-12-09 13:16:46 -05:00
60fafed9cd Update forward-ssh.sh script for Swarm support
- Address limitations in Swarm with loopback binding
- Ensure compatibility with localhost DNS wildcard A record
- Enable port forwarding on 80 and 443 using VM IP for Swarm compatibility
- Retain 8443:localhost:8443 for non-Swarm setups
2023-12-09 13:04:07 -05:00
2c00858590 Update README.md 2023-11-18 17:37:27 -05:00
be80681485 Add multi-machine support to forward-ssh.sh
- Detects multiple private keys
- Adds validation for all discovered keys
- Defaults to "default" machine, with override via the first parameter
2023-11-05 21:37:33 -05:00
a2e60972c7 Comply with linting on proxy setup 2023-11-05 21:34:19 -05:00
598359854f Update proxy role to comply with linting 2023-11-03 00:47:06 -04:00
ef812c1877 Add copyright notice on forward-ssh.sh 2023-11-03 00:12:12 -04:00
385e60aee5 Update proxy playbook 2023-11-02 23:29:54 -04:00
5633468f41 Fix linting issues on Docker role 2023-10-22 13:48:20 -04:00
17 changed files with 218 additions and 86 deletions

View File

@@ -1,41 +1,76 @@
-# Project Moxie
+# Homelab
-Project Moxie is a personal IT homelab project written in Ansible and executed by Jenkins. It is a growing collection of infrastructure as code (IaC) I write out of curiosity and for reference purposes, keeping a handful of beneficial projects managed and secured.
+This project is my personal IT homelab initiative for self-hosting and
+exploring Free and Open Source Software (FOSS) infrastructure. As a technology
+enthusiast and professional, this project is primarily a practical tool for
+hosting services. It serves as a playground for engaging with systems
+technology in functional, intriguing, and gratifying ways. Self-hosting
+empowers individuals to govern their digital space, ensuring that their online
+environments reflect personal ethics rather than centralized entities' opaque
+policies.
+
+Built on Debian Stable, this project utilizes Ansible and Vagrant, providing
+relatively easy-to-use reproducible ephemeral environments to test
+infrastructure automation before pushing to live systems.
 
 ## Quick Start
 To configure a local virtual machine for testing, follow these simple steps.
 
-### Prerequisites
-Vagrant and VirtualBox are used to develop Project Moxie. You will need to install these before continuing.
-
 ### Installation
 1. Clone this repository
 ```
-git clone https://github.com/krislamo/moxie
+git clone https://git.krislamo.org/kris/homelab
+```
+Optionally clone from the GitHub mirror instead:
+```
+git clone https://github.com/krislamo/homelab
 ```
 2. Set the `PLAYBOOK` environmental variable to a development playbook name in the `dev/` directory
-The following `PLAYBOOK` names are available: `dockerbox`, `hypervisor`, `minecraft`, `bitwarden`, `nextcloud`, `nginx`
+To list available options in the `dev/` directory and choose a suitable PLAYBOOK, run:
+```
+ls dev/*.yml | xargs -n 1 basename -s .yml
+```
+Export the `PLAYBOOK` variable
 ```
 export PLAYBOOK=dockerbox
 ```
-3. Bring the Vagrant box up
+3. Clean up any previous provision and build the VM
 ```
-vagrant up
+make clean && make
 ```
 
-#### Copyright and License
-Copyright (C) 2020-2021 Kris Lamoureux
+## Vagrant Settings
+The Vagrantfile configures the environment based on settings from `.vagrant.yml`,
+with default values including:
+- PLAYBOOK: `default`
+  - Runs a `default` playbook that does nothing.
+  - You can set this by an environmental variable with the same name.
+- VAGRANT_BOX: `debian/bookworm64`
+  - Current Debian Stable codename
+- VAGRANT_CPUS: `2`
+  - Threads or cores per node, depending on CPU architecture
+- VAGRANT_MEM: `2048`
+  - Specifies the amount of memory (in MB) allocated
+- SSH_FORWARD: `false`
+  - Enable this if you need to forward SSH agents to the Vagrant machine
+
+## Copyright and License
+Copyright (C) 2019-2023 Kris Lamoureux
 
 [![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
 
-This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, version 3 of the License.
+This program is free software: you can redistribute it and/or modify it under
+the terms of the GNU General Public License as published by the Free Software
+Foundation, version 3 of the License.
 
-This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
+This program is distributed in the hope that it will be useful, but WITHOUT ANY
+WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A
+PARTICULAR PURPOSE. See the GNU General Public License for more details.
 
-You should have received a copy of the GNU General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
+You should have received a copy of the GNU General Public License along with
+this program. If not, see <https://www.gnu.org/licenses/>.
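The new Vagrant Settings section above lists the keys that `.vagrant.yml` recognizes. As a rough sketch of a local override file (assuming a flat key/value YAML layout, which the Vagrantfile itself would confirm; the key names come from the README, the values are only examples):
```
# .vagrant.yml — hypothetical local overrides; key names from the README above,
# the flat layout and these values are assumptions for illustration
PLAYBOOK: dockerbox        # run dev/dockerbox.yml instead of the no-op default
VAGRANT_BOX: debian/bookworm64
VAGRANT_CPUS: 4
VAGRANT_MEM: 4096
SSH_FORWARD: false         # set true to forward your SSH agent into the VM
```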

View File

@@ -7,6 +7,7 @@ users:
     uid: 1001
     gid: 1001
     home: true
+    system: true
 
 # Import my GPG key for git signature verification
 root_gpgkeys:
@@ -35,6 +36,8 @@ docker_compose_deploy:
     url: https://github.com/krislamo/gitea
     version: b0ce66f6a1ab074172eed79eeeb36d7e9011ef8f
     enabled: true
+    trusted_keys:
+      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
     env:
       USER_UID: "{{ users.git.uid }}"
       USER_GID: "{{ users.git.gid }}"

View File

@@ -4,6 +4,18 @@ base_domain: local.krislamo.org
 allow_reboot: false
 manage_network: false
+users:
+  git:
+    uid: 1001
+    gid: 1001
+    home: true
+    system: true
+
+# Import my GPG key for git signature verification
+root_gpgkeys:
+  - name: kris@lamoureux.io
+    id: FBF673CEEC030F8AECA814E73EDA9C3441EDA925
 
 # proxy
 proxy:
   #production: true
@@ -15,14 +27,49 @@ proxy:
     - "{{ base_domain }}"
   servers:
     - domain: "{{ bitwarden_domain }}"
-      proxy_pass: "http://127.0.0.1:8080"
+      proxy_pass: "http://127.0.0.1"
    - domain: "{{ gitea_domain }}"
-      proxy_pass: "http://127.0.0.1:3000"
+      proxy_pass: "http://127.0.0.1"
 
 # docker
+docker_official: true # docker's apt repos
 docker_users:
   - vagrant
+docker_compose_env_nolog: false # dev only setting
+docker_compose_deploy:
+  # Traefik
+  - name: traefik
+    url: https://github.com/krislamo/traefik
+    version: e97db75e2e214582fac5f5e495687ab5cdf855ad
+    path: docker-compose.web.yml
+    enabled: true
+    accept_newhostkey: true
+    trusted_keys:
+      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
+    env:
+      ENABLE: true
+  # Gitea
+  - name: gitea
+    url: https://github.com/krislamo/gitea
+    version: b0ce66f6a1ab074172eed79eeeb36d7e9011ef8f
+    enabled: true
+    trusted_keys:
+      - FBF673CEEC030F8AECA814E73EDA9C3441EDA925
+    env:
+      ENTRYPOINT: web
+      ENABLE_TLS: false
+      USER_UID: "{{ users.git.uid }}"
+      USER_GID: "{{ users.git.gid }}"
+      DB_PASSWD: "{{ gitea.DB_PASSWD }}"
+
+# gitea
+gitea_domain: "git.{{ base_domain }}"
+gitea:
+  DB_NAME: gitea
+  DB_USER: gitea
+  DB_PASSWD: password
 
 # bitwarden
 # Get Installation ID & Key at https://bitwarden.com/host/
 bitwarden_domain: "vault.{{ base_domain }}"
@@ -30,8 +77,3 @@ bitwarden_dbpass: password
 bitwarden_install_id: 4ea840a3-532e-4cb6-a472-abd900728b23
 bitwarden_install_key: 1yB3Z2gRI0KnnH90C6p
 #bitwarden_prodution: true
-
-# gitea
-gitea_domain: "git.{{ base_domain }}"
-gitea_version: 1
-gitea_dbpass: password

View File

@@ -5,8 +5,8 @@
     - host_vars/proxy.yml
   roles:
     - base
-    - mariadb
     - proxy
     - docker
+    - mariadb
     - gitea
     - bitwarden

View File

@@ -1,17 +1,33 @@
 #!/bin/bash
 # Finds the SSH private key under ./.vagrant and connects to
-# the Vagrant box, port forwarding localhost ports: 8443, 80, 443
+# the Vagrant box, port forwarding localhost ports: 8443, 443, 80, 22
+#
+# Download the latest script:
+# https://git.krislamo.org/kris/homelab/raw/branch/main/forward-ssh.sh
+#
+# Copyright (C) 2023 Kris Lamoureux
+#
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, version 3 of the License.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program. If not, see <https://www.gnu.org/licenses/>.
 
 # Root check
 if [ "$EUID" -ne 0 ]; then
-    echo "[ERROR]: Please run script as root"
+    echo "[ERROR]: Please run this script as root"
     exit 1
 fi
 
 # Clean environment
 unset PRIVATE_KEY
+unset HOST_IP
 unset MATCH_PATTERN
 unset PKILL_ANSWER
@@ -24,8 +40,8 @@ function ssh_connect {
             printf "[INFO]: Starting new vagrant SSH tunnel on PID "
             sudo -u "$USER" ssh -fNT -i "$PRIVATE_KEY" \
                 -L 22:localhost:22 \
-                -L 80:localhost:80 \
-                -L 443:localhost:443 \
+                -L 80:"$HOST_IP":80 \
+                -L 443:"$HOST_IP":443 \
                 -L 8443:localhost:8443 \
                 -o UserKnownHostsFile=/dev/null \
                 -o StrictHostKeyChecking=no \
@@ -34,28 +50,50 @@
             pgrep -f "$MATCH_PATTERN"
             ;;
         *)
-            echo "[INFO]: Delined to start a new vagrant SSH tunnel"
+            echo "[INFO]: Declined to start a new vagrant SSH tunnel"
             exit 0
             ;;
     esac
 }
 
 # Check for valid PRIVATE_KEY location
-PRIVATE_KEY="$(find .vagrant -name "private_key" 2>/dev/null)"
-if ! ssh-keygen -l -f "$PRIVATE_KEY" &>/dev/null; then
+PRIVATE_KEY="$(find .vagrant -name "private_key" 2>/dev/null | sort)"
+
+# Single vagrant machine or multiple
+if [ "$(echo "$PRIVATE_KEY" | wc -l)" -gt 1 ]; then
+    while IFS= read -r KEYFILE; do
+        if ! ssh-keygen -l -f "$KEYFILE" &>/dev/null; then
+            echo "[ERROR]: The SSH key '$KEYFILE' is not valid. Are your virtual machines running?"
+            exit 1
+        fi
+        echo "[CHECK]: Valid key at $KEYFILE"
+    done < <(echo "$PRIVATE_KEY")
+    PRIVATE_KEY="$(echo "$PRIVATE_KEY" | grep -m1 "${1:-default}")"
+elif ! ssh-keygen -l -f "$PRIVATE_KEY" &>/dev/null; then
     echo "[ERROR]: The SSH key '$PRIVATE_KEY' is not valid. Is your virtual machine running?"
     exit 1
+else
+    echo "[CHECK]: Valid key at $PRIVATE_KEY"
 fi
-echo "[CHECK]: Valid key at $PRIVATE_KEY"
 
 # Grab first IP or use whatever HOST_IP_FIELD is set to and check that the guest is up
-HOST_IP="$(vagrant ssh -c "hostname -I | cut -d' ' -f${HOST_IP_FIELD:-1}" 2>/dev/null)"
-HOST_IP="${HOST_IP::-1}" # trim
+if [ -z "$HOST_IP" ]; then
+    HOST_IP="$(sudo -u "$SUDO_USER" vagrant ssh -c "hostname -I | cut -d' ' -f${HOST_IP_FIELD:-1}" "${1:-default}" 2>/dev/null)"
+    if [ -z "$HOST_IP" ]; then
+        echo "[ERROR]: Failed to find ${1:-default}'s IP"
+        exit 1
+    fi
+    HOST_IP="${HOST_IP::-1}" # trim
+else
+    echo "[INFO]: HOST_IP configured by the shell environment"
+fi
 
 if ! ping -c 1 "$HOST_IP" &>/dev/null; then
     echo "[ERROR]: Cannot ping the host IP '$HOST_IP'"
     exit 1
 fi
-echo "[CHECK]: Host at $HOST_IP is up"
+echo "[CHECK]: Host at $HOST_IP (${1:-default}) is up"
 
 # Pattern for matching processes running
 MATCH_PATTERN="ssh -fNT -i ${PRIVATE_KEY}.*vagrant@"

View File

@@ -5,7 +5,12 @@
   listen: rebuild_bitwarden
 
 - name: Rebuild Bitwarden
-  ansible.builtin.shell: "{{ bitwarden_root }}/bitwarden.sh rebuild"
+  ansible.builtin.command: "{{ bitwarden_root }}/bitwarden.sh rebuild"
+  listen: rebuild_bitwarden
+
+- name: Reload systemd manager configuration
+  ansible.builtin.systemd:
+    daemon_reload: true
   listen: rebuild_bitwarden
 
 - name: Start Bitwarden after rebuild
@@ -14,3 +19,10 @@
     state: started
     enabled: true
   listen: rebuild_bitwarden
+
+- name: Create Bitwarden's initial log file
+  ansible.builtin.file:
+    path: "{{ bitwarden_logs_identity }}/{{ bitwarden_logs_identity_date }}.txt"
+    state: touch
+    mode: "644"
+  listen: touch_bitwarden

View File

@@ -7,6 +7,7 @@
   ansible.builtin.file:
     path: "{{ bitwarden_root }}"
     state: directory
+    mode: "755"
 
 - name: Download Bitwarden script
   ansible.builtin.get_url:
@@ -22,22 +23,23 @@
     mode: u+x
 
 - name: Run Bitwarden installation script
-  ansible.builtin.shell: "{{ bitwarden_root }}/bw_wrapper"
+  ansible.builtin.command: "{{ bitwarden_root }}/bw_wrapper"
   args:
     creates: "{{ bitwarden_root }}/bwdata/config.yml"
 
-- name: Install docker-compose override
+- name: Install compose override
   ansible.builtin.template:
     src: compose.override.yml.j2
     dest: "{{ bitwarden_root }}/bwdata/docker/docker-compose.override.yml"
-  when: traefik_version is defined
+    mode: "644"
+  when: bitwarden_override | default(true)
   notify: rebuild_bitwarden
 
 - name: Disable bitwarden-nginx HTTP on 80
   ansible.builtin.replace:
     path: "{{ bitwarden_root }}/bwdata/config.yml"
     regexp: "^http_port: 80$"
-    replace: "http_port: 127.0.0.1:8080"
+    replace: "http_port: {{ bitwarden_http_port | default('127.0.0.1:9080') }}"
   when: not bitwarden_standalone
   notify: rebuild_bitwarden
@@ -45,7 +47,7 @@
   ansible.builtin.replace:
     path: "{{ bitwarden_root }}/bwdata/config.yml"
     regexp: "^https_port: 443$"
-    replace: "https_port: 127.0.0.1:8443"
+    replace: "https_port: {{ bitwarden_https_port | default('127.0.0.1:9443') }}"
   when: not bitwarden_standalone
   notify: rebuild_bitwarden
@@ -76,6 +78,7 @@
   ansible.builtin.template:
     src: bitwarden.service.j2
     dest: "/etc/systemd/system/{{ bitwarden_name }}.service"
+    mode: "644"
   register: bitwarden_systemd
   notify: rebuild_bitwarden
@@ -83,22 +86,12 @@
   ansible.builtin.file:
     path: "{{ bitwarden_logs_identity }}"
     state: directory
-  register: bitwarden_logs
+    mode: "755"
+  notify: touch_bitwarden
 
-- name: Create Bitwarden's initial log file
-  ansible.builtin.file:
-    path: "{{ bitwarden_logs_identity }}/{{ bitwarden_logs_identity_date }}.txt"
-    state: touch
-  when: bitwarden_logs.changed
 
 - name: Install Bitwarden's Fail2ban jail
   ansible.builtin.template:
     src: fail2ban-jail.conf.j2
     dest: /etc/fail2ban/jail.d/bitwarden.conf
+    mode: "640"
   notify: restart_fail2ban
 
-- name: Reload systemd manager configuration
-  ansible.builtin.systemd:
-    daemon_reload: true
-  when: bitwarden_systemd.changed
-  notify: rebuild_bitwarden
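For reference, the updated tasks above replace the hard-coded 8080/8443 bindings with the `bitwarden_http_port` and `bitwarden_https_port` variables (defaulting to `127.0.0.1:9080` and `127.0.0.1:9443`) and gate the compose override on `bitwarden_override` (default true). A hypothetical host_vars snippet that pins the old ports back might look like this (the variable names come from the diff above; the values are only examples):
```
# host_vars sketch — illustrative overrides for the new Bitwarden variables
bitwarden_override: true               # keep installing the compose override
bitwarden_http_port: "127.0.0.1:8080"  # old HTTP binding
bitwarden_https_port: "127.0.0.1:8443" # old HTTPS binding
```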

View File

@@ -23,10 +23,13 @@ send "{{ bitwarden_install_id }}\r"
expect "Enter your installation key:" expect "Enter your installation key:"
send "{{ bitwarden_install_key }}\r" send "{{ bitwarden_install_key }}\r"
expect "Do you have a SSL certificate to use? (y/n):" expect "Enter your region (US/EU) \\\[US\\\]:"
send "US\r"
expect "Do you have a SSL certificate to use? (y/N):"
send "n\r" send "n\r"
expect "Do you want to generate a self-signed SSL certificate? (y/n):" expect "Do you want to generate a self-signed SSL certificate? (y/N):"
{% if bitwarden_standalone and not bitwarden_production %} {% if bitwarden_standalone and not bitwarden_production %}
send "y\r" send "y\r"
{% else %} {% else %}

View File

@@ -6,13 +6,11 @@ services:
       - traefik
     labels:
       traefik.http.routers.bitwarden.rule: "Host(`{{ bitwarden_domain }}`)"
-      traefik.http.routers.bitwarden.entrypoints: websecure
-      traefik.http.routers.bitwarden.tls.certresolver: letsencrypt
-      traefik.http.routers.bitwarden.middlewares: "securehttps@file"
+      traefik.http.routers.bitwarden.entrypoints: {{ bitwarden_entrypoint | default('web') }}
+      traefik.http.routers.bitwarden.tls: {{ bitwarden_traefik_tls | default('false') }}
       traefik.http.services.bitwarden.loadbalancer.server.port: 8080
       traefik.docker.network: traefik
       traefik.enable: "true"
 
 networks:
   traefik:
     external: true

View File

@@ -8,4 +8,4 @@ docker_compose: "{{ (docker_official | bool) | ternary('/usr/bin/docker compose'
 docker_official: false
 docker_repos_keys: "{{ docker_repos_path }}/.keys"
 docker_repos_keytype: rsa
 docker_repos_path: /srv/.compose_repos

View File

@@ -4,7 +4,7 @@
   listen: compose_systemd
 
 - name: Find which services had a docker-compose.yml updated
-  set_fact:
+  ansible.builtin.set_fact:
     compose_restart_list: "{{ (compose_restart_list | default([])) + [item.item.name] }}"
   loop: "{{ compose_update.results }}"
   loop_control:
@@ -13,7 +13,7 @@
   listen: compose_restart
 
 - name: Find which services had their .env updated
-  set_fact:
+  ansible.builtin.set_fact:
     compose_restart_list: "{{ (compose_restart_list | default([])) + [item.item.name] }}"
   loop: "{{ compose_env_update.results }}"
   loop_control:
@@ -29,20 +29,20 @@
   listen: restart_mariadb # hijack handler for early restart
 
 - name: Set MariaDB as restarted
-  set_fact:
+  ansible.builtin.set_fact:
     mariadb_restarted: true
   when: not mariadb_restarted
   listen: restart_mariadb
 
-- name: Restart {{ docker_compose_service }} services
+- name: Restart compose services
   ansible.builtin.systemd:
     state: restarted
     name: "{{ docker_compose_service }}@{{ item }}"
-  loop: "{{ compose_restart_list | unique }}"
+  loop: "{{ compose_restart_list | default([]) | unique }}"
   when: compose_restart_list is defined
   listen: compose_restart
 
-- name: Start {{ docker_compose_service }} services and enable on boot
+- name: Start compose services and enable on boot
   ansible.builtin.service:
     name: "{{ docker_compose_service }}@{{ item.name }}"
     state: started

View File

@@ -3,6 +3,9 @@
url: "{{ docker_apt_keyring_url }}" url: "{{ docker_apt_keyring_url }}"
dest: "{{ docker_apt_keyring }}" dest: "{{ docker_apt_keyring }}"
checksum: "sha256:{{ docker_apt_keyring_hash }}" checksum: "sha256:{{ docker_apt_keyring_hash }}"
mode: "644"
owner: root
group: root
when: docker_official when: docker_official
- name: Remove official Docker APT key - name: Remove official Docker APT key

View File

@@ -21,6 +21,7 @@
 - name: Create git's .ssh directory
   ansible.builtin.file:
     path: /home/git/.ssh
+    mode: "700"
     state: directory
 
 - name: Generate git's SSH keys
@@ -40,6 +41,7 @@
 - name: Create git's authorized_keys file
   ansible.builtin.file:
     path: /home/git/.ssh/authorized_keys
+    mode: "600"
     state: touch
   when: not git_authkeys.stat.exists
@@ -53,21 +55,24 @@
   ansible.builtin.template:
     src: gitea.sh.j2
     dest: /usr/local/bin/gitea
-    mode: 0755
+    mode: "755"
 
 - name: Create Gitea's logging directory
   ansible.builtin.file:
     name: /var/log/gitea
     state: directory
+    mode: "755"
 
 - name: Install Gitea's Fail2ban filter
   ansible.builtin.template:
     src: fail2ban-filter.conf.j2
     dest: /etc/fail2ban/filter.d/gitea.conf
+    mode: "644"
   notify: restart_fail2ban
 
 - name: Install Gitea's Fail2ban jail
   ansible.builtin.template:
     src: fail2ban-jail.conf.j2
     dest: /etc/fail2ban/jail.d/gitea.conf
+    mode: "640"
   notify: restart_fail2ban

View File

@@ -6,7 +6,7 @@
   listen: restart_mariadb
 
 - name: Set MariaDB as restarted
-  set_fact:
+  ansible.builtin.set_fact:
     mariadb_restarted: true
   when: not mariadb_restarted
   listen: restart_mariadb

View File

@@ -4,7 +4,7 @@
     state: present
 
 - name: Set MariaDB restarted fact
-  set_fact:
+  ansible.builtin.set_fact:
     mariadb_restarted: false
 
 - name: Regather facts for the potentially new docker0 interface

View File

@@ -1,3 +1,13 @@
+- name: Enable nginx sites configuration
+  ansible.builtin.file:
+    src: "/etc/nginx/sites-available/{{ item.item.domain }}.conf"
+    dest: "/etc/nginx/sites-enabled/{{ item.item.domain }}.conf"
+    state: link
+    mode: "400"
+  loop: "{{ nginx_sites.results }}"
+  when: item.changed
+  listen: reload_nginx
+
 - name: Reload nginx
   ansible.builtin.service:
     name: nginx

View File

@@ -19,28 +19,18 @@
   ansible.builtin.template:
     src: nginx.conf.j2
     dest: /etc/nginx/nginx.conf
-    mode: 0644
+    mode: "644"
   notify: reload_nginx
 
 - name: Install nginx sites configuration
   ansible.builtin.template:
     src: server-nginx.conf.j2
     dest: "/etc/nginx/sites-available/{{ item.domain }}.conf"
-    mode: 0400
+    mode: "400"
   loop: "{{ proxy.servers }}"
   notify: reload_nginx
   register: nginx_sites
 
-- name: Enable nginx sites configuration
-  ansible.builtin.file:
-    src: "/etc/nginx/sites-available/{{ item.item.domain }}.conf"
-    dest: "/etc/nginx/sites-enabled/{{ item.item.domain }}.conf"
-    state: link
-    mode: 0400
-  loop: "{{ nginx_sites.results }}"
-  when: item.changed
-  notify: reload_nginx
-
 - name: Generate self-signed certificate
   ansible.builtin.command: 'openssl req -newkey rsa:4096 -x509 -sha256 -days 3650 -nodes \
     -subj "/C=US/ST=Local/L=Local/O=Org/OU=IT/CN=example.com" \
@@ -61,14 +51,14 @@
   ansible.builtin.template:
     src: cloudflare.ini.j2
     dest: /root/.cloudflare.ini
-    mode: 0400
+    mode: "400"
   when: proxy.production is defined and proxy.production and proxy.dns_cloudflare is defined
 
 - name: Create nginx post renewal hook directory
   ansible.builtin.file:
     path: /etc/letsencrypt/renewal-hooks/post
     state: directory
-    mode: 0500
+    mode: "500"
   when: proxy.production is defined and proxy.production
 
 - name: Install nginx post renewal hook