Merge pull request 'Larger overhaul, generalisation and stabilisation' (#5) from overhaul into master

Reviewed-on: https://git.fsfe.org/fsfe-system-hackers/innernet-playbook/pulls/5
Committed by Max Mehl, 2022-03-04 11:53:35 +00:00, commit f9413a9b7a
10 changed files with 334 additions and 242 deletions

.dockerignore (new file)

@ -0,0 +1 @@
*

.gitignore

@ -1,3 +1,2 @@
# peer invitation files
/roles/client/files/*.toml


@ -16,8 +16,15 @@ There is a need for some of our servers to connect to other IPv6-only hosts. Sin
![An overview](fsfe-innernet.png)

You can learn more about innernet by looking at its [source code](./innernet-src) or reading this informative [blog post](https://blog.tonari.no/introducing-innernet) by its creator.

# Preparation

## Requirements

* A somewhat recent version of `ansible`
* `git`

## Clone the repo

```bash
@ -25,39 +32,47 @@ git clone --recurse-submodules git@git.fsfe.org:fsfe-system-hackers/innernet-pla
cd innernet-playbook
```

# Deployment

In the scope of this playbook and its roles, we have three different categories of computers:

1. The innernet server, being the central connector of all innernet peers
2. Automatically managed machines that are innernet peers, mostly VMs
3. Manually managed peers, for example admins and other humans

## Configure server and all clients

Run the whole playbook to configure everything. For the innernet server and automatically managed machines, everything will be handled. For the manually managed peers, you will be given an invitation file.

```bash
ansible-playbook deploy.yml
```
## Add a new machine
In order to add e.g. a virtual machine to the network, run these steps:

1. In the inventory, add the host to the `innernet_client` group
2. Run the playbook with `ansible-playbook -l newserver.org deploy.yml`
This will configure the necessary parts both on the server and on the new machine.
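The two steps can be sketched as follows; `newserver.org` is just the example host name used above, and the INI-style inventory layout is an assumption about your setup:

```shell
# Step 1 (hypothetical INI-style inventory entry):
#
#   [innernet_client]
#   newserver.org
#
# Step 2: limit the playbook run to the new host
ansible-playbook -l newserver.org deploy.yml
```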
## Add a new manually managed peer
In order to add a new human or otherwise manually managed innernet peer, run
these steps:
1. In `all.yml`, add a new entry to `manual_peers`
2. Run the playbook with `ansible-playbook -l innernet_server deploy.yml`
3. Install innernet on the new peer's computer and import the invitation
   file (see below); the invitation files are stored in `roles/client/files/`
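As an illustration, a hypothetical new entry in `manual_peers` could look like this (the peer name `jane` is made up; the key names follow the comments in the vars file):

```yaml
manual_peers:
  jane:            # hypothetical peer; alphanumeric chars and dashes only
    cidr: others   # name of the CIDR the peer shall belong to
    admin: false   # optional; defaults to false
```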
### Distribute the invitation files

Some invitation files are for humans, so you need to send these files to them securely. We suggest using something like `wormhole`.
```bash
sudo apt install magic-wormhole
@ -65,8 +80,34 @@
cd roles/client/files
wormhole send <name_of_peer>.toml
```
# Update

Since [innernet](https://github.com/tonarino/innernet) is new software, it is not yet included in the Debian repositories. Thus, before running the playbook we need to build the `innernet` and `innernet-server` binaries.

In order to switch to a newer version of innernet, run the following steps:

1. Check out the desired tag in the `innernet-src` submodule
2. Run the build script: `./build-debs.sh` (requires [`cargo-deb`](https://github.com/kornelski/cargo-deb))
3. Run the playbook with `ansible-playbook -t update deploy.yml`
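The steps above boil down to something like the following; `v1.5.1` is only an example tag, pick whatever upstream release you need:

```shell
cd innernet-src
git fetch --tags
git checkout v1.5.1   # hypothetical target tag
cd ..
./build-debs.sh       # needs cargo and cargo-deb on the build machine
ansible-playbook -t update deploy.yml
```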
# Associations

The different CIDRs can have [associations](https://github.com/tonarino/innernet#adding-associations-between-cidrs), e.g. so that admins can access machines although they are not in the same subnet. These have to be configured by an admin!

Currently, the `admins` CIDR is associated with all other CIDRs (i.e. `humans` > `others` and `machines`).
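Associations are managed from an admin peer with the innernet CLI; the upstream README documents an `add-association` subcommand, which would look roughly like this for the `fsfe` network used here:

```shell
# Run on an admin peer; innernet interactively asks which two CIDRs to associate
innernet add-association fsfe
```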
# Ansible tags
Some tags allow you to run only certain operations. These are the currently
available ones:
* `cidr`: configure CIDRs
* `update`: update the innernet binaries
* `listen_port`: edit/set the listen port between server and clients
* `uninstall`: delete innernet configuration and packages from systems
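Tags are passed to `ansible-playbook` with `-t`/`--tags`, for example:

```shell
# only (re)set the listen ports
ansible-playbook -t listen_port deploy.yml
# several tags can be combined
ansible-playbook -t cidr,listen_port deploy.yml
```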


@ -3,68 +3,36 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- hosts: all
  remote_user: root
  tasks:
    - pause:
        prompt: "You are using a function to UNINSTALL innernet on the chosen hosts {{ play_hosts }}. Continue? (yes/no)?"
      tags: [never, uninstall]
      register: uninstall_confirm

    - fail:
        msg: Aborted uninstallation of innernet.
      tags: [never, uninstall]
      when: not uninstall_confirm.user_input | bool

    - name: Get innernet-server hostname from inventory groups
      set_fact:
        # Assuming that we only have one innernet server, we take the first
        # occurrence
        innernet_server: "{{ groups['innernet_server'][0] }}"
      run_once: true

- hosts: innernet_server
  remote_user: root
  roles:
    - server

- hosts: innernet_client
  remote_user: root
  roles:
    - client


@ -5,45 +5,54 @@ network_name: "fsfe"
# 65,536 usable IP addresses
network_cidr: "10.200.0.0/16"
# wireguard listening port
network_listen_port_clients: "51820"
network_listen_port_server: "51820"

cidrs:
  ## humans
  ## 10.200.16.1 to 10.200.31.254
  ## 4,096 usable IP addresses
  humans:
    parent: fsfe
    cidr: 10.200.16.0/20
  ### humans > admins
  ### 10.200.16.1 to 10.200.19.254
  ### 1,024 usable IP addresses
  admins:
    parent: humans
    cidr: 10.200.16.0/22
  ### humans > others
  ### 10.200.20.1 to 10.200.23.254
  ### 1,024 usable IP addresses
  others:
    parent: humans
    cidr: 10.200.20.0/22
  ## machines
  ## 10.200.64.1 to 10.200.127.254
  ## with 16,384 usable IP addresses
  machines:
    parent: fsfe
    cidr: 10.200.64.0/18

# name of the CIDR you want to use for the client role,
# so automatically configured peers (typically VMs)
machine_cidr: machines

# Peers that are configured manually, typically humans. The created invitation
# file will be stored on the controller machine and has to be imported on the
# person's computer manually.
# * the key (e.g. "linus") is limited to alphanumeric chars and dashes, no dots
# * "cidr" is the name of the CIDR the user shall belong to
# * "admin" defines whether the peer should be an admin (true/false). Default: false
manual_peers:
  linus:
    cidr: admins
    admin: true
  max-mehl:
    cidr: admins
    admin: true
  albert:
    cidr: admins
    admin: true

Binary file not shown.


@ -3,91 +3,121 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- name: Convert hostname to innernet peer name
  # we want the mere host name before the domain, so e.g.
  # * server1.fsfe.org -> server1
  # * cont1.noris.fsfeurope.org -> cont1-noris
  set_fact:
    innernet_client: "{{ innernet_client | replace(item.0, item.1) }}"
  vars:
    - innernet_client: "{{ ansible_host }}"
  loop:
    - ['.', '-']
    - ['-fsfeurope-org', '']
    - ['-fsfe-org', '']
    - ['-fsfe-be', '']

- name: Gather which packages are installed on the client
  tags: [update, uninstall]
  package_facts:
    manager: auto

- name: Make sure needed packages for innernet and wireguard are installed
  apt:
    package:
      - python3-pexpect
      - rsync
      - wireguard
      - wireguard-tools
      - ufw

- name: Remove existing innernet configuration on client
  tags: [never, uninstall]
  expect:
    command: "innernet uninstall {{ network_name }}"
    responses:
      (?i)delete: "yes"
  when: "'innernet' in ansible_facts.packages"

- name: Remove innernet package on client
  tags: [never, uninstall]
  apt:
    name: innernet
    state: absent
    purge: yes
  when: "'innernet' in ansible_facts.packages"

- name: Install innernet package on client
  tags: [update]
  block:
    - name: Copy innernet package to client
      synchronize:
        src: "innernet.deb"
        dest: "/tmp/innernet.deb"

    - name: Install innernet client package
      apt:
        deb: "/tmp/innernet.deb"
        update_cache: true
        install_recommends: true
  # If 1. innernet not installed or 2. `update` tag executed
  when: "'innernet' not in ansible_facts.packages or 'update' in ansible_run_tags"

- name: Get existing peers from innernet-server database
  shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from peers;"'
  register: existing_peers
  delegate_to: "{{ innernet_server }}"
  run_once: true

- name: Add machine as innernet peer
  include_role:
    name: server
    tasks_from: add_peer
  args:
    apply:
      delegate_to: "{{ innernet_server }}"
  vars:
    peer_name: "{{ innernet_client }}"
    # Value of the CIDR we defined as the CIDR for machines
    peer_cidr: "{{ machine_cidr }}"
    # machines are never admins
    peer_admin: "false"
  when:
    - innernet_client not in existing_peers.stdout_lines

- name: Install innernet peer invitation on machine
  block:
    - name: Copy peer invitation file from controller to client
      copy:
        src: "{{ innernet_client }}.toml"
        dest: "/root/{{ innernet_client }}.toml"

    - name: Install peer invitation on client
      shell: |
        innernet install /root/{{ innernet_client }}.toml \
          --default-name \
          --delete-invite
  when:
    - innernet_client not in existing_peers.stdout_lines

- name: Set listen port on client
  tags: [listen_port]
  shell: |
    innernet set-listen-port {{ network_name }} \
      -l {{ network_listen_port_clients }} \
      --yes

- name: Allow UDP traffic on WireGuard port
  tags: [listen_port]
  ufw:
    to_port: "{{ network_listen_port_clients }}"
    rule: allow
    proto: udp

- name: Restart and enable innernet daemon
  tags: [update, listen_port]
  systemd:
    name: "innernet@{{ network_name }}"
    state: restarted
    enabled: yes
    daemon_reload: yes
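The hostname-to-peer-name conversion at the top of these client tasks can be sketched in plain shell (the example hostnames are taken from the task comments; the `to_peer_name` helper is just for illustration):

```shell
#!/bin/sh
# Replace dots with dashes, then strip the known domain suffixes,
# mirroring the replace() chain in the set_fact task.
to_peer_name() {
  printf '%s\n' "$1" | sed \
    -e 's/\./-/g' \
    -e 's/-fsfeurope-org//' \
    -e 's/-fsfe-org//' \
    -e 's/-fsfe-be//'
}

to_peer_name "server1.fsfe.org"          # server1
to_peer_name "cont1.noris.fsfeurope.org" # cont1-noris
```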

Binary file not shown.


@ -0,0 +1,46 @@
# SPDX-FileCopyrightText: 2021 Free Software Foundation Europe <https://fsfe.org>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- name: Make sure peer invitation does not exist before creating a new one
  file:
    path: "/root/{{ peer_name }}.toml"
    state: absent

- name: Add innernet peer on server
  shell: |
    innernet-server add-peer "{{ network_name }}" \
      --name "{{ peer_name }}" \
      --cidr "{{ peer_cidr }}" \
      --admin "{{ peer_admin | lower }}" \
      --save-config "/root/{{ peer_name }}.toml" \
      --invite-expires "14d" \
      --auto-ip \
      --yes
  throttle: 1

- name: Copy peer invitation file from server to controller
  fetch:
    src: "/root/{{ peer_name }}.toml"
    dest: "{{ playbook_dir }}/roles/client/files/{{ peer_name }}.toml"
    flat: yes
    fail_on_missing: yes

- name: Delete peer invitation file on server
  file:
    path: "/root/{{ peer_name }}.toml"
    state: absent

- name: Inform about invitation file
  debug:
    msg: "
      {% if manual is defined and manual %}
      ATTENTION! Now you have to install the peer invitation file for
      {{ peer_name }} manually. You will find it here:
      {% else %}
      The peer invitation file has been downloaded to your computer. It will
      be installed automatically on the machine, so if everything succeeded,
      you can safely delete it here.
      {% endif %}
      {{ playbook_dir }}/roles/client/files/{{ peer_name }}.toml
      "


@ -3,48 +3,61 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- name: Gather which packages are installed on the server
  tags: [update, uninstall]
  package_facts:
    manager: auto

- name: Make sure needed packages for innernet and wireguard are installed
  apt:
    package:
      - python3-pexpect
      - rsync
      - sqlite3
      - wireguard
      - wireguard-tools
      - ufw

- name: Remove existing innernet configuration
  tags: [never, uninstall]
  expect:
    command: "innernet-server uninstall {{ network_name }}"
    responses:
      (?i)delete: "yes"
  when: "'innernet-server' in ansible_facts.packages"

- name: Remove innernet package on server
  tags: [never, uninstall]
  apt:
    name: innernet-server
    state: absent
    purge: yes
  when: "'innernet-server' in ansible_facts.packages"

- name: Install innernet package on server
  tags: [update]
  block:
    - name: Copy innernet-server package to server
      synchronize:
        src: "innernet-server.deb"
        dest: "/tmp/innernet-server.deb"

    - name: Install innernet-server package
      apt:
        deb: "/tmp/innernet-server.deb"
        update_cache: true
        install_recommends: true
  # If 1. innernet-server not installed or 2. `update` tag executed
  when: "'innernet-server' not in ansible_facts.packages or 'update' in ansible_run_tags"

- name: Check if innernet network is initialised
  stat:
    path: "/etc/innernet-server/{{ network_name }}.conf"
  register: conf_file

- name: Create base network if not existent yet
  shell: |
    innernet-server new \
      --network-name "{{ network_name }}" \
@ -53,71 +66,56 @@
      --listen-port {{ network_listen_port_server }}
  when: not conf_file.stat.exists

- name: Get existing CIDRs from innernet-server database
  tags: [cidr]
  shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from cidrs;"'
  register: existing_cidrs

- name: Create new CIDRs
  tags: [cidr]
  shell: |
    innernet-server add-cidr "{{ network_name }}" \
      --name "{{ item.key }}" \
      --parent "{{ item.value.parent }}" \
      --cidr "{{ item.value.cidr }}" \
      --yes
  loop: "{{ cidrs | dict2items }}"
  when:
    - item.key not in existing_cidrs.stdout_lines

# Configure manually defined peers (mostly humans)
- name: Get existing peers from innernet-server database
  shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from peers;"'
  register: existing_peers
  run_once: true

- name: Add manually defined peers
  include_tasks: add_peer.yml
  vars:
    peer_name: "{{ item.key }}"
    peer_cidr: "{{ item.value.cidr }}"
    peer_admin: "{{ item.value.admin | default('false') }}"
    manual: true
  loop: "{{ manual_peers | dict2items }}"
  when:
    - item.key not in existing_peers.stdout_lines

- name: Enable firewall and allow SSH
  ufw:
    state: enabled
    default: deny
    to_port: 22
    rule: allow

- name: Allow UDP traffic on WireGuard port
  ufw:
    to_port: "{{ network_listen_port_server }}"
    rule: allow
    proto: udp

- name: Restart and enable innernet-server daemon
  tags: [update, listen_port]
  systemd:
    name: "innernet-server@{{ network_name }}"
    state: restarted
    enabled: yes
    daemon_reload: yes