Merge pull request 'Larger overhaul, generalisation and stabilisation' (#5) from overhaul into master

Reviewed-on: https://git.fsfe.org/fsfe-system-hackers/innernet-playbook/pulls/5
This commit is contained in:
Max Mehl 2022-03-04 11:53:35 +00:00
commit f9413a9b7a
10 changed files with 334 additions and 242 deletions

1
.dockerignore Normal file
View File

@ -0,0 +1 @@
*

3
.gitignore vendored
View File

@ -1,3 +1,2 @@
/roles/server/files/
/roles/client/files/
# peer invitation files
/roles/client/files/*.toml

View File

@ -16,8 +16,15 @@ There is a need for some of our servers to connect to other IPv6-only hosts. Sin
![An overview](fsfe-innernet.png)
You can learn more about innernet by looking at its [source code](./innernet-src) or reading this informative [blog post](https://blog.tonari.no/introducing-innernet) of its creator.
# Preparation
## Requirements
* A somewhat recent version of `ansible`
* `git`
## Clone the repo
```bash
@ -25,39 +32,47 @@ git clone --recurse-submodules git@git.fsfe.org:fsfe-system-hackers/innernet-pla
cd innernet-playbook
```
## Build binaries from submodule at `./innernet-src`
# Deployment
Since [innernet](https://github.com/tonarino/innernet) is new software, it is not yet included in the Debian repositories. Thus, before running the playbook we need to build the `innernet` and `innernet-server` binaries. At the moment, we are using `1.5.1`, but you can choose any other available version by setting the environment variable accordingly. Please also note that you need [`cargo-deb`](https://github.com/kornelski/cargo-deb) installed to successfully compile the Debian packages.
In the scope of this playbook and its roles, we have three different categories of computers:
```bash
./build-debs.sh
```
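The exact name of the version variable is defined in `build-debs.sh`; the snippet below assumes it is called `INNERNET_VERSION`, so please check the script before relying on it.

```bash
# assumption: build-debs.sh reads INNERNET_VERSION; adjust to the real variable name
INNERNET_VERSION=1.5.1 ./build-debs.sh
```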
1. The innernet server, being the central connector of all innernet peers
2. Automatically managed machines that are innernet peers, mostly VMs
3. Manually managed peers, for example admins and other humans
You can learn more about innernet by looking at its [source code](./innernet-src) or reading this informative [blog post](https://blog.tonari.no/introducing-innernet) of its creator.
## Configure server and all clients
## Preparing `ansible`
To ensure this playbook works on different machines, [pipenv](https://pipenv.pypa.io/en/latest/) is used to pin the version of `ansible`. So, to use the same version of Ansible that this playbook was tested with, simply run:
```bash
pipenv install --dev # for developing or
pipenv install # for simply running this playbook
pipenv shell
```
Now you should be in a shell that runs the correct versions of the `ansible` and `ansible-playbook` executables.
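To double-check that the pinned version is actually the one in use, you can run inside the `pipenv shell`:

```bash
ansible --version
ansible-playbook --version
```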
## Execution
### Run the playbook
Run the whole playbook to configure everything. The innernet server and the
automatically managed machines are configured completely; for each manually
managed peer you will be given an invitation file.
```bash
ansible-playbook deploy.yml
```
## Add a new machine
To add a machine, e.g. a virtual machine, to the network, follow these steps:
1. In the inventory, add the host to the `innernet_client` group (see the sketch below)
2. Run the playbook with `ansible-playbook -l newserver.org deploy.yml`
This configures the necessary parts on both the server and the new machine.
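For step 1, a sketch of what the inventory entry could look like, assuming a YAML inventory; the host name is just an example and the actual inventory layout in this repository may differ:

```yaml
all:
  children:
    innernet_client:
      hosts:
        newserver.org:
```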
## Add a new manually managed peer
In order to add a new human or otherwise manually managed innernet peer, run
these steps:
1. In `all.yml`, add a new entry under `manual_peers` (see the example below)
2. Run the playbook with `ansible-playbook -l innernet_server deploy.yml`
3. Install innernet and import the invitation file on the new peer's computer
   (see below). The invitation files are then located in `roles/client/files/`.
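For step 1, a new `manual_peers` entry in `all.yml` could look like the following; the peer name is only an example and, as noted in `all.yml`, the key is limited to alphanumeric characters and dashes:

```yaml
manual_peers:
  # ... existing peers ...
  jane-doe:
    cidr: others
    admin: false
```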
### Distribute the invitation files
Some invitation files are for humans, so you need to send these files to them
securely. We suggest using something like `wormhole`.
```bash
sudo apt install magic-wormhole
@ -65,8 +80,34 @@ cd roles/client/files
wormhole send <name_of_peer>.toml
```
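On the receiving side, the new peer fetches the file and imports it. A rough sketch, assuming the client `.deb` built by `./build-debs.sh` is available locally as `innernet.deb`:

```bash
# on the new peer's computer
sudo apt install magic-wormhole
sudo apt install ./innernet.deb       # client package built by build-debs.sh
wormhole receive                      # enter the code shown by the sender
sudo innernet install <name_of_peer>.toml
```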
### Associations
# Update
Please be aware that the `admins` CIDR [is associated](https://github.com/tonarino/innernet#adding-associations-between-cidrs) with all other CIDRs (i.e. `humans > others` and `machines`).
Since [innernet](https://github.com/tonarino/innernet) is new software, it is
not yet included in the Debian repositories. Thus, before running the playbook
we need to build the `innernet` and `innernet-server` binaries.
## Development
To switch to a newer version of innernet, follow these steps (see the combined example below):
1. Check out the desired tag in the `innernet-src` submodule
2. Run the build script: `./build-debs.sh`
3. Run the playbook with `ansible-playbook -t update deploy.yml`
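Combined into a single shell session (the tag `v1.6.0` is only a placeholder; pick whichever release you want):

```bash
cd innernet-src
git fetch --tags
git checkout v1.6.0   # placeholder tag
cd ..
./build-debs.sh
ansible-playbook -t update deploy.yml
```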
# Associations
The different CIDRs can have [associations](https://github.com/tonarino/innernet#adding-associations-between-cidrs), e.g. so that admins can access
machines, although they are not in the same subnet.
These have to be configured by an admin!
Currently, the `admins` CIDR is associated with all other CIDRs (i.e. `humans` >
`others` and `machines`).
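A sketch of how an admin would add such an association on the innernet server; the `add-association` subcommand exists, but check `innernet-server add-association --help` for the exact arguments of the installed version:

```bash
# run as root on the innernet server; prompts for the two CIDRs to associate
innernet-server add-association fsfe
```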
# Ansible tags
Some tags allow you to run only certain operations. Here are the currently
available ones (usage examples follow the list):
* `cidr`: configure CIDRs
* `update`: update the innernet binaries
* `listen_port`: edit/set the listen port between server and clients
* `uninstall`: delete innernet configuration and packages from systems
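For example (the host name in the second command is a placeholder):

```bash
# run only the CIDR configuration tasks
ansible-playbook -t cidr deploy.yml

# uninstall innernet from a single host; tasks tagged "never" only run when
# their tag is requested explicitly
ansible-playbook -t uninstall -l somehost.fsfe.org deploy.yml
```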

View File

@ -3,68 +3,36 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- hosts: innernet_server
- hosts: all
remote_user: root
tasks:
- name: Install needed packages
apt:
package:
- sqlite3
- pause:
prompt: "You are using a function to UNINSTALL innernet on the chosen hosts {{ play_hosts }}. Continue? (yes/no)?"
tags: [never, uninstall]
register: uninstall_confirm
delegate_to: localhost
run_once: yes
- name: Query innernet-server for CIDRs
shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from cidrs;"'
register: global_existing_cidrs
ignore_errors: true
- fail:
msg: Aborted uninstallation of innernet.
tags: [never, uninstall]
when: not uninstall_confirm.user_input | bool
delegate_to: localhost
run_once: yes
- name: CIDRs already registered on innernet-server
debug:
msg: "{{ item }}"
loop: "{{ global_existing_cidrs.stdout_lines }}"
- name: CIDRs defined in this playbook
debug:
msg: "{{ item.name }}"
loop: "{{ cidrs }}"
- name: These CIDRs have been added
debug:
msg: "{{ item.name }} is new!"
when: item.name not in global_existing_cidrs.stdout_lines
loop: "{{ cidrs }}"
- name: Query innernet-server for peers
shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from peers;"'
register: global_existing_peers
ignore_errors: true
- name: Peers already registered on innernet-server
debug:
msg: "{{ item }}"
loop: "{{ global_existing_peers.stdout_lines }}"
- name: Peers defined in this playbook
debug:
msg: "{{ item.name }}"
loop: "{{ peers }}"
- name: These peers have been added
debug:
msg: "{{ item.name }} is new!"
when: item.name not in global_existing_peers.stdout_lines
loop: "{{ peers }}"
- name: Get innernet-server hostname from inventory groups
set_fact:
# Assuming that we only have one innernet server, we take the first
# occurrence
innernet_server: "{{ groups['innernet_server'][0] }}"
run_once: true
- hosts: innernet_server
remote_user: root
vars:
existing_peers: "{{ global_existing_peers.stdout_lines }}"
existing_cidrs: "{{ global_existing_cidrs.stdout_lines }}"
roles:
- server
- hosts: innernet_client
remote_user: root
vars:
existing_peers: "{{ global_existing_peers.stdout_lines }}"
existing_cidrs: "{{ global_existing_cidrs.stdout_lines }}"
roles:
- client

View File

@ -5,45 +5,54 @@ network_name: "fsfe"
# 65,536 usable IP addresses
network_cidr: "10.200.0.0/16"
# wireguard listening port
network_listen_port: "51820"
network_listen_port_clients: "51820"
network_listen_port_server: "51820"
cidrs:
## humans
## 10.200.16.1 to 10.200.31.254
## 4,096 usable IP addresses
- { "parent": "fsfe", "name": "humans", "cidr": "10.200.16.0/20" }
humans:
parent: fsfe
cidr: 10.200.16.0/20
### humans > admins
### 10.200.16.1 to 10.200.19.254
### 1,024 usable IP addresses
- { "parent": "humans", "name": "admins", "cidr": "10.200.16.0/22" }
admins:
parent: humans
cidr: 10.200.16.0/22
### humans > others
### 10.200.20.1 to 10.200.23.254
### 1,024 usable IP addresses
- { "parent": "humans", "name": "others", "cidr": "10.200.20.0/22" }
others:
parent: humans
cidr: 10.200.20.0/22
## machines
## 10.200.64.1 to 10.200.127.254
## with 16,384 usable IP addresses
- { "parent": "fsfe", "name": "machines", "cidr": "10.200.64.0/18" }
machines:
parent: fsfe
cidr: 10.200.64.0/18
# humans > admins, e.g.
# - { "cidr": "admins", "name": "linus", "admin": "true" }
# humans > others, e.g.
# - { "cidr": "others", "name": "mk", "admin": "false" }
# - { "cidr": "others", "name": "fi", "admin": "false" }
# - { "cidr": "others", "name": "fani", "admin": "false" }
# machines, e.g.
# - { "cidr": "machines", "name": "cont1-plutex", "admin": "false" }
peers: "{{ peers_var|from_yaml }}"
peers_var: |
- { "cidr": "admins", "name": "linus", "admin": "true" }
- { "cidr": "admins", "name": "max-mehl", "admin": "true" }
- { "cidr": "admins", "name": "albert", "admin": "true" }
{% for host in groups['innernet_client'] %}
- {
"cidr": "machines",
"name": {{ host.replace('.', '-').replace('-fsfeurope-org', '').replace('-fsfe-org', '') }},
"admin": "false"
}
{% endfor %}
# name of the CIDR you want to use for the client role,
# i.e. for automatically configured peers (typically VMs)
machine_cidr: machines
# Peers that are configured manually, typically humans. The created invitation
# file will be stored on the controller machine and has to be imported on the
# person's computer manually.
# * the key (e.g. "linus") is limited to alphanumeric chars and dashes, no dots
# * "cidr" is the name of the CIDR the user shall belong to
# * "admin" defines whether peer should be an admin (true/false). Default: false
manual_peers:
linus:
cidr: admins
admin: true
max-mehl:
cidr: admins
admin: true
albert:
cidr: admins
admin: true

Binary file not shown.

View File

@ -3,91 +3,121 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- name: Install needed packages for uninstalling innernet
tags: [never, uninstall]
- name: Convert hostname to innernet peer name
# we only want the short host name without the FSFE domain suffix, e.g.
# * server1.fsfe.org -> server1
# * cont1.noris.fsfeurope.org -> cont1-noris
set_fact:
innernet_client: "{{ innernet_client | replace(item.0, item.1) }}"
vars:
- innernet_client: "{{ ansible_host }}"
loop:
- ['.', '-']
- ['-fsfeurope-org', '']
- ['-fsfe-org', '']
- ['-fsfe-be', '']
- name: Gather which packages are installed on the client
tags: [update, uninstall]
package_facts:
manager: auto
- name: Make sure needed packages for innernet and wireguard are installed
apt:
package:
- python3-pexpect
- rsync
- wireguard
- wireguard-tools
- ufw
- name: Remove existing innernet
- name: Remove existing innernet configuration on client
tags: [never, uninstall]
expect:
command: "innernet uninstall {{ network_name }}"
responses:
(?i)delete: "yes"
when: "'innernet' in ansible_facts.packages"
- name: Install needed packages
tags: [always, update]
- name: Remove innernet package on client
tags: [never, uninstall]
apt:
package:
- ufw
- rsync
- wireguard
- wireguard-tools
name: innernet
state: absent
purge: yes
when: "'innernet' in ansible_facts.packages"
- name: Copy package to host
- name: Install innernet package on client
tags: [update]
block:
- name: Copy innernet package to client
synchronize:
src: "innernet.deb"
dest: "/tmp/innernet.deb"
- name: Install package
tags: [update]
- name: Install innernet client package
apt:
deb: "/tmp/innernet.deb"
update_cache: true
install_recommends: true
# If 1. innernet not installed or 2. `update` tag executed
when: "'innernet' not in ansible_facts.packages or 'update' in ansible_run_tags"
- name: Copy non-admin invitation to hosts
tags: [new_peer]
synchronize:
src: "{{ item.name }}.toml"
dest: "/tmp/{{ item.name }}.toml"
- name: Get existing peers from innernet-server database
shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from peers;"'
register: existing_peers
delegate_to: "{{ innernet_server }}"
run_once: true
- name: Add machine as innernet peer
include_role:
name: server
tasks_from: add_peer
args:
apply:
delegate_to: "{{ innernet_server }}"
vars:
peer_name: "{{ innernet_client }}"
# Value of the CIDR we defined as the CIDR for machines
peer_cidr: "{{ machine_cidr }}"
# machines are never admins
peer_admin: "false"
when:
# is not existing
- item.name not in hostvars['kaim.fsfeurope.org'].global_existing_peers.stdout_lines
# only if filename contains a part of the hostname
- item.name in ansible_host|replace('.', '-')
loop: "{{ peers }}"
- innernet_client not in existing_peers.stdout_lines
- name: Install non-admin invitation on hosts
tags: [new_peer]
- name: Install innernet peer invitation on machine
block:
- name: Copy peer invitation file from controller to client
copy:
src: "{{ innernet_client }}.toml"
dest: "/root/{{ innernet_client }}.toml"
- name: Install peer invitation on client
shell: |
innernet install /tmp/{{ item.name }}.toml \
innernet install /root/{{ innernet_client }}.toml \
--default-name \
--delete-invite
when:
# is not existing
- item.name not in hostvars['kaim.fsfeurope.org'].global_existing_peers.stdout_lines
# only if filename contains a part of the hostname
- item.name in ansible_host|replace('.', '-')
loop: "{{ peers }}"
- innernet_client not in existing_peers.stdout_lines
- name: Set listen port
- name: Set listen port on client
tags: [listen_port]
community.general.ini_file:
path: "/etc/innernet/{{ network_name }}.conf"
section: interface
option: listen-port
value: "{{ network_listen_port }}"
mode: 600
backup: yes
shell: |
innernet set-listen-port {{ network_name }} \
-l {{ network_listen_port_clients }} \
--yes
- name: Allow UDP traffic on WireGuard port
tags: [listen_port, firewall]
tags: [listen_port]
ufw:
to_port: "{{ network_listen_port }}"
to_port: "{{ network_listen_port_clients }}"
rule: allow
proto: udp
- name: Just force systemd to reread configs (2.4 and above)
tags: [systemd, daemon]
ansible.builtin.systemd:
daemon_reload: yes
- name: Restart and enable innernet daemon
tags: [systemd, daemon]
ansible.builtin.systemd:
tags: [update, listen_port]
systemd:
name: "innernet@{{ network_name }}"
state: restarted
enabled: true
enabled: yes
daemon_reload: yes

Binary file not shown.

View File

@ -0,0 +1,46 @@
# SPDX-FileCopyrightText: 2021 Free Software Foundation Europe <https://fsfe.org>
#
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- name: Make sure peer invitation does not exist before creating a new one
file:
path: "/root/{{ peer_name }}.toml"
state: absent
- name: Add innernet peer on server
shell: |
innernet-server add-peer "{{ network_name }}" \
--name "{{ peer_name }}" \
--cidr "{{ peer_cidr }}" \
--admin "{{ peer_admin | lower }}" \
--save-config "/root/{{ peer_name }}.toml" \
--invite-expires "14d" \
--auto-ip \
--yes
throttle: 1
- name: Copy peer invitation file from server to controller
fetch:
src: "/root/{{ peer_name }}.toml"
dest: "{{ playbook_dir }}/roles/client/files/{{ peer_name }}.toml"
flat: yes
fail_on_missing: yes
- name: Delete peer invitation file on server
file:
path: "/root/{{ peer_name }}.toml"
state: absent
- name: Inform about invitation file
debug:
msg: "
{% if manual is defined and manual %}
ATTENTION! Now you have to install the peer invitation file for
{{ peer_name }} manually. You will find it here:
{% else %}
The peer invitation file has been downloaded to your computer. It will
be installed automatically on the machine, so if everything succeeded,
you can safely delete it here.
{% endif %}
{{ playbook_dir }}/roles/client/files/{{ peer_name }}.toml
"

View File

@ -3,48 +3,61 @@
# SPDX-License-Identifier: AGPL-3.0-or-later
---
- name: Install needed packages
tags: [never, uninstall]
- name: Gather which packages are installed on the server
tags: [update, uninstall]
package_facts:
manager: auto
- name: Make sure needed packages for innernet and wireguard are installed
apt:
package:
- python3-pexpect
- rsync
- sqlite3
- wireguard
- wireguard-tools
- ufw
- name: Remove existing innernet
- name: Remove existing innernet configuration
tags: [never, uninstall]
expect:
command: "innernet-server uninstall {{ network_name }}"
responses:
(?i)delete: "yes"
when: "'innernet-server' in ansible_facts.packages"
- name: Install needed packages
tags: [update]
- name: Remove innernet package on server
tags: [never, uninstall]
apt:
package:
- rsync
- wireguard
- wireguard-tools
name: innernet-server
state: absent
purge: yes
when: "'innernet-server' in ansible_facts.packages"
- name: Copy package to server
tags: [never, update]
- name: Install innernet package on server
tags: [update]
block:
- name: Copy innernet-server package to server
tags: [update]
synchronize:
src: "innernet-server.deb"
dest: "/tmp/innernet-server.deb"
- name: Install package
tags: [never, update]
- name: Install innernet-server package
tags: [update]
apt:
deb: "/tmp/innernet-server.deb"
update_cache: true
install_recommends: true
# If 1. innernet-server not installed or 2. `update` tag executed
when: "'innernet-server' not in ansible_facts.packages or 'update' in ansible_run_tags"
- name: Check if network is initialised
tags: [base]
- name: Check if innernet network is initialised
stat:
path: "/etc/innernet-server/{{ network_name }}.conf"
register: conf_file
- name: Create base network
tags: [base]
- name: Create base network if not existent yet
shell: |
innernet-server new \
--network-name "{{ network_name }}" \
@ -53,71 +66,56 @@
--listen-port {{ network_listen_port }}
when: not conf_file.stat.exists
- name: Create CIDRs
- name: Get existing CIDRs from innernet-server database
tags: [cidr]
shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from cidrs;"'
register: existing_cidrs
- name: Create new CIDRs
tags: [cidr]
shell: |
innernet-server add-cidr "{{ network_name }}" \
--parent "{{ item.parent }}" \
--name "{{ item.name }}" \
--cidr "{{ item.cidr }}" \
--name "{{ item.key }}" \
--parent "{{ item.value.parent }}" \
--cidr "{{ item.value.cidr }}" \
--yes
loop: "{{ cidrs }}"
loop: "{{ cidrs | dict2items }}"
when:
- item.name not in existing_cidrs
- item.key not in existing_cidrs.stdout_lines
- name: Create peers
tags: [peers]
shell: |
innernet-server add-peer "{{ network_name }}" \
--name "{{ item.name }}" \
--cidr "{{ item.cidr }}" \
--admin "{{ item.admin }}" \
--save-config "{{ item.name }}.toml" \
--invite-expires "14d" \
--auto-ip \
--yes
loop: "{{ peers }}"
# Configure manually defined peers (mostly humans)
- name: Get existing peers from innernet-server database
shell: 'sqlite3 /var/lib/innernet-server/{{ network_name }}.db "select name from peers;"'
register: existing_peers
run_once: true
- name: Add manually defined peers
include_tasks: add_peer.yml
vars:
peer_name: "{{ item.key }}"
peer_cidr: "{{ item.value.cidr }}"
peer_admin: "{{ item.value.admin | default('false') }}"
manual: true
loop: "{{ manual_peers | dict2items }}"
when:
- item.name not in existing_peers
- item.key not in existing_peers.stdout_lines
- name: Check for actual peer invitation files
tags: [peers]
shell: ls | grep .toml
register: toml_files
ignore_errors: true
- name: Enable firewall and allow SSH
ufw:
state: enabled
default: deny
to_port: 22
rule: allow
- name: Custom error message
tags: [peers]
fail:
msg: "Could not find any new invitation files. Have you added a new peer?"
when: toml_files.rc == 1
- name: Allow UDP traffic on WireGuard port
ufw:
to_port: "{{ network_listen_port_server }}"
rule: allow
- name: Copy invitation files of peers to controller
tags: [peers]
synchronize:
src: "/root/{{ item.name }}.toml"
dest: "{{ playbook_dir }}/roles/client/files/{{ item.name }}.toml"
mode: pull
when: toml_files.stdout.find(item.name) != -1
loop: "{{ peers }}"
- name: Make sure invitation files are deleted on innernet-server
tags: [peers]
file:
state: absent
path: "/root/{{ item.name }}.toml"
loop: "{{ peers }}"
when:
- item.name not in existing_peers
- name: Just force systemd to reread configs (2.4 and above)
tags: [systemd, daemon]
ansible.builtin.systemd:
daemon_reload: yes
- name: Enable innernet-server daemon
tags: [systemd, daemon]
- name: Restart and enable innernet-server daemon
tags: [update, listen_port]
systemd:
name: "innernet-server@{{ network_name }}"
state: restarted
enabled: true
enabled: yes
daemon_reload: yes