A playbook that sets up an internal WireGuard network using innernet (as declaratively as possible)

Motivation

There is a need for some of our servers to connect to other IPv6-only hosts. Since this is not always possible without introducing major pain points elsewhere, we simply create an internal WireGuard network so that the machines in question can communicate securely using IPv4.

An overview

You can learn more about innernet by looking at its source code or reading this informative blog post by its creator.

Preparation

Requirements

  • A somewhat recent version of Ansible
  • git
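
Any reasonably recent Ansible release should do. If your distribution does not package one, a common way to install it (just an example, not a project requirement) is via pip:

pip install --user ansible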

Clone the repo

git clone --recurse-submodules git@git.fsfe.org:fsfe-system-hackers/innernet-playbook.git
cd innernet-playbook
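
If you forgot --recurse-submodules when cloning, you can still fetch the innernet-src and inventory submodules afterwards:

git submodule update --init --recursive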

Deployment

In the scope of this playbook and its roles, we have three different categories of computers:

  1. The innernet server, being the central connector of all innernet peers
  2. Automatically managed machines that are innernet peers, mostly VMs
  3. Manually managed peers, for example admins and other humans

Configure server and all clients

Run the whole playbook to configure everything. The innernet server and automatically managed machines are handled fully automatically; for each manually managed peer, you will be given an invitation file.

ansible-playbook deploy.yml
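
If you want to preview the changes first, you can use Ansible's check mode (note that not every task may support it):

ansible-playbook --check deploy.yml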

Add a new machine

In order to add e.g. a virtual machine to the network, run these steps:

  1. In the inventory, add the host to the innernet_client group (see the sketch below)
  2. Run the playbook with ansible-playbook -l newserver.org deploy.yml

This will configure both the necessary parts on the server and the new machine.
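
For illustration, the inventory change might look like the following INI-style sketch; the actual layout of the inventory submodule may differ, and newserver.org is just the example host from above:

[innernet_client]
newserver.org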

Add a new manually managed peer

In order to add a new human or otherwise manually managed innernet peer, run these steps:

  1. In all.yml, add a new entry to manual_peers (see the sketch below)
  2. Run the playbook with ansible-playbook -l innernet_server deploy.yml
  3. Install innernet on the new peer's computer and import the invitation file (see below). The invitation files are then located in roles/client/files/.
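
As a purely hypothetical sketch, a manual_peers entry in all.yml could look like the following; the field names here are assumptions, so copy the structure of an existing entry instead:

manual_peers:
  - name: jane_doe   # hypothetical peer name
    cidr: admins     # hypothetical CIDR to place the peer in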

Distribute the invitation files

Some invitation files are for humans, so you need to send these files to them securely. We suggest using something like Magic Wormhole.

sudo apt install magic-wormhole
cd roles/client/files
wormhole send <name_of_peer>.toml
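
On the receiving end, the peer installs the same tool and accepts the transfer by entering the code you tell them:

wormhole receive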

Update

Since innernet is relatively new software, it is not yet included in the Debian repositories. Thus, before running the playbook, we need to build the innernet and innernet-server binaries ourselves.

In order to switch to a newer version of innernet, run the following steps (see the combined example below):

  1. Check out the desired tag in the innernet-src submodule
  2. Run the build script: ./build-debs.sh
  3. Run the playbook with ansible-playbook -t update deploy.yml
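
Put together, an update might look like this; the tag below is just an example, so check out whichever release you actually want:

cd innernet-src
git fetch --tags
git checkout v1.5.4   # example tag, pick the desired release
cd ..
./build-debs.sh
ansible-playbook -t update deploy.yml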

Associations

The different CIDRs can have associations, e.g. so that admins can access machines even though they are not in the same subnet.

These have to be configured by an admin!

Currently, the admins CIDR is associated with all other CIDRs, i.e. human admins can reach all other peers and machines.
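
Associations are created with innernet's own CLI by an admin peer. As a sketch, assuming the subcommand from innernet's upstream documentation and using fsfe as a placeholder interface name:

sudo innernet add-association fsfe

The command then prompts for the CIDRs to associate (innernet's CLI is generally interactive).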

Ansible tags

Some tags allow you to run only certain operations. These are the currently available ones:

  • cidr: configure CIDRs
  • update: update the innernet binaries
  • listen_port: edit/set the listen port between server and clients
  • uninstall: delete innernet configuration and packages from systems
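
For example, to only (re)configure CIDRs and skip everything else:

ansible-playbook -t cidr deploy.yml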