Trying this for every peer winds up being very slow and keeps the test
from completing in an acceptable time in CI, so reduce this to 4.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Now that we have parent pointers hooked up, we can simply go right to
the node and remove it in place, rather than having to recursively walk
the entire trie.
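As a hedged illustration of the idea (generic types, not the actual
allowedips trie), the parent pointer records the slot that references a
node, so a located node can be unlinked in place:
type node struct {
	parent **node // slot in the parent (or the root slot) that points at this node
	child  [2]*node
	value  interface{} // nil for purely structural nodes
}

// remove clears a node's value and, when it has at most one child,
// splices the node out by pointing its parent slot at that child.
func remove(n *node) {
	n.value = nil
	if n.child[0] != nil && n.child[1] != nil {
		return // still needed as an interior node
	}
	next := n.child[0]
	if next == nil {
		next = n.child[1]
	}
	if next != nil {
		next.parent = n.parent
	}
	*n.parent = next
}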
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This makes the insertion algorithm a bit more efficient, while also now
taking on the additional task of connecting up parent pointers. This
will be handy in the following commit.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Heavier network extensions might require the wireguard-go component to
use less RAM, so let users of this package reduce these as needed.
At some point we'll put this behind a configuration method of sorts, but
for now, just expose the consts as vars.
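A hedged sketch of what that looks like (names and values are
illustrative, not necessarily the actual constants):
var (
	QueueOutboundSize          = 1024
	QueueInboundSize           = 1024
	QueueHandshakeSize         = 1024
	PreallocatedBuffersPerPool = 0 // 0 leaves the pools unbounded
)
A memory-constrained network extension could then, for example, lower
device.QueueOutboundSize before constructing its Device.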
Requested-by: Josh Bleecher Snyder <josh@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
On Linux we can run `ip link del wg0`, in which case the fd becomes
stale, and we should exit. Since this is an intentional action, don't
treat it as an error.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
When debugging, it's useful to know why a receive func exited.
We were already logging that, but only in the "death spiral" case.
Move the logging up, to capture it always.
Reduce the verbosity, since it is not an error case any more.
Put the receive func name in the log line.
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
Note: this bug is "hidden" because commit 6228659 ("device: handle
broader range of errors in RoutineReceiveIncoming") avoids the "death
spiral" code path. If the code did reach the "death spiral" mechanism,
multiple double frees would occur. On iOS this results in a deadlock,
because the pools are fixed size and a goroutine may block until
somebody makes space in the pool.
This reproduced almost 100% of the time on the new ARM MacBooks:
- Build with the 'ios' tag for Mac. This will enable bounded pools.
- Somehow call device.IpcSet at least a couple of times (update config).
- device.BindUpdate() would be triggered.
- RoutineReceiveIncoming would enter the "death spiral".
- RoutineReceiveIncoming would stall on a double free (the pool is
  already full).
- The stuck routine would deadlock in 'device.closeBindLocked()' on the
  line 'netc.stopping.Wait()'.
Signed-off-by: Kristupas Antanavičius <kristupas.antanavicius@nordsec.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Instead of hard-coding exactly two sources from which
to receive packets (an IPv4 source and an IPv6 source),
allow the conn.Bind to specify a set of sources.
Beneficial consequences:
* If there's no IPv6 support on a system,
conn.Bind.Open can choose not to return a receive function for it,
which is simpler than tracking that state in the bind.
This simplification removes existing data races from both
conn.StdNetBind and bindtest.ChannelBind.
* If there are more than two sources on a system,
the conn.Bind no longer needs to add a separate muxing layer.
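A hedged sketch of the resulting shape (trimmed; Endpoint is the
existing conn.Endpoint, and the real Bind has more methods):
type ReceiveFunc func(b []byte) (n int, ep Endpoint, err error)

type Bind interface {
	// Open puts the Bind into a listening state on a given port and
	// returns one ReceiveFunc per packet source that is actually open.
	Open(port uint16) (fns []ReceiveFunc, actualPort uint16, err error)
	Close() error
}
A bind without IPv6 support simply omits that receive function, and a
bind with more than two sources just returns more of them.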
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
RoutineReceiveIncoming exits immediately on net.ErrClosed,
but not on other errors. However, for errors that are known
to be permanent, such as syscall.EAFNOSUPPORT,
we may as well exit immediately instead of retrying.
This considerably speeds up the package device tests right now,
because the Bind sometimes (incorrectly) returns syscall.EAFNOSUPPORT
instead of net.ErrClosed.
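A hedged helper sketch of the check (using errors.Is; not necessarily
the actual code):
func isPermanentReceiveError(err error) bool {
	return errors.Is(err, net.ErrClosed) || errors.Is(err, syscall.EAFNOSUPPORT)
}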
Signed-off-by: Josh Bleecher Snyder <josharian@gmail.com>
There's no way for len(peers)==0 when a current peer has
isRunning==false.
This requires some struct reshuffling so that the uint64 pointer is
aligned.
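For context, a hedged illustration of the constraint (generic struct,
not the actual one): sync/atomic's 64-bit operations need 8-byte
alignment, and on 32-bit platforms Go only guarantees that for the
first word of an allocated struct, so atomically accessed 64-bit
fields go first:
type stats struct {
	txBytes uint64 // accessed with sync/atomic; keep 64-bit fields first
	rxBytes uint64 // accessed with sync/atomic
	running bool
}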
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Googlers have a habit of graffiting their name in TODO items that then
are never addressed, and other people won't go near those because
they're marked territory of another animal. I've been gradually cleaning
these up as I see them, but this commit just goes all the way and
removes the remaining stragglers.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This code is stable, and the test is finicky, especially on high core
count systems, so just disable it.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This linked list implementation is awful, but maybe Go 2 will help
eventually, and at least we're not open coding the hlist any more.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
We're losing ownership of the port when bringing the device down, which
means another test process could reclaim it. Avoid this by retrying for
4 seconds.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Since we're never destroying these queues, we just set the variables at
creation time, rather than racing with Start().
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Since RoutineHandshake calls peer.SendKeepalive(), it potentially is a
writer into the encryption queue, so we need to bump the wg count.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
RoutineReadFromTUN can trigger a call to SendStagedPackets.
SendStagedPackets attempts to protect against sending
on the encryption queue by checking peer.isRunning and device.isClosed.
However, those are subject to TOCTOU bugs.
If that happens, we get this:
goroutine 1254 [running]:
golang.zx2c4.com/wireguard/device.(*Peer).SendStagedPackets(0xc000798300)
	.../wireguard-go/device/send.go:321 +0x125
golang.zx2c4.com/wireguard/device.(*Device).RoutineReadFromTUN(0xc000014780)
	.../wireguard-go/device/send.go:271 +0x21c
created by golang.zx2c4.com/wireguard/device.NewDevice
	.../wireguard-go/device/device.go:315 +0x298
Fix this with a simple, big hammer: Keep the encryption queue
alive as long as it might be written to.
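A hedged sketch of that hammer (simplified from what landed): pair the
channel with a WaitGroup of potential writers and close it only once
the last writer drops its reference:
type outboundQueue struct {
	c  chan *QueueOutboundElement
	wg sync.WaitGroup
}

func newOutboundQueue() *outboundQueue {
	q := &outboundQueue{
		c: make(chan *QueueOutboundElement, QueueOutboundSize),
	}
	q.wg.Add(1) // the device itself holds one reference
	go func() {
		q.wg.Wait()
		close(q.c)
	}()
	return q
}
Every goroutine that might write calls q.wg.Add(1) before starting and
q.wg.Done() once it can no longer write, so sends never race with close.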
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This serves two purposes.
First, it makes repeatedly stopping then starting a peer cheaper.
Second, it prevents a data race observed accessing the queues.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The high iteration count was useful when TestUpDown
was the nexus of new bugs to investigate.
Now that it has stabilized, that's less valuable.
And it slows down running the tests and crowds out other tests.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
On a many-core machine with the race detector enabled,
this test can take several minutes to complete.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It's never used and we won't have a use for it. Also, move to running
stringer via 'go run', for those without GOPATHs.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The test previously checked the offset within a substruct, not the
offset within the allocated struct, so this adds the two together.
It also fixes an alignment crash on 32-bit machines.
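A hedged sketch of the corrected check (test and field names assumed):
func TestPeerAlignment(t *testing.T) {
	var peer Peer
	// The offset that matters is relative to the start of the
	// allocation: outer field offset plus the field's offset within
	// the substruct.
	offset := unsafe.Offsetof(peer.stats) + unsafe.Offsetof(peer.stats.txBytes)
	if offset%8 != 0 {
		t.Errorf("peer.stats.txBytes is not 64-bit aligned: offset %d", offset)
	}
}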
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Part of being actually idempotent is that we shouldn't penalize code
that takes advantage of this property with a log splat.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Before, the code attached a finalizer to an object that wasn't returned,
resulting in immediate garbage collection. Instead return the actual
pointer.
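A hedged, generic illustration of the bug class (not the actual pool
code):
func newTracked() *bytes.Buffer {
	buf := new(bytes.Buffer)
	// Attach the finalizer to the pointer that is actually returned;
	// attaching it to some other local means that object becomes
	// unreachable immediately and is collected right away.
	runtime.SetFinalizer(buf, func(b *bytes.Buffer) { b.Reset() })
	return buf
}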
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Without this, we wind up freeing packets that the encryption/decryption
queues still have, resulting in a UaF.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The immediate motivation for this change is an observed deadlock.
1. A goroutine calls peer.Stop. That calls peer.queue.Lock().
2. Another goroutine is in RoutineSequentialReceiver.
It receives an elem from peer.queue.inbound.
3. The peer.Stop goroutine calls close(peer.queue.inbound),
close(peer.queue.outbound), and peer.stopping.Wait().
It blocks waiting for RoutineSequentialReceiver
and RoutineSequentialSender to exit.
4. The RoutineSequentialReceiver goroutine calls peer.SendStagedPackets().
SendStagedPackets attempts peer.queue.RLock().
That blocks forever because the peer.Stop
goroutine holds a write lock on that mutex.
A background motivation for this change is that it can be expensive
to have a mutex in the hot code path of RoutineSequential*.
The mutex was necessary to avoid attempting to send elems on a closed channel.
This commit removes that danger by never closing the channel.
Instead, we send a sentinel nil value on the channel to indicate
to the receiver that it should exit.
The only problem with this is that if the receiver exits,
we could write an elem into the channel which would never get received.
If it never gets received, it cannot get returned to the device pools.
To work around this, we use a finalizer. When the channel can be GC'd,
the finalizer drains any remaining elements from the channel and
restores them to the device pool.
After that change, peer.queue.RWMutex no longer makes sense where it is.
It is only used to prevent concurrent calls to Start and Stop.
Move it to a more sensible location and make it a plain sync.Mutex.
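A hedged sketch of the resulting queue wrapper (simplified; names
follow the device package):
type autodrainingInboundQueue struct {
	c chan *QueueInboundElement
}

func newAutodrainingInboundQueue(device *Device) *autodrainingInboundQueue {
	q := &autodrainingInboundQueue{
		c: make(chan *QueueInboundElement, QueueInboundSize),
	}
	// When the queue becomes unreachable, return anything left in it
	// to the device pools.
	runtime.SetFinalizer(q, device.flushInboundQueue)
	return q
}

func (device *Device) flushInboundQueue(q *autodrainingInboundQueue) {
	for {
		select {
		case elem := <-q.c:
			device.PutMessageBuffer(elem.buffer)
			device.PutInboundElement(elem)
		default:
			return
		}
	}
}
The receiver treats a nil *QueueInboundElement as the signal to exit,
so the channel itself never needs to be closed.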
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We have a bunch of stupid channel tricks, and I'm about to add more.
Give them their own file. This commit is 100% code movement.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
timersInit sets up the timers.
It need only be done once per peer.
timersStart does the work to prepare the timers
for a newly running peer. It needs to be done
every time a peer starts.
Separate the two and call them in the appropriate places.
This prevents data races on the peer's timers fields
when starting and stopping peers.
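A hedged sketch of the split (timer and field names illustrative):
func (peer *Peer) timersInit() {
	// Allocate the timers exactly once, when the peer is created.
	peer.timers.retransmitHandshake = peer.NewTimer(expiredRetransmitHandshake)
	peer.timers.sendKeepalive = peer.NewTimer(expiredSendKeepalive)
}

func (peer *Peer) timersStart() {
	// Reset only the per-run state, every time the peer starts.
	peer.timers.handshakeAttempts = 0
	peer.timers.needAnotherKeepalive = false
}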
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We already track this state elsewhere. No need to duplicate.
The cost of calling changeState is negligible.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The old code silently accepted negative MTUs.
It also set MTUs above the maximum.
It also had hard-to-follow, deeply nested conditionals.
Add more paranoid handling,
and make the code more straight-line.
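A hedged sketch of the straight-line shape (constant name assumed):
func clampMTU(mtu int) (int, error) {
	if mtu < 0 {
		return 0, errors.New("MTU must not be negative")
	}
	if mtu > MaxContentSize {
		return MaxContentSize, nil
	}
	return mtu, nil
}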
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The TUN event reader does three things: change the MTU, bring the device up, and bring the device down.
Changing the MTU after the device is closed does no harm.
Device up and device down don't make sense after the device is closed,
but we can check that condition before proceeding with changeState.
There's thus no reason to block device.Close on RoutineTUNEventReader exiting.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This commit simplifies device state management.
It creates a single unified state variable and documents its semantics.
It also makes state changes more atomic.
As an example of the sort of bug that occurred due to non-atomic state changes,
the following sequence of events used to occur approximately every 2.5 million test runs:
* RoutineTUNEventReader received an EventDown event.
* It called device.Down, which called device.setUpDown.
* That set device.state.changing, but did not yet attempt to lock device.state.Mutex.
* Test completion called device.Close.
* device.Close locked device.state.Mutex.
* device.Close blocked on a call to device.state.stopping.Wait.
* device.setUpDown then attempted to lock device.state.Mutex and blocked.
Deadlock results. setUpDown cannot progress because device.state.Mutex is locked.
Until setUpDown returns, RoutineTUNEventReader cannot call device.state.stopping.Done.
Until device.state.stopping.Done gets called, device.state.stopping.Wait is blocked.
As long as device.state.stopping.Wait is blocked, device.state.Mutex cannot be unlocked.
This commit fixes that deadlock by holding device.state.mu
when checking that the device is not closed.
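A hedged sketch of the unified variable (names assumed):
type deviceState uint32

const (
	// deviceStateDown is the initial state; the device is inactive.
	deviceStateDown deviceState = iota
	// deviceStateUp means the device is fully up and running.
	deviceStateUp
	// deviceStateClosed is terminal; a closed device never comes back.
	deviceStateClosed
)
The current value is read atomically, and all transitions happen while
holding device.state.Mutex.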
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It is no longer necessary, as of 454de6f3e64abd2a7bf9201579cd92eea5280996
(device: use channel close to shut down and drain decryption channel).
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The leak test had rare flakes.
If a system goroutine started at just the wrong moment, you'd get a false positive.
Instead of looping until the goroutines look good and then checking,
exit completely as soon as the number of goroutines looks good.
Also, check more frequently, in an attempt to complete faster.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Here is the old implementation:
type WaitPool struct {
	c chan interface{}
}

func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
	p := &WaitPool{c: make(chan interface{}, max)}
	for i := uint32(0); i < max; i++ {
		p.c <- new()
	}
	return p
}

func (p *WaitPool) Get() interface{} {
	return <-p.c
}

func (p *WaitPool) Put(x interface{}) {
	p.c <- x
}
It performs worse than the new one:
name old time/op new time/op delta
WaitPool-16 16.4µs ± 5% 15.1µs ± 3% -7.86% (p=0.008 n=5+5)
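For comparison, a hedged sketch of the shape the replacement takes
(details may differ from what actually landed): a sync.Pool for reuse,
gated by an atomic counter and a condition variable so that at most max
elements are outstanding at once:
type WaitPool struct {
	pool  sync.Pool
	cond  *sync.Cond
	lock  sync.Mutex
	count uint32 // accessed atomically
	max   uint32
}

func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
	p := &WaitPool{pool: sync.Pool{New: new}, max: max}
	p.cond = sync.NewCond(&p.lock)
	return p
}

func (p *WaitPool) Get() interface{} {
	if p.max != 0 {
		p.lock.Lock()
		for atomic.LoadUint32(&p.count) >= p.max {
			p.cond.Wait()
		}
		atomic.AddUint32(&p.count, 1)
		p.lock.Unlock()
	}
	return p.pool.Get()
}

func (p *WaitPool) Put(x interface{}) {
	p.pool.Put(x)
	if p.max == 0 {
		return
	}
	atomic.AddUint32(&p.count, ^uint32(0)) // decrement
	p.cond.Signal()
}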
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
benchmark old ns/op new ns/op delta
BenchmarkUAPIGet-16 2872 2157 -24.90%
benchmark old allocs new allocs delta
BenchmarkUAPIGet-16 30 18 -40.00%
benchmark old bytes new bytes delta
BenchmarkUAPIGet-16 737 256 -65.26%
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This moves to a simple queue with no routine processing it, to reduce
scheduler pressure.
This splits latency in half!
benchmark old ns/op new ns/op delta
BenchmarkThroughput-16 2394 2364 -1.25%
BenchmarkLatency-16 259652 120810 -53.47%
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This makes the IpcGet method much faster.
We also refactor the traversal API to use a callback so that we don't
need to allocate at all. To avoid allocations, we do self-masking on
insertion, which in turn means that split intermediate nodes require a
copy of the bits.
benchmark old ns/op new ns/op delta
BenchmarkUAPIGet-16 3243 2659 -18.01%
benchmark old allocs new allocs delta
BenchmarkUAPIGet-16 35 30 -14.29%
benchmark old bytes new bytes delta
BenchmarkUAPIGet-16 1218 737 -39.49%
This benchmark is good, though it's only for a pair of peers, each with
only one allowed IP. As this grows, the delta expands considerably.
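A hedged sketch of the callback-shaped traversal (signature and
internals assumed): the caller consumes each entry as the trie is
walked, so no intermediate slice is ever allocated:
func (table *AllowedIPs) EntriesForPeer(peer *Peer, cb func(ip net.IP, cidr uint8) bool) {
	table.mutex.RLock()
	defer table.mutex.RUnlock()
	table.IPv4.entriesForPeer(peer, cb)
	table.IPv6.entriesForPeer(peer, cb)
}
IpcGetOperation can then write each allowed_ip line straight from the
callback, without building a slice first.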
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
There are very few cases, if any, in which a user only wants one of
these levels, so combine them into a single level.
While we're at it, reduce indirection on the loggers by using an empty
function rather than a nil function pointer. It's not like we have
retpolines anyway, and prior to this we were always calling through a
function with a branch, so this seems like a net gain.
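A hedged sketch of the empty-function approach (name assumed):
// DiscardLogf is a no-op that stands in for a disabled level, so call
// sites never have to branch on a nil function pointer.
func DiscardLogf(format string, args ...interface{}) {}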
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This commit overhauls wireguard-go's logging.
The primary, motivating change is to use a function instead
of a *log.Logger as the basic unit of logging.
Using functions provides a lot more flexibility for
people to bring their own logging system.
It also introduces logging helper methods on Device.
These reduce line noise at the call site.
They also allow for log functions to be nil;
when nil, instead of generating a log line and throwing it away,
we don't bother generating it at all.
This spares allocation and pointless work.
This is a breaking change, although the fix required
of clients is fairly straightforward.
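A hedged sketch of the new unit of logging (close to, though not
necessarily identical to, the final type):
type Logger struct {
	Verbosef func(format string, args ...interface{})
	Errorf   func(format string, args ...interface{})
}
Callers can point these at log.Printf, at their own logging system, or
leave them nil to disable a level entirely.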
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The declaration of err in
	nextByte, err := buffered.ReadByte()
shadows the declaration of err in
	op, err := buffered.ReadString('\n')
above. As a result, the assignments to err in
	err = ipcErrorf(ipc.IpcErrorInvalid, "trailing character in UAPI get: %c", nextByte)
and in
	err = device.IpcGetOperation(buffered.Writer)
do not modify the correct err variable.
Found by staticcheck.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>