Since RoutineHandshake calls peer.SendKeepalive(), it is potentially a
writer into the encryption queue, so we need to bump the wg count.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
RoutineReadFromTUN can trigger a call to SendStagedPackets.
SendStagedPackets attempts to protect against sending
on the encryption queue by checking peer.isRunning and device.isClosed.
However, those are subject to TOCTOU bugs.
If that race is lost, we get this:
goroutine 1254 [running]:
golang.zx2c4.com/wireguard/device.(*Peer).SendStagedPackets(0xc000798300)
.../wireguard-go/device/send.go:321 +0x125
golang.zx2c4.com/wireguard/device.(*Device).RoutineReadFromTUN(0xc000014780)
.../wireguard-go/device/send.go:271 +0x21c
created by golang.zx2c4.com/wireguard/device.NewDevice
.../wireguard-go/device/device.go:315 +0x298
Fix this with a simple, big hammer: Keep the encryption queue
alive as long as it might be written to.
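A minimal sketch of the idea, with mostly hypothetical names: every potential writer holds a reference on the queue, and the channel is closed only after the last reference is dropped, so closing can never race with a send.

type outboundQueue struct {
    c  chan *QueueOutboundElement
    wg sync.WaitGroup
}

func newOutboundQueue() *outboundQueue {
    q := &outboundQueue{c: make(chan *QueueOutboundElement, 1024)}
    q.wg.Add(1) // reference held on behalf of the queue's owner
    go func() {
        q.wg.Wait() // every writer has called Done
        close(q.c)  // closing can no longer race with a send
    }()
    return q
}

// Any routine that might write to q.c calls q.wg.Add(1) before it starts
// and q.wg.Done() once it can no longer write.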
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This serves two purposes.
First, it makes repeatedly stopping then starting a peer cheaper.
Second, it prevents a data race observed accessing the queues.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The high iteration count was useful when TestUpDown
was the nexus of new bugs to investigate.
Now that it has stabilized, that's less valuable.
And it slows down running the tests and crowds out other tests.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
On a many-core machine with the race detector enabled,
this test can take several minutes to complete.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It's never used and we won't have a use for it. Also, move to running
stringer via go run, for those without GOPATHs.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The test previously checked the offset within a substruct, not the
offset within the allocated struct, so this adds the two together.
It then fixes an alignment crash on 32-bit machines.
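For illustration (field names invented), the corrected check sums the nested offsets, because unsafe.Offsetof is relative to the innermost struct:

// unsafe.Offsetof(p.stats.rxBytes) is the offset within p.stats,
// not within p, so the enclosing field's offset must be added:
offset := unsafe.Offsetof(p.stats) + unsafe.Offsetof(p.stats.rxBytes)
if offset%8 != 0 {
    t.Errorf("rxBytes at offset %d is not 64-bit aligned; atomic access would crash on 32-bit machines", offset)
}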
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Part of being actually idempotent is that we shouldn't penalize code
that takes advantage of this property with a log splat.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Before, the code attached a finalizer to an object that wasn't returned,
resulting in immediate garbage collection. Instead return the actual
pointer.
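In sketch form (names invented), the pattern is:

func newPool() *pool {
    p := &pool{}
    // Attach the finalizer to the pointer that is actually returned;
    // a temporary that nothing references gets collected (and finalized)
    // almost immediately.
    runtime.SetFinalizer(p, func(p *pool) { p.release() })
    return p
}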
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Without this, we wind up freeing packets that the encryption/decryption
queues still have, resulting in a use-after-free (UaF).
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The immediate motivation for this change is an observed deadlock.
1. A goroutine calls peer.Stop. That calls peer.queue.Lock().
2. Another goroutine is in RoutineSequentialReceiver.
It receives an elem from peer.queue.inbound.
3. The peer.Stop goroutine calls close(peer.queue.inbound),
close(peer.queue.outbound), and peer.stopping.Wait().
It blocks waiting for RoutineSequentialReceiver
and RoutineSequentialSender to exit.
4. The RoutineSequentialReceiver goroutine calls peer.SendStagedPackets().
SendStagedPackets attempts peer.queue.RLock().
That blocks forever because the peer.Stop
goroutine holds a write lock on that mutex.
A background motivation for this change is that it can be expensive
to have a mutex in the hot code path of RoutineSequential*.
The mutex was necessary to avoid attempting to send elems on a closed channel.
This commit removes that danger by never closing the channel.
Instead, we send a sentinel nil value on the channel to indicate
to the receiver that it should exit.
The only problem with this is that if the receiver exits,
we could write an elem into the channel which would never get received.
If it never gets received, it cannot get returned to the device pools.
To work around this, we use a finalizer. When the channel can be GC'd,
the finalizer drains any remaining elements from the channel and
restores them to the device pool.
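A sketch of the combination (type and method names are illustrative, not necessarily the real ones):

type autodrainingInboundQueue struct {
    c chan *QueueInboundElement
}

func newAutodrainingInboundQueue(device *Device) *autodrainingInboundQueue {
    q := &autodrainingInboundQueue{c: make(chan *QueueInboundElement, queueSize)}
    // Once nothing references the queue anymore, return any stranded
    // elements to the device pools.
    runtime.SetFinalizer(q, func(q *autodrainingInboundQueue) {
        for {
            select {
            case elem := <-q.c:
                if elem != nil { // nil is the "please exit" sentinel
                    device.PutMessageBuffer(elem.buffer)
                    device.PutInboundElement(elem)
                }
            default:
                return
            }
        }
    })
    return q
}

Receivers then exit on the sentinel instead of on channel closure:

for elem := range q.c {
    if elem == nil {
        return
    }
    // process elem
}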
After that change, peer.queue.RWMutex no longer makes sense where it is.
It is only used to prevent concurrent calls to Start and Stop.
Move it to a more sensible location and make it a plain sync.Mutex.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We have a bunch of stupid channel tricks, and I'm about to add more.
Give them their own file. This commit is 100% code movement.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
timersInit sets up the timers.
It need only be done once per peer.
timersStart does the work to prepare the timers
for a newly running peer. It needs to be done
every time a peer starts.
Separate the two and call them in the appropriate places.
This prevents data races on the peer's timers fields
when starting and stopping peers.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
We already track this state elsewhere. No need to duplicate.
The cost of calling changeState is negligible.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The old code silently accepted negative MTUs.
It also set MTUs above the maximum.
It also had hard-to-follow, deeply nested conditionals.
Add more paranoid handling,
and make the code more straight-line.
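Roughly the shape of the straight-line version (bounds and names are placeholders):

func clampMTU(mtu, old, max int) int {
    if mtu < 0 {
        // a negative MTU is nonsense; keep the old value
        return old
    }
    if mtu > max {
        // never exceed what the device supports
        return max
    }
    return mtu
}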
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The TUN event reader handles three things: MTU changes, device up, and device down.
Changing the MTU after the device is closed does no harm.
Device up and device down don't make sense after the device is closed,
but we can check that condition before proceeding with changeState.
There's thus no reason to block device.Close on RoutineTUNEventReader exiting.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This commit simplifies device state management.
It creates a single unified state variable and documents its semantics.
It also makes state changes more atomic.
As an example of the sort of bug that occurred due to non-atomic state changes,
the following sequence of events used to occur approximately every 2.5 million test runs:
* RoutineTUNEventReader received an EventDown event.
* It called device.Down, which called device.setUpDown.
* That set device.state.changing, but did not yet attempt to lock device.state.Mutex.
* Test completion called device.Close.
* device.Close locked device.state.Mutex.
* device.Close blocked on a call to device.state.stopping.Wait.
* device.setUpDown then attempted to lock device.state.Mutex and blocked.
Deadlock results. setUpDown cannot progress because device.state.Mutex is locked.
Until setUpDown returns, RoutineTUNEventReader cannot call device.state.stopping.Done.
Until device.state.stopping.Done gets called, device.state.stopping.Wait is blocked.
As long as device.state.stopping.Wait is blocked, device.state.Mutex cannot be unlocked.
This commit fixes that deadlock by holding device.state.mu
when checking that the device is not closed.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
It is no longer necessary, as of 454de6f3e64abd2a7bf9201579cd92eea5280996
(device: use channel close to shut down and drain decryption channel).
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The leak test had rare flakes.
If a system goroutine started at just the wrong moment, you'd get a false positive.
Instead of looping until the goroutines look good and then checking,
exit completely as soon as the number of goroutines looks good.
Also, check more frequently, in an attempt to complete faster.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Here is the old implementation:
type WaitPool struct {
    c chan interface{}
}

func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
    p := &WaitPool{c: make(chan interface{}, max)}
    for i := uint32(0); i < max; i++ {
        p.c <- new()
    }
    return p
}

func (p *WaitPool) Get() interface{} {
    return <-p.c
}

func (p *WaitPool) Put(x interface{}) {
    p.c <- x
}
It performs worse than the new one:
name old time/op new time/op delta
WaitPool-16 16.4µs ± 5% 15.1µs ± 3% -7.86% (p=0.008 n=5+5)
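The replacement gates a sync.Pool behind a counter and a sync.Cond, so Get still blocks once max elements are outstanding. Roughly (the repository holds the authoritative version):

type WaitPool struct {
    pool  sync.Pool
    cond  sync.Cond
    lock  sync.Mutex
    count uint32
    max   uint32
}

func NewWaitPool(max uint32, new func() interface{}) *WaitPool {
    p := &WaitPool{pool: sync.Pool{New: new}, max: max}
    p.cond = sync.Cond{L: &p.lock}
    return p
}

func (p *WaitPool) Get() interface{} {
    if p.max != 0 {
        p.lock.Lock()
        for atomic.LoadUint32(&p.count) >= p.max {
            p.cond.Wait()
        }
        atomic.AddUint32(&p.count, 1)
        p.lock.Unlock()
    }
    return p.pool.Get()
}

func (p *WaitPool) Put(x interface{}) {
    p.pool.Put(x)
    if p.max == 0 {
        return
    }
    atomic.AddUint32(&p.count, ^uint32(0)) // decrement
    p.cond.Signal()
}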
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
benchmark old ns/op new ns/op delta
BenchmarkUAPIGet-16 2872 2157 -24.90%
benchmark old allocs new allocs delta
BenchmarkUAPIGet-16 30 18 -40.00%
benchmark old bytes new bytes delta
BenchmarkUAPIGet-16 737 256 -65.26%
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This moves to a simple queue with no routine processing it, to reduce
scheduler pressure.
This splits latency in half!
benchmark old ns/op new ns/op delta
BenchmarkThroughput-16 2394 2364 -1.25%
BenchmarkLatency-16 259652 120810 -53.47%
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This makes the IpcGet method much faster.
We also refactor the traversal API to use a callback so that we don't
need to allocate at all. To avoid allocations, we do self-masking on
insertion, which in turn means that split intermediate nodes require a
copy of the bits.
benchmark old ns/op new ns/op delta
BenchmarkUAPIGet-16 3243 2659 -18.01%
benchmark old allocs new allocs delta
BenchmarkUAPIGet-16 35 30 -14.29%
benchmark old bytes new bytes delta
BenchmarkUAPIGet-16 1218 737 -39.49%
This benchmark is good, though it's only for a pair of peers, each with
only one allowedips entry. As this grows, the delta expands considerably.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
There are very few cases, if any, in which a user only wants one of
these levels, so combine them into a single level.
While we're at it, reduce indirection on the loggers by using an empty
function rather than a nil function pointer. It's not like we have
retpolines anyway, and we were always calling through a function with a
branch prior, so this seems like a net gain.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
This commit overhauls wireguard-go's logging.
The primary, motivating change is to use a function instead
of a *log.Logger as the basic unit of logging.
Using functions provides a lot more flexibility for
people to bring their own logging system.
It also introduces logging helper methods on Device.
These reduce line noise at the call site.
They also allow for log functions to be nil;
when nil, instead of generating a log line and throwing it away,
we don't bother generating it at all.
This spares allocation and pointless work.
This is a breaking change, although the fix required
of clients is fairly straightforward.
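Sketched, with the helper name invented for illustration:

type Logger struct {
    Verbosef func(format string, args ...interface{})
    Errorf   func(format string, args ...interface{})
}

// A helper on Device keeps call sites short and does no work at all
// when the corresponding log function is nil.
func (device *Device) verbosef(format string, args ...interface{}) {
    if device.log != nil && device.log.Verbosef != nil {
        device.log.Verbosef(format, args...)
    }
}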
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The declaration of err in
nextByte, err := buffered.ReadByte()
shadows the declaration of err in
op, err := buffered.ReadString('\n')
above. As a result, the assignments to err in
err = ipcErrorf(ipc.IpcErrorInvalid, "trailing character in UAPI get: %c", nextByte)
and in
err = device.IpcGetOperation(buffered.Writer)
do not modify the correct err variable.
Found by staticcheck.
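The shape of the fix (not necessarily the exact committed code) is to stop redeclaring err in the inner scope:

// before: ":=" introduces a new err that shadows the outer one
nextByte, err := buffered.ReadByte()

// after: reuse the outer err
var nextByte byte
nextByte, err = buffered.ReadByte()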
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Plenty more to go, but a start:
name old time/op new time/op delta
UAPIGet-4 6.37µs ± 2% 5.56µs ± 1% -12.70% (p=0.000 n=8+8)
name old alloc/op new alloc/op delta
UAPIGet-4 1.98kB ± 0% 1.22kB ± 0% -38.71% (p=0.000 n=10+10)
name old allocs/op new allocs/op delta
UAPIGet-4 42.0 ± 0% 35.0 ± 0% -16.67% (p=0.000 n=10+10)
Signed-off-by: Brad Fitzpatrick <bradfitz@tailscale.com>
Unify the handling of unexpected UAPI errors.
The comment that says "should never happen" is incorrect;
this could happen due to I/O errors. Correct it.
Change error message capitalization for consistency.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The goal of this change is to make the structure
of IpcSetOperation easier to follow.
IpcSetOperation contains a small state machine:
It starts by configuring the device,
then shifts to configuring one peer at a time.
Having the code all in one giant method obscured that structure.
Split out the parts into helper functions and encapsulate the peer state.
This makes the overall structure more apparent.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Expand IPCError to contain a wrapped error,
and add a helper to make constructing such errors easier.
Add a defer-based "log on returned error" to IpcSetOperation.
This lets us simplify all of the error return paths.
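Approximately (field and helper names are illustrative):

type IPCError struct {
    code int64
    err  error
}

func (e IPCError) Error() string { return e.err.Error() }
func (e IPCError) Unwrap() error { return e.err }

func ipcErrorf(code int64, format string, args ...interface{}) *IPCError {
    return &IPCError{code: code, err: fmt.Errorf(format, args...)}
}

func (device *Device) IpcSetOperation(r io.Reader) (err error) {
    defer func() {
        if err != nil {
            device.errorf("%v", err) // one logging point for every error path
        }
    }()
    // parse and apply the configuration, returning ipcErrorf(...) on failure
    return nil
}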
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Until we depend on Go 1.16 (which isn't released yet), alias our own
variable to the private member of the net package. This will allow an
easy find replace to make this go away when we eventually switch to
1.16.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Now that we block when enqueueing to the decryption queue,
there is only one case in which we "drop" an inbound element:
when decryption fails.
We can use a simple, obvious, sync-free sentinel for that, elem.packet == nil.
Also, we can return the message buffer to the pool slightly later,
which further simplifies the code.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is similar to commit e1fa1cc556,
but for the decryption channel.
It is an alternative fix to f9f655567930a4cd78d40fa4ba0d58503335ae6a.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Some users report seeing lines like this popping up unexpectedly:
> Routine: receive incoming IPv4 - stopped
Let's sleep and try again before failing, and also log the error, so that
perhaps we'll eventually understand this situation better in future
versions.
Because we have to distinguish between the socket being closed
explicitly and whatever error this is, we bump the module to require Go
1.16.
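With Go 1.16, the explicit-shutdown case can be detected via errors.Is(err, net.ErrClosed). A sketch of the receive loop (helper names invented):

type receiveFunc func(buf []byte) (int, error)

func (device *Device) routineReceive(recv receiveFunc, buf []byte) {
    for {
        _, err := recv(buf)
        if err != nil {
            if errors.Is(err, net.ErrClosed) {
                return // the socket was closed on purpose; exit quietly
            }
            device.log.Errorf("Failed to receive packet: %v", err)
            time.Sleep(time.Second) // possibly transient: log, sleep, retry
            continue
        }
        // handle the packet
    }
}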
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
It's possible for RoutineSequentialReceiver to try to lock an elem after
RoutineDecryption has exited. Previously, that meant we never unlocked
the elem, so the whole program deadlocked.
As well, it looks like the flush code (which is now potentially
unnecessary?) wasn't properly dropping the buffers for the
not-already-dropped case.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
These obviously don't perfectly capture real world performance,
in which syscalls and network links have a significant impact.
Nevertheless, they capture some of the internal performance factors,
and they're easy and convenient to work with.
Hat tip to Avery Pennarun for help designing the throughput benchmark.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
One of the first rules of WaitGroups is that you call wg.Add
outside of a goroutine, not inside it. Fix this embarrassing mistake.
This prevents an extremely rare race condition (2 per 100,000 runs)
which could occur when attempting to start a new peer
concurrently with shutting down a device.
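For the record, the correct pattern is:

// wrong: Wait may return before the goroutine has had a chance to Add
go func() {
    wg.Add(1)
    defer wg.Done()
    // ... work ...
}()

// right: account for the goroutine before launching it
wg.Add(1)
go func() {
    defer wg.Done()
    // ... work ...
}()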
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is particularly problematic on mobile,
where there is a fixed number of elements.
If most of them leak, it'll impact performance;
if all of them leak, the device will permanently deadlock.
I have a test that detects element leaks, which is how I found this one.
There are some remaining leaks that I have not yet tracked down,
but this is the most prominent by far.
I will commit the test when it passes reliably.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
bufio is not required.
strings.Builder is cheaper than bytes.Buffer for constructing strings.
io.Writer is more flexible than io.StringWriter,
and just as cheap (when used with io.WriteString).
Run gofmt.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This was shifted by 2 bytes when making persistent keepalive into a u32.
Fix it by placing it after the aligned region.
Fixes: e739ff7 ("device: fix persistent_keepalive_interval data races")
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
People are actually hitting this condition, so make it uniform. Also,
change a printf into a println, to match the other conventions.
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Found by the race detector and existing tests.
To avoid introducing a lock into this hot path,
calculate and cache whether any peers exist.
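A sketch of the caching (the flag's field name is invented): update it under the peers lock wherever peers are added or removed, and read it atomically in the hot path.

func (device *Device) refreshHasPeers() {
    // called with device.peers.Lock() held
    var v uint32
    if len(device.peers.keyMap) > 0 {
        v = 1
    }
    atomic.StoreUint32(&device.peers.hasAny, v)
}

func (device *Device) hasPeers() bool {
    return atomic.LoadUint32(&device.peers.hasAny) != 0 // lock-free read
}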
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Access keypair.sendNonce atomically.
Eliminate one unnecessary initialization to zero.
Mutate handshake.lastSentHandshake with the mutex held.
Co-authored-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
They're called elem in most places.
Rename a few local variables to make it consistent.
This makes it easier to grep the code for things like elem.Drop.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This is a similar treatment to the handling of the encryption
channel found a few commits ago: Use the closing of the channel
to manage goroutine lifetime and shutdown.
It is considerably simpler because there is only a single writer.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
The new test introduced in this commit used to deadlock about 1% of the time.
I believe that the deadlock occurs as follows:
* The test completes, calling device.Close.
* device.Close closes device.signals.stop.
* RoutineEncryption stops.
* The deferred function in RoutineEncryption drains device.queue.encryption.
* RoutineEncryption exits.
* A peer's RoutineNonce processes an element queued in peer.queue.nonce.
* RoutineNonce puts that element into the outbound and encryption queues.
* RoutineSequentialSender reads that element from the outbound queue.
* It waits for that element to get Unlocked by RoutineEncryption.
* RoutineEncryption has already exited, so RoutineSequentialSender blocks forever.
* device.RemoveAllPeers calls peer.Stop on all peers.
* peer.Stop waits for peer.routines.stopping, which blocks forever.
Rather than attempt to add even more ordering to the already complex
centralized shutdown orchestration, this commit moves towards a
data-flow-oriented shutdown.
The device.queue.encryption gets closed when there will be no more writes to it.
All device.queue.encryption readers always read until the channel is closed and then exit.
We thus guarantee that any element that enters the encryption queue also exits it.
This removes the need for central control of the lifetime of RoutineEncryption,
removes the need to drain the encryption queue on shutdown, and simplifies RoutineEncryption.
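In rough sketch form, every reader of the encryption queue becomes a plain range loop:

func (device *Device) RoutineEncryption() {
    for elem := range device.queue.encryption {
        // encrypt elem, then hand it to the peer's outbound queue
    }
    // The channel is closed only after the last writer is gone,
    // so reaching this point means nothing can be stranded in it.
}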
This commit also fixes a data race. When RoutineSequentialSender
drains its queue on shutdown, it needs to lock the elem before operating on it,
just as the main body does.
The new test in this commit passed 50k iterations with the race detector enabled
and 150k iterations with the race detector disabled, with no failures.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Since we already have it packed into a uint64
in a known byte order, write it back out again
the same byte order instead of copying byte by byte.
This should also generate more efficient code,
because the compiler can do a single uint64 write,
instead of eight bounds checks and eight byte writes.
Due to a missed optimization, it actually generates a mishmash
of smaller writes: 1 byte, 4 bytes, 2 bytes, 1 byte.
This is https://golang.org/issue/41663.
The code is still better than before, and will get better yet
once that compiler bug gets fixed.
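That is, roughly (buffer and offset names are illustrative):

// before: eight bounds checks and eight single-byte stores
for i := 0; i < 8; i++ {
    buf[offset+i] = byte(counter >> (8 * uint(i)))
}

// after: one little-endian store
binary.LittleEndian.PutUint64(buf[offset:], counter)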
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This makes it easier to work with configs in tests.
It'll see heavier use over upcoming commits;
this commit only adds the infrastructure.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Any io.Reader will do, and there are no performance concerns here.
This is technically backwards incompatible,
but it is very unlikely to break any existing code.
It is compatible with the existing uses in wireguard-{windows,android,apple}
and also will allow us to slightly simplify it if desired.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
When running many concurrent test processes using
https://godoc.org/golang.org/x/tools/cmd/stress
the test sometimes cannot complete a ping in under 300ms.
Increase the timeout to 5s to reduce the rate of false positives.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
When closing a device, packets that are in flight
can make it to SendBuffer, which then returns an error.
Those errors add noise but no light;
they do not reflect an actual problem.
Adding the synchronization required to prevent
this from occurring is currently expensive and error-prone.
Instead, quietly drop such packets instead of
returning an error.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
In each case, the starting waitgroup did nothing but ensure
that the goroutine has launched.
Nothing downstream depends on the order in which goroutines launch,
and if the Go runtime scheduler is so broken that goroutines
don't get launched reasonably promptly, we have much deeper problems.
Given all that, simplify the code.
Passed a race-enabled stress test 25,000 times without failure.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
Picking two free ports to use for a test is difficult.
The free port we selected might no longer be free when we reach
for it a second time.
On my machine, this failure mode led to failures approximately
once per thousand test runs.
Since failures are rare, and threading through and checking for
all possible errors is complicated, fix this with a big hammer:
Retry if either device fails to come up.
Also, if you accidentally pick the same port twice, delightful confusion ensues.
The handshake failures manifest as crypto errors, which look scary.
Again, fix with retries.
To make these retries easier to implement, use testing.T.Cleanup
instead of defer to close devices. This requires Go 1.14.
Update go.mod accordingly. Go 1.13 is no longer supported anyway.
With these fixes, 'go test -race' ran 100,000 times without failure.
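A sketch of the retry loop (helper names invented; the real tests differ in detail):

func newTestDevice(t *testing.T) (*Device, error) {
    tun := newTestTUN()                  // hypothetical TUN stand-in
    dev := NewDevice(tun, testLogger(t)) // constructor signature approximate
    t.Cleanup(dev.Close)                 // runs even if a later attempt fails
    return dev, bringUp(dev)             // hypothetical: configure and set the device up
}

func genTestPair(t *testing.T) (*Device, *Device) {
    for attempt := 1; attempt <= 10; attempt++ {
        d1, err1 := newTestDevice(t)
        d2, err2 := newTestDevice(t)
        if err1 == nil && err2 == nil {
            return d1, d2
        }
        t.Logf("attempt %d: devices failed to come up (%v, %v); retrying", attempt, err1, err2)
    }
    t.Fatal("test devices never came up")
    return nil, nil
}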
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
This doesn't cause any practical problems as it is,
but vet (rightly) flags this code as copying a mutex.
It is easy to fix, so do so.
Signed-off-by: Josh Bleecher Snyder <josh@tailscale.com>
fix panic: send on closed channel when removing a peer
Signed-off-by: Haichao Liu <liuhaichao@bytedance.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Both wireguard-windows and wireguard-android access Bind
directly for these methods now.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Use the RTMGRP_IPV4_ROUTE const from x/sys/unix instead of using the
corresponding RTNLGRP_IPV4_ROUTE const to create the multicast groups
mask.
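The two constant families are related by mask = 1 << (group - 1), so the change is essentially:

// before: build the mask by hand from the RTNLGRP_* group number
groups := uint32(1 << (unix.RTNLGRP_IPV4_ROUTE - 1))

// after: use the ready-made mask constant from x/sys/unix
groups = unix.RTMGRP_IPV4_ROUTE

Both expressions evaluate to the same value; the second just says what it means.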
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Peers are currently removed after Device's goroutines are signaled to stop,
but without waiting for them to actually do so, which is racy.
For example, RoutineHandshake may be in Peer.SendKeepalive
when the corresponding peer is removed, which closes its nonce channel.
This causes a send on a closed channel, as observed in tailscale/tailscale#487.
This patch seems to be the correct synchronizing action:
Peer's goroutines are receivers and handle channel closure gracefully,
so Device's goroutines are the ones that should be fully stopped first.
Signed-off-by: Dmytro Shynkevych <dmytro@tailscale.com>
Go's GC semantics might not always guarantee the safety of this, and the
race detector gets upset too, so instead we wrap this all in atomic
accessors.
Reported-by: David Anderson <danderson@tailscale.com>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
The sticky socket code stays in the device package for now,
as it reaches deeply into the peer list.
This is the first step in an effort to split some code out of
the very busy device package.
Signed-off-by: David Crawshaw <crawshaw@tailscale.com>
Adds a test that will fail consistently on 32-bit platforms if the
struct ever changes again to violate the rules. This is likely not
needed because unaligned access crashes reliably, but this will reliably
fail even if tests accidentally pass due to lucky alignment.
Signed-off-by: David Anderson <danderson@tailscale.com>
Update the golang.org/x/sys/unix dependency and use the newly introduced
RTMGRP_* consts instead of using the corresponding RTNLGRP_* const to
create a mask.
Signed-off-by: Tobias Klauser <tklauser@distanz.ch>