Because the connWorkers operate on the e.peerConns list, we must ensure all workers have exited before we modify the contents of e.peerConns.
If we do not, the engine will start new connWorkers alongside the existing ones, and they will attempt connections to the same peers in parallel.
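To illustrate the pattern, here is a minimal sketch that cancels the workers and waits on a wait group before rebuilding the peer list; all names (engine, peerConns, connWorker, updatePeers) are placeholders for illustration, not the actual NetBird engine code:

```go
package engine

import (
	"context"
	"sync"
	"time"
)

// engine is an illustrative sketch; field and method names are placeholders.
type engine struct {
	mu            sync.Mutex
	peerConns     map[string]*peerConn
	workerWG      sync.WaitGroup
	cancelWorkers context.CancelFunc
}

type peerConn struct{ key string }

// connWorker keeps trying to connect its peer until the context is cancelled.
func (e *engine) connWorker(ctx context.Context, conn *peerConn) {
	defer e.workerWG.Done()
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			// attempt/maintain the connection to conn.key here
		}
	}
}

// updatePeers stops every worker and waits for all of them to exit before
// touching peerConns, so no two workers ever dial the same peer in parallel.
func (e *engine) updatePeers(newPeers []string) {
	if e.cancelWorkers != nil {
		e.cancelWorkers() // signal every connWorker to stop
	}
	e.workerWG.Wait() // block until all workers have exited

	e.mu.Lock()
	defer e.mu.Unlock()

	ctx, cancel := context.WithCancel(context.Background())
	e.cancelWorkers = cancel

	e.peerConns = make(map[string]*peerConn, len(newPeers))
	for _, key := range newPeers {
		conn := &peerConn{key: key}
		e.peerConns[key] = conn
		e.workerWG.Add(1)
		go e.connWorker(ctx, conn)
	}
}
```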
* migrate sqlite store to generic sql store
* fix conflicts
* init postgres store
* Add postgres store tests
* Refactor postgres store engine name
* fix tests
* Run postgres store tests on linux only
* fix tests
* Refactor
* cascade policy rules on policy deletion
* fix tests
* run postgres cases in new db
* close store connection after tests
* refactor
* using testcontainers
* sync go sum
* remove postgres service
* remove store cleanup
* go mod tidy
* remove env
* use postgres as engine and initialize test store with testcontainers (see the sketch after this list)
---------
Co-authored-by: Maycon Santos <mlsmaycon@gmail.com>
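For illustration, a rough sketch of how such a test could start its database with testcontainers-go; the image, credentials, and DSN format are assumptions, not the actual store test code:

```go
package store_test

import (
	"context"
	"fmt"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

// startPostgres launches a throwaway Postgres container and returns a DSN
// the store under test can connect to.
func startPostgres(t *testing.T) string {
	t.Helper()
	ctx := context.Background()

	req := testcontainers.ContainerRequest{
		Image: "postgres:16-alpine",
		Env: map[string]string{
			"POSTGRES_USER":     "netbird",
			"POSTGRES_PASSWORD": "netbird",
			"POSTGRES_DB":       "netbird_test",
		},
		ExposedPorts: []string{"5432/tcp"},
		WaitingFor:   wait.ForListeningPort("5432/tcp"),
	}
	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatalf("start postgres container: %v", err)
	}
	t.Cleanup(func() { _ = container.Terminate(ctx) })

	host, err := container.Host(ctx)
	if err != nil {
		t.Fatalf("container host: %v", err)
	}
	port, err := container.MappedPort(ctx, "5432")
	if err != nil {
		t.Fatalf("mapped port: %v", err)
	}
	return fmt.Sprintf("host=%s port=%s user=netbird password=netbird dbname=netbird_test sslmode=disable",
		host, port.Port())
}
```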
* Enable release workflow on PR and upload binaries
add GetSystem32Command to validate whether a command is in the path
it will fall back to the full System32 path, assuming the OS drive is C (a minimal sketch follows below)
---------
Co-authored-by: Maycon Santos <mlsmaycon@gmail.com>
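A minimal sketch of what such a helper could look like, assuming the fallback is simply the hard-coded System32 path on drive C (the exact path string is an assumption):

```go
package system

import "os/exec"

// GetSystem32Command returns the bare command name when it can be resolved
// via PATH, and otherwise falls back to the full System32 path, assuming
// the OS drive is C.
func GetSystem32Command(command string) string {
	if _, err := exec.LookPath(command); err == nil {
		return command
	}
	return `C:\Windows\System32\` + command
}
```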
* config.go: pull unified Config.apply() out of createNewConfig() and update()
As a bonus, it ensures the returned Config object doesn't have any configuration values missing.
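A rough sketch of the shape of this refactor, with trimmed placeholder types and fields; only Config.apply(), createNewConfig(), and update() come from the change description:

```go
package config

// Config and ConfigInput are trimmed placeholders for illustration.
type Config struct {
	ManagementURL string
	WgPort        int
}

type ConfigInput struct {
	ManagementURL string
	WgPort        int
}

// apply copies the provided input onto cfg and fills defaults for anything
// left unset, so both createNewConfig and update return a fully populated Config.
func (cfg *Config) apply(input ConfigInput) (*Config, error) {
	if input.ManagementURL != "" {
		cfg.ManagementURL = input.ManagementURL
	}
	if input.WgPort != 0 {
		cfg.WgPort = input.WgPort
	}
	if cfg.WgPort == 0 {
		cfg.WgPort = 51820 // default WireGuard port
	}
	return cfg, nil
}

func createNewConfig(input ConfigInput) (*Config, error) {
	return (&Config{}).apply(input)
}

func update(existing *Config, input ConfigInput) (*Config, error) {
	return existing.apply(input)
}
```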
* Add status anonymization
* Add OS/arch to the status command
* Use human-friendly last-update status messages
* Add debug bundle command to collect (anonymized) logs
* Add debug log level command
* Add debug for a certain time span command
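A minimal sketch of the anonymization idea behind the status and debug-bundle output: map every real peer IP to a stable placeholder so the scrubbed output stays internally consistent. The Anonymizer type below is illustrative, not the actual debug-bundle code:

```go
package anonymize

import "net/netip"

// Anonymizer maps every real address it sees to a stable placeholder from the
// TEST-NET-2 documentation range, so anonymized output stays consistent.
type Anonymizer struct {
	ipMap map[netip.Addr]netip.Addr
	next  netip.Addr
}

func NewAnonymizer() *Anonymizer {
	return &Anonymizer{
		ipMap: make(map[netip.Addr]netip.Addr),
		next:  netip.MustParseAddr("198.51.100.1"),
	}
}

// AnonymizeIP returns the placeholder assigned to addr, allocating one on first use.
func (a *Anonymizer) AnonymizeIP(addr netip.Addr) netip.Addr {
	if anon, ok := a.ipMap[addr]; ok {
		return anon
	}
	anon := a.next
	a.ipMap[addr] = anon
	a.next = a.next.Next()
	return anon
}
```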
Now that we have the latency between peers available, we can take it into account when choosing the best route. This way, the route whose routing peer has the lowest latency is preferred over other routes with the same target network.
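Conceptually, the selection could look like the sketch below, assuming each candidate route for a target network carries the measured latency of its routing peer; types and field names are illustrative:

```go
package routeselect

import "time"

// candidate is an illustrative view of one route option for a given target network.
type candidate struct {
	RoutingPeer string
	Latency     time.Duration
}

// best returns the candidate whose routing peer has the lowest measured latency.
func best(candidates []candidate) (candidate, bool) {
	if len(candidates) == 0 {
		return candidate{}, false
	}
	chosen := candidates[0]
	for _, c := range candidates[1:] {
		if c.Latency < chosen.Latency {
			chosen = c
		}
	}
	return chosen, true
}
```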
* Add Linux legacy routing if ip rule functionality is not available
* Ignore exclusion route errors if host has no route
* Exclude iOS from route manager
* Also retrieve IPv6 routes
* Ignore loopback addresses not being in the main table
* Ignore "not supported" errors on cleanup
* Fix regression in ListenUDP not using fwmarks
All routes are now installed in a custom netbird routing table.
Management and wireguard traffic is now marked with a custom fwmark.
When the mark is present the traffic is routed via the main routing table, bypassing the VPN.
When the mark is absent, the traffic is routed via the netbird routing table if:
- there's no match in the main routing table
- it would match the default route in the routing table
IPv6 traffic is blocked when an IPv4 default route is configured, to avoid leakage.
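For example, marking a UDP socket so its traffic bypasses the netbird table can be done with SO_MARK through a ListenConfig control function; the mark value and package name below are placeholders, not NetBird's actual fwmark:

```go
//go:build linux

package routemanager

import (
	"context"
	"net"
	"syscall"

	"golang.org/x/sys/unix"
)

const fwmark = 0x1BD00 // placeholder mark value

// listenUDPWithMark opens a UDP socket whose packets carry the fwmark, so the
// policy routing rules send them via the main routing table instead of the
// netbird table.
func listenUDPWithMark(addr string) (net.PacketConn, error) {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_MARK, fwmark)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}
	return lc.ListenPacket(context.Background(), "udp", addr)
}
```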
* adding peer healthcheck (a rough sketch follows after this list)
* generate proto file
* fix return in udp mux and replace with continue
* use ice agent for latency checks
* fix status output
* remove some logs
* fix status test
* revert bind and ebpf code
* fix error handling on binding response callback
* extend error handling on binding response callback
---------
Co-authored-by: Maycon Santos <mlsmaycon@gmail.com>
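A rough sketch of such a healthcheck loop, assuming the transport can send a probe toward the peer (e.g. an ICE binding request) and signal when it is answered; all names here are illustrative, not the actual NetBird healthcheck code:

```go
package healthcheck

import (
	"context"
	"time"
)

// Sender sends a probe at a fixed interval and reports the peer as unhealthy
// if no acknowledgement arrives before the timeout.
type Sender struct {
	interval time.Duration
	timeout  time.Duration

	ack       chan struct{} // signalled by the transport when a probe is answered
	Unhealthy chan struct{} // closed when the peer stops answering
	sendProbe func() error  // e.g. triggers an ICE binding request toward the peer
}

func NewSender(sendProbe func() error) *Sender {
	return &Sender{
		interval:  10 * time.Second,
		timeout:   5 * time.Second,
		ack:       make(chan struct{}, 1),
		Unhealthy: make(chan struct{}),
		sendProbe: sendProbe,
	}
}

// OnAck is called by the transport when the peer answered a probe.
func (s *Sender) OnAck() {
	select {
	case s.ack <- struct{}{}:
	default:
	}
}

// Run blocks until the context is cancelled or the peer is declared unhealthy.
func (s *Sender) Run(ctx context.Context) {
	ticker := time.NewTicker(s.interval)
	defer ticker.Stop()

	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := s.sendProbe(); err != nil {
				close(s.Unhealthy)
				return
			}
			select {
			case <-s.ack:
				// peer answered in time, keep going
			case <-time.After(s.timeout):
				close(s.Unhealthy)
				return
			case <-ctx.Done():
				return
			}
		}
	}
}
```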