* re-organized imports according to pep8
* fixed all remaining pep8 issues
* moved common config into setup.cfg, additionally test `tests`
* removed --select=X -- the selected errors were by default not in
flake8's --ignore list, so the option effectively had no effect
* update .travis.yml to reflect changes in tox.ini
* make travis just use tox in order to avoid code duplication
* replace py.test with pytest
* fixed .travis.yml
* try different pypy toxenv
* hopefully fixed testenv for pypy
* added pypy basepython, removed unused python2.6
* install dev package before testing (fixes missing coverage)
* fixed empty exception pass blocks with noqa
* Added dummy log message on empty try-except-pass blocks to make Codacy happy :(
* Replaced Exception with BaseException
There's a known issue that makes sshuttle crash if there are too
many routes on the remote host (more than fit in 64KB). This patch
requests the routes only if --auto-nets is specified on the command
line.
This was susceptible to the same deadlock issue that ipt_chain_exists
had (fixed in d43db80): if the command returned a significant amount of
output, not all of it was read, and the subprocess would hang waiting
for its output to be consumed.
When running sshuttle with a large list of routes, it fails to clean
them up at exit and prints the following:
```
$ sshuttle -r user@host.example.com -s /tmp/aws-cidrs.txt
user@host.example.com's password:
client: Connected.
^CAnother app is currently holding the xtables lock; still -9s 0us time ahead to have a chance to grab the lock...
Another app is currently holding the xtables lock; still -19s 0us time ahead to have a chance to grab the lock...
Another app is currently holding the xtables lock; still -29s 0us time ahead to have a chance to grab the lock...
```
This continues indefinitely. Looking at ps reveals that there are two
iptables processes running. Killing the first one with -9 allows
sshuttle to continue and clean up successfully.
The problem lies with the use of Popen here. The function currently
returns as soon as it finds a match, without consuming everything from
stdout. This means that if there's more output from iptables than will
fit in the pipe buffer, iptables doesn't exit and therefore doesn't
release the kernel xtables lock.
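A minimal sketch of the fix pattern (function and command names are illustrative, not sshuttle's exact code): even after a match is found, keep reading until EOF so iptables can finish writing, exit, and release the lock.
```python
import subprocess

def rule_exists(table, rule_fragment):
    # List the rules; iterate over *all* of stdout so the child process
    # is never left blocked on a full pipe buffer.
    p = subprocess.Popen(['iptables', '-t', table, '-S'],
                         stdout=subprocess.PIPE)
    found = False
    for line in p.stdout:
        if rule_fragment in line.decode('ascii', 'replace'):
            found = True  # note the match, but keep draining the pipe
    p.stdout.close()
    p.wait()  # reap the child so it can release the xtables lock
    return found
```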
Before this change, in pf, exclusions used a `pass out quick` rule, which
gave them higher precedence than any other rule, independent of subnet
width. As reported in #265, this causes an exclusion from one instance of
sshuttle to also take effect on other instances, because `quick` aborts
the evaluation of rules across all anchors.
This commit changes the precedence of rules so `quick` can now be
dropped. The new order is defined by the following rule, from
subnet_weight:
"We need to go from smaller, more specific, port ranges, to larger,
less-specific, port ranges. At each level, we order by subnet
width, from most-specific subnets (largest swidth) to
least-specific. On ties, excludes come first."
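A hypothetical sketch of that ordering expressed as a sort key; the tuple layout is an assumption for illustration, not sshuttle's actual data structure:
```python
def subnet_weight(rule):
    # rule = (swidth, is_exclude, first_port, last_port)
    swidth, is_exclude, first_port, last_port = rule
    # A 0-0 port range means "all ports", i.e. the least specific span.
    port_span = (last_port - first_port) if (first_port or last_port) else 65535
    return (
        port_span,       # smaller, more specific port ranges first
        -swidth,         # most-specific subnets (largest swidth) first
        not is_exclude,  # on ties, excludes sort before includes
    )

# Example: a /32 exclude now sorts ahead of a /16 include.
rules = [(16, False, 0, 0), (32, True, 0, 0)]
rules.sort(key=subnet_weight)
```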
* Fixes support for OpenBSD (6.1+)
As reported in #219, new versions of OpenBSD ship with a different
pfioc_rule struct. This commit adjusts the offset to match the new struct.
* Fixes tests for OpenBSD 6.1+
Having the tests in a `tests` directory in the project root is the most
common approach. Also moved pytest's conftest.py into `tests`, making the
fixture available for client and server tests.
This provides a way to avoid setting PYTHONPATH when invoking the
privileged part of sshuttle with sudo. This is useful if running
sshuttle as a PEX archive, as Telepresence does, as it enables
sshuttle's sudo access to be securely locked down.
PEX archives will extract themselves into the invoking user's home
directory, which means that the invoking user has full control over
the code in them. This makes restricting sudo access with
PYTHONPATH set completely pointless in this scenario -- an attacker
could put any code into ~/.pex and gain full root access anyway.
On the other hand, if sshuttle is a PEX archive, the privileged
invocation will simply extract itself into /root/.pex anyway, so
there is no need to set PYTHONPATH in this case.
Starting sshuttle without having to type in one's password requires
putting the sudo-ed command in the `/etc/sudoers` file. However, sshuttle
sets an environment variable, which cannot be done as-is in the sudoers
file. This fix prepends the /usr/bin/env command, which allows one to
pass fixed environment variables to a sudo-ed command.
In practice, the sub-command:
```
sudo PYTHONPATH=/usr/lib/python3/dist-packages -- \
/usr/bin/python3 /usr/bin/sshuttle --method auto --firewall
```
becomes
```
sudo /usr/bin/env PYTHONPATH=/usr/lib/python3/dist-packages \
/usr/bin/python3 /usr/bin/sshuttle --method auto --firewall
```
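For example, a sudoers entry along these lines (the user name and paths are illustrative) then lets the rewritten command run without a password prompt:
```
user ALL=(root) NOPASSWD: /usr/bin/env PYTHONPATH=/usr/lib/python3/dist-packages /usr/bin/python3 /usr/bin/sshuttle *
```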
This small change allows a file path to be passed as an argument, from
which the command line options will be loaded.
Extra command line options can be passed (in addition to those already in
the file) and existing ones can be overridden.
Example sshuttle.conf file:
```
192.168.0.0/16
--remote
user@example.com
```
Example sshuttle call:
```
sshuttle @/path/to/sshuttle.conf
```
Example sshuttle call with verbose flags added:
```
sshuttle @/path/to/sshuttle.conf -vvv
```
Example sshuttle call overriding the remote server:
```
sshuttle @/path/to/sshuttle.conf -r otheruser@test.example.com
```
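Python's argparse supports this pattern natively; a minimal sketch of how such @file expansion can be wired up (not necessarily sshuttle's exact parser definition):
```python
import argparse

# Write the example config from above to a file (any path works).
with open('/tmp/sshuttle.conf', 'w') as f:
    f.write('192.168.0.0/16\n--remote\nuser@example.com\n')

parser = argparse.ArgumentParser(
    fromfile_prefix_chars='@')  # "@file" expands to the options inside it
parser.add_argument('-r', '--remote')
parser.add_argument('-v', '--verbose', action='count', default=0)
parser.add_argument('subnets', nargs='*')

# Later options win, so -r on the real command line overrides the
# --remote value stored in the file.
opts = parser.parse_args(['@/tmp/sshuttle.conf',
                          '-r', 'otheruser@test.example.com'])
print(opts.remote, opts.subnets)  # otheruser@test.example.com ['192.168.0.0/16']
```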
When starting sshuttle with the option `--seed-hosts example.com`, I got the following error:
```
hostwatch: Starting hostwatch with Python version 3.5.2
hostwatch: Traceback (most recent call last):
---> File "sshuttle.server", line 144, in start_hostwatch
---> File "sshuttle.hostwatch", line 272, in hw_main
---> File "sshuttle.hostwatch", line 234, in check_host
---> File "sshuttle.hostwatch", line 32, in _is_ip
---> File "/usr/lib/python3.5/re.py", line 163, in match
---> return _compile(pattern, flags).match(string)
---> TypeError: cannot use a string pattern on a bytes-like object
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "assembler.py", line 37, in <module>
File "sshuttle.server", line 393, in main
File "sshuttle.ssnet", line 596, in runonce
File "sshuttle.server", line 324, in hostwatch_ready
sshuttle.helpers.Fatal: hostwatch process died
```
It seems like the list of hosts is not properly decoded on the server side. This is an attempt to fix that.
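A sketch of one way to address it (the actual patch may differ, and the regex shown is an assumption): decode the hostnames before matching, so `_is_ip` works on the `bytes` received from the client as well as on `str`.
```python
import re

_ip_re = re.compile(r'^\d+\.\d+\.\d+\.\d+$')

def _is_ip(word):
    # hostwatch may be handed bytes from the wire; decode before using a
    # str regex pattern to avoid the TypeError shown above.
    if isinstance(word, bytes):
        word = word.decode('ascii', 'replace')
    return _ip_re.match(word) is not None
```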
As suggested by @colinmkeith, the UDP and DNS proxies should listen on
different ports; otherwise the DNS proxy can get traffic intended for the
UDP proxy (or vice-versa) and handle it incorrectly, as reported in #178.
At first sight it seems that we had code in place to try another port if
the one we were binding to was already taken. However, with UDP and
SO_REUSEADDR the OS will not refuse to bind two sockets to the same
socket address, so both the UDP proxy and the DNS proxy were being bound
to the same address/port pair.
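A small demonstration of the underlying behaviour (the port number is arbitrary; exact semantics vary by OS, but on Linux both binds typically succeed):
```python
import socket

def bound_udp_socket(port):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(('127.0.0.1', port))
    return s

a = bound_udp_socket(12300)
b = bound_udp_socket(12300)  # no EADDRINUSE: "try the next port" never runs
```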
Some Linux distros, like Alpine, Arch, etc., and some BSDs, like FreeBSD,
now ship with python3.6 as the default python3. Both the client and the
server fail to run on these distros, because we are specifically looking
for python3.5.
These changes make the run shell script use python3 if its version is
greater than 3.5, otherwise falling back as usual.
On the server, any version of python3 will do: use it before falling back
to python, since the server code can run with any version of python3.
When the pf module is not loaded, our calls to pfctl will fail with
unhelpful messages.
This change spares the user the pain of deciphering those messages and
manually enabling pf. It also keeps track of whether pf was loaded by
sshuttle and unloads it on exit if that was the case.
Also fixed the case where both ipv4 and ipv6 anchors were added by
sshuttle, but the first call to disable would disable pf before the
second call had the chance to clean up its anchor.
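An illustrative sketch of the bookkeeping only (not sshuttle's actual firewall code, and the pfctl output check is an assumption): remember whether we enabled pf ourselves and only disable it on exit in that case, after all anchors are gone.
```python
import subprocess

_we_enabled_pf = False

def pf_is_enabled():
    out = subprocess.check_output(['pfctl', '-s', 'info']).decode()
    return 'Status: Enabled' in out

def ensure_pf_enabled():
    global _we_enabled_pf
    if not pf_is_enabled():
        subprocess.check_call(['pfctl', '-e'])
        _we_enabled_pf = True

def cleanup_pf():
    # Only turn pf off if sshuttle was the one that turned it on, and do
    # it once, after *both* the ipv4 and ipv6 anchors have been removed.
    if _we_enabled_pf:
        subprocess.call(['pfctl', '-d'])
```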
If we receive no routes from the server or if, for some reason, we
receive some empty lines, we should skip them instead of crashing.
Fixes one of the problems in #147.
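A minimal sketch of the guard described above (the per-line route format is an assumption for illustration):
```python
def parse_routes(data):
    for line in data.splitlines():
        line = line.strip()
        if not line:
            continue  # no routes received, or a stray blank line: skip it
        family, ip, width = line.split(',', 2)
        yield int(family), ip, int(width)
```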