Search the entire Python sys.path, not just the directory that argv[0] is
in. That way, if you symlink the sshuttle binary into (for example) ~/bin,
it'll still be able to find its files and work correctly.
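The search amounts to something like this (a minimal sketch; module_path()
is a hypothetical helper name, not the real function):

    import os, sys

    def module_path(name):
        # Check every sys.path entry, not just os.path.dirname(sys.argv[0]),
        # so a symlinked binary still finds the real source files.
        for d in sys.path:
            path = os.path.join(d, name)
            if os.path.exists(path):
                return path
        raise Exception('%r not found anywhere in sys.path' % name)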
Pointed out by nisc on GitHub. If people use an unusual umask or have funny
permissions on /etc/hosts, sshuttle would screw it up.
We also use hardlinks to atomically back up the original /etc/hosts to
/etc/hosts.sbak the first time, rather than manually copying it. Not sure
why I didn't think of that before.
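Roughly like this (a sketch, with a hypothetical temp-file name; the point
is that os.link() either creates the backup atomically or fails because one
already exists):

    import os

    def backup_and_rewrite_hosts(content):
        try:
            os.link('/etc/hosts', '/etc/hosts.sbak')  # atomic one-time backup
        except OSError:
            pass  # the backup already exists from an earlier run
        st = os.stat('/etc/hosts')
        tmpname = '/etc/hosts.sstmp'  # hypothetical temp name
        f = open(tmpname, 'w')
        f.write(content)
        f.close()
        # copy the original file's mode and owner instead of trusting umask
        os.chmod(tmpname, st.st_mode & 0o7777)
        os.chown(tmpname, st.st_uid, st.st_gid)
        os.rename(tmpname, '/etc/hosts')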
Now if you use --auto-hosts (-H), the client will ask the server to spawn a
hostwatcher to discover names. That, in turn, sends the names it finds to
the server, which sends them back to the client, which sends them to the
firewall subprocess, which writes them to /etc/hosts. Whew!
Only the firewall process can write to /etc/hosts, of course, because it's
the only one running as root.
Since the name discovery process is kind of slow, we cache the names in
~/.sshuttle.hosts on the remote server.
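The cache doesn't need to be anything fancier than name/IP pairs, one per
line; a sketch, with hypothetical helper names:

    import os

    CACHEFILE = os.path.expanduser('~/.sshuttle.hosts')

    def read_host_cache():
        hosts = {}
        if os.path.exists(CACHEFILE):
            for line in open(CACHEFILE):
                words = line.strip().split()
                if len(words) == 2:
                    hosts[words[0]] = words[1]
        return hosts

    def write_host_cache(hosts):
        f = open(CACHEFILE, 'w')
        for name, ip in sorted(hosts.items()):
            f.write('%s %s\n' % (name, ip))
        f.close()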
Right now, most of the names are discovered using nmblookup and smbclient,
as well as by reading the existing entries in /etc/hosts. What would be
really nice is to query Active Directory or mDNS somehow... but I don't
really know how those work, so this is what you get for now :) It's pretty
neat, at least.
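The /etc/hosts part is the easy one; a sketch of just that piece (the
nmblookup/smbclient output parsing is messier and omitted here):

    def hosts_from_etc_hosts(path='/etc/hosts'):
        # each line is: IP, whitespace, one or more names; '#' starts a comment
        for line in open(path):
            line = line.split('#')[0].strip()
            if not line:
                continue
            words = line.split()
            ip, names = words[0], words[1:]
            for name in names:
                yield name, ip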
With this rule, we don't interfere with already-established (or incoming)
connections to routes that we're about to take over. This is what
happens by default in Linux/iptables.
Now if you do
    ./sshuttle -Nr username@myservername
it'll automatically route the "local" subnets (i.e., stuff in the routing
table) from myservername. This is (hopefully) a reasonable default setting
for most people.
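The real code may gather routes differently (e.g. by parsing netstat output
on the server), but as a Linux-only sketch of what "stuff in the routing
table" means:

    import socket, struct

    def local_subnets():
        # /proc/net/route columns: Iface, Destination, Gateway, Flags,
        # RefCnt, Use, Metric, Mask, ...; Destination and Mask are
        # little-endian hex.
        f = open('/proc/net/route')
        f.readline()  # skip the header line
        for line in f:
            words = line.split()
            dest = socket.inet_ntoa(struct.pack('<L', int(words[1], 16)))
            mask = socket.inet_ntoa(struct.pack('<L', int(words[7], 16)))
            if dest != '0.0.0.0':  # skip the default route
                yield dest, mask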
Instead, grab our source code, send it over the link, and have python eval
it and then start the server.py main() function.
Strangely, there's now *less* horrible stuff in ssh.py, because we no longer
have to munge around with the PATH environment variable. And this
significantly reduces the setup required to get sshuttle going.
Based on a suggestion from Wayne Scott.
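The real version bundles several source files and is fussier about quoting
and python versions, but the core trick is small enough to sketch (assuming
server.py defines a main() function):

    import subprocess

    def start_remote_server(rhost):
        src = open('server.py').read()
        # Tiny bootstrap: read exactly len(src) bytes from stdin, compile
        # and exec them, then call main().  Nothing has to be installed on
        # the server beforehand.
        boot = ('import sys; s = sys.stdin.read(%d); '
                'exec(compile(s, "server.py", "exec")); main()' % len(src))
        p = subprocess.Popen(['ssh', rhost, "python -c '%s'" % boot],
                             stdin=subprocess.PIPE, stdout=subprocess.PIPE)
        p.stdin.write(src.encode())
        p.stdin.flush()
        return p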
Basic forwarding now works on MacOS, assuming you set up ipfw correctly
(ha ha). I wasted a few hours today trying to figure out the right ipfw
setup, and I'm *so very close*, but unfortunately it just didn't work.
Think you can figure it out?
Related changes:
- don't die if iptables is unavailable
- BSD uses getsockname() instead of SO_ORIGINAL_DST (see the sketch below)
- non-blocking connect() returns EISCONN once it's connected
- you can't setsockopt IP_TTL more than once
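Two of those lessons in socket terms (a sketch; nb_connect() and
original_dst() are hypothetical helper names):

    import errno, select, socket

    def original_dst(sock):
        # With ipfw 'fwd' the packet isn't rewritten, so the accepted
        # socket's own local address *is* the original destination --
        # no SO_ORIGINAL_DST getsockopt like on Linux.
        return sock.getsockname()

    def nb_connect(sock, addr):
        sock.setblocking(False)
        try:
            sock.connect(addr)
            return  # connected instantly (can happen on localhost)
        except socket.error as e:
            if e.args[0] not in (errno.EINPROGRESS, errno.EWOULDBLOCK):
                raise
        while True:
            select.select([], [sock], [])  # wait until writable
            try:
                sock.connect(addr)  # retry just to query the status
                return
            except socket.error as e:
                if e.args[0] == errno.EISCONN:
                    return  # "already connected": success
                elif e.args[0] in (errno.EINPROGRESS, errno.EALREADY):
                    continue
                raise  # the connection actually failed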
If you ran sshuttle from /home/apenwarr/sshuttle/sshuttle, we would
automatically add /home/apenwarr/sshuttle to the PATH before trying to
execute sshuttle on the remote machine. That way, if you installed it in
the same place on both computers, the client would still be able to start
the server.
Someone reported, though, that if they installed the client in
/home/apenwarr/sshuttle/sshuttle, and the server in /root/sshuttle/sshuttle,
then used "-r root@servername", it wasn't able to find the program.
Similar problems would happen if you're apenwarr at home and averyp at work.
So what we now do is add *two* directories to the PATH:
/home/apenwarr/sshuttle and $HOME/sshuttle, where $HOME is the value of
$HOME on the *server*, not the client. So it'll find it in either place.
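In terms of the command we hand to ssh, that's roughly (a sketch; the
quoting and the exact server invocation are simplified):

    def remote_argv(rhost, client_dir):
        # client_dir is where *our* copy lives, e.g. /home/apenwarr/sshuttle;
        # $HOME/sshuttle is left for the remote shell to expand on the server.
        cmd = 'PATH=%s:$HOME/sshuttle:$PATH sshuttle --server' % client_dir
        return ['ssh', rhost, cmd]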
If the server was having trouble starting, we would print a lot of
unnecessary stuff from iptables. We shouldn't even have bothered *starting*
iptables if the server was dead anyway.
Previous versions depended on having 'sudo' in your PATH. Now that we can
feel safe that --iptables will clean up properly when you exit, and it
doesn't need to authenticate twice, sudo's advantages aren't strictly
necessary anymore. Good old 'su' is a reasonable fallback - and everybody
has it, which is nice.
Unfortunately su doesn't let you redirect stdin, so I had to play a stupid
fd trick to make it work.
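The trick, sketched: leave stdin pointing at the tty so su can ask for a
password, and attach one end of a socketpair to the child's stdout instead;
since a socket is bidirectional, it carries traffic both ways.

    import socket, subprocess

    def start_su_subprocess(argv):
        s1, s2 = socket.socketpair()
        p = subprocess.Popen(['su', '-c', ' '.join(argv)],
                             stdout=s1.fileno(),   # our channel, both ways
                             preexec_fn=s2.close)  # child drops our end
        s1.close()
        return p, s2.makefile('rwb')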
Now the sudo iptables subprocess persists for the entire life of sshuttle.
The benefits of this are:
- no need to authenticate again at shutdown (failure of which could cause us
to not clean up iptables)
- if the main process dies unexpectedly, iptables still gets cleaned up (see
the sketch below)
- the password prompt can happen *before* starting the ssh/server process,
which means it'll stand out and won't get overwritten by other output.
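The second point falls out of the design for free: the privileged subprocess
just reads commands from the pipe connected to the main process, and EOF
(however the parent died) triggers cleanup. A sketch, with an illustrative
per-port chain name:

    import subprocess, sys

    def ipt(*args):
        subprocess.check_call(['iptables'] + list(args))

    def firewall_main(port):
        chain = 'sshuttle-%d' % port  # chain name is illustrative
        ipt('-t', 'nat', '-N', chain)
        try:
            # ...add REDIRECT rules to the chain here...
            while 1:
                line = sys.stdin.readline()
                if not line:
                    break  # EOF: the parent exited, cleanly or otherwise
        finally:
            # runs even if the parent was kill -9'd and we just saw EOF
            ipt('-t', 'nat', '-F', chain)
            ipt('-t', 'nat', '-X', chain)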
It seems ssh is kind of stupid and uses a really big SO_SNDBUF (hundreds of
kbytes). Thus, we can't depend on the socket's output buffer to limit our
latency down to something reasonable. Instead, limit the amount of data we
can send in a single round trip.
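Conceptually it's just byte accounting (a sketch; the 64k cap is made up
for illustration):

    MAX_OUTSTANDING = 65536  # illustrative cap: about one round trip of data

    class FlowControl:
        def __init__(self):
            self.outstanding = 0  # bytes sent but not yet acknowledged

        def can_send(self):
            # ssh's giant SO_SNDBUF would happily queue far more than this,
            # hiding seconds of latency; so we stop feeding it ourselves.
            return self.outstanding < MAX_OUTSTANDING

        def sent(self, nbytes):
            self.outstanding += nbytes

        def acked(self, nbytes):
            # the far end periodically reports how much it has consumed
            self.outstanding -= nbytes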
Now if we aren't given an explicit port, we always initiate the port search
at 12300 and count upward looking for an available port.
Normally the kernel will assign us a random port, but that's not ideal
in our case because we'd like to use the same port numbers whenever
possible; that avoids piling up crap inside iptables in the (hopefully
unlikely) event that we die without cleaning up correctly.
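The search loop itself is tiny (a sketch):

    import socket

    def listener(port=12300):
        while True:
            s = socket.socket()
            try:
                s.bind(('127.0.0.1', port))
                s.listen(5)
                return s, port  # same port as last time, if it was free
            except socket.error:
                s.close()
                port += 1  # taken (maybe by a dead previous run); try the next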
When regenerating outgoing connections, we set TTL=42 to prevent re-proxying
of requests. That's a little hacky, but at least it avoids infinite loops.
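In socket terms (a sketch; the matching firewall rule then excludes packets
whose TTL is 42, so the proxy's own connections are never redirected back
into it):

    import socket

    def connect_unproxied(addr):
        s = socket.socket()
        # mark our own outgoing packets with a distinctive TTL so the
        # redirect rule can skip them; otherwise we'd capture our own
        # connections and loop forever
        s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 42)
        s.connect(addr)
        return s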