This makes it easier to actually test what happens when channel numbers wrap
around. The good news: it works.
However, I did find a bug where sshuttle would die if we completely ran out
of available channel numbers because so many of them were open. This would
never realistically happen at the default of 65535 channels (we'd run out of
file descriptors first), but it's still a bug, so let's handle it by just
dropping the connection when it happens.
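A rough sketch of wrap-around channel allocation along those lines (the names
here are illustrative, not sshuttle's actual code):

    MAX_CHANNEL = 65535                 # channel ids wrap around at this point

    def next_channel(used, last):
        # 'used' is the set of channel ids currently open; 'last' is the most
        # recently assigned id.  Scan forward from there, wrapping as needed.
        for i in range(1, MAX_CHANNEL + 1):
            c = (last + i - 1) % MAX_CHANNEL + 1    # cycles through 1..65535
            if c not in used:
                return c
        return None    # all channels taken: caller should drop the connection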
UDP and TCP have separate port namespaces, so to make it easier to keep
track of what's going on, just use the same transproxy port number for both.
We still need two sockets, but now tcpdumps are easier to understand.
Limitations:
- uses a hardcoded DNS server IP on both client and server
- never expires request/response objects, so leaks memory and sockets
- works only with iptables, not with ipfw
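To illustrate the shared port number above: TCP and UDP ports really are
independent, so both sockets can bind the same number (12300 here is just an
example value):

    import socket

    port = 12300
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    tcp.bind(('127.0.0.1', port))
    udp.bind(('127.0.0.1', port))    # same number, different protocol: fine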
* daemon:
daemonization: make sure the firewall subproc sends to syslog too.
Rearrange daemonization/syslog stuff and make it more resilient.
Add options to run in the background (as a daemon) and to send output to
syslog.
Rename --background to -D/--daemon, to match other programs (like smbd).
You can now have --syslog even without --daemon.
Avoid using atexit(); try/finally is better.
Don't just close stderr; we'll end up eating error output from ssh!
Instead, redirect stderr to a 'logger' subprocess that will send to syslog.
Delay redirecting stderr until after we know we're daemonizing, so handy
error messages can go to stderr instead of syslog.
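A minimal sketch of that stderr-to-logger redirection (the tag and priority
passed to logger are plausible choices, not necessarily the exact ones used):

    import os, subprocess

    p = subprocess.Popen(['logger', '-p', 'daemon.notice', '-t', 'sshuttle'],
                         stdin=subprocess.PIPE)
    os.dup2(p.stdin.fileno(), 2)    # stderr now flows into syslog via logger
    p.stdin.close()                 # keep only the duplicated copy of the fd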
Make pidfile stuff more resilient: support already-existing files, files
with strict permissions, outdated files containing an already-dead pid. Add
a --pidfile option to let you specify the pidfile path.
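The 'already-dead pid' case can be detected with a zero signal; a sketch of
the idea (the function name is made up):

    import errno, os

    def pid_is_alive(path):
        try:
            with open(path) as f:
                pid = int(f.read().strip())
        except (IOError, ValueError):
            return False        # missing or garbage pidfile: treat as stale
        try:
            os.kill(pid, 0)     # signal 0 only tests whether the pid exists
        except OSError as e:
            if e.errno == errno.ESRCH:
                return False    # no such process: the pidfile is outdated
            raise               # e.g. EPERM: the pid exists but isn't ours
        return True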
chdir("/") while daemonizing, so that the filesystem containing $PWD can
later be unmounted without killing the daemon.
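For context, the chdir fits into the usual double-fork sequence, roughly like
this (a generic daemonization sketch, not the exact implementation):

    import os

    def daemonize():
        if os.fork():
            os._exit(0)     # parent returns control to the shell
        os.setsid()         # become a session leader, detaching from the tty
        if os.fork():
            os._exit(0)     # first child exits; the grandchild is the daemon
        os.chdir('/')       # don't pin whatever filesystem $PWD was on
        fd = os.open(os.devnull, os.O_RDWR)
        os.dup2(fd, 0)      # stdin from /dev/null; stderr goes to the logger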
fw.done() can't wait on the firewall subprocess on exit when daemonized; we're
no longer the parent of that process.
Handle the case where there are too many sockets open: if that happens, just
throw away any connections that arrive in the meantime instead of aborting
completely.
Add some cleverness for breaking infinite loops. Previously we'd only detect
a loop if you connected to exactly the same IP as we were listening on, but
that was unreliable if we were listening on 0.0.0.0 and you connected to one
of our local IP addresses that we didn't know about.
Now, if you try to connect to our listen port on *any* IP, we try binding to
that IP as a local socket; if it works, that's a local IP, and therefore
it's our socket, so reject the connection. If it doesn't work, it's a
remote IP, so forward it along.
Thanks to 'tass' on github for noticing the problem.
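The bind test itself is simple; a sketch (the function name is made up):

    import socket

    def islocal(ip):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            try:
                s.bind((ip, 0))   # any port; we only care if the IP is ours
            except socket.error:
                return False      # can't bind: remote IP, forward it along
        finally:
            s.close()
        return True               # bind worked: it's one of our own addresses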
Based on suggestions by Jason Grossman and Ed Maste on the mailing list.
We now add a [local su] prefix to the 'su' password prompt (by cheating and
printing it before calling su), and we replace the 'sudo' password prompt
with '[local sudo] Password: ' (by using the little-known and
hopefully-portable -p option).
We no longer call sudo or su if the uid is already 0; otherwise the prefix
on the 'su' prompt would look weird, since su wouldn't ask for a password in
that case.
We don't add a prefix to the ssh password prompt, because it's too hard to
tell if there will *be* an ssh password prompt. But people will probably
assume that the password request is for the server anyway; few people are
likely to think that 'sshuttle -r myhost.com' is going to prompt for the
*local* password.
Of course none of this is a problem on a modern OS, like Debian, that would
say something like "Password for apenwarr@myhost.com:" instead of just
"Password:". MacOS doesn't do that, however, so I assume many other OSes
also don't. Let's try to help them out.
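Roughly, the command construction looks like this (a sketch; the function
name and the way sudo availability gets decided are assumptions):

    import os, sys

    def elevate_argv(argv, use_sudo):
        if os.getuid() == 0:
            return argv     # already root: no su/sudo, and no weird prompt
        if use_sudo:
            # sudo's (hopefully portable) -p option replaces its prompt
            return ['sudo', '-p', '[local sudo] Password: '] + argv
        # su has no prompt option, so cheat: print our prefix before it asks
        sys.stderr.write('[local su] ')
        return ['su', '-c', ' '.join(argv)]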
Also, add 127.0.0.0/8 to the default list of excludes. If you want to route
0/0, you almost certainly *don't* want to route localhost to the remote ssh
server's localhost!
Thanks to Edward for the suggestion.
Now if you use --auto-hosts (-H), the client will ask the server to spawn a
hostwatcher to add names. That, in turn, will send names back to the
server, which sends them back to the client, which sends them to the
firewall subprocess, which will write them to /etc/hosts. Whew!
Only the firewall process can write to /etc/hosts, of course, because it's
the only one running as root.
Since the name discovery process is kind of slow, we cache the names in
~/.sshuttle.hosts on the remote server.
Right now, most of the names are discovered using nmblookup and smbclient,
as well as by reading the existing entries in /etc/hosts. What would really
be nice would be to query active directory or mdns somehow... but I don't
really know how those work, so this is what you get for now :) It's pretty
neat, at least.
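The /etc/hosts updates can be kept reversible by tagging each added line with
a marker; a sketch of that approach (marker text and function name are
assumptions):

    MARKER = '# sshuttle-added'

    def rewrite_etc_hosts(names, path='/etc/hosts'):
        # 'names' maps hostname -> ip; drop our old lines, append fresh ones
        with open(path) as f:
            kept = [line for line in f if MARKER not in line]
        with open(path, 'w') as f:
            f.writelines(kept)
            for name, ip in sorted(names.items()):
                f.write('%-15s %s  %s\n' % (ip, name, MARKER))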
Now if you do
./sshuttle -Nr username@myservername
It'll automatically route the "local" subnets (i.e., stuff in the routing
table) from myservername. This is (hopefully) a reasonable default setting
for most people.
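On the server side, those "local" subnets can be read straight out of the
routing table; a sketch assuming a Linux-style 'ip route' (the real code may
well parse a different tool):

    import subprocess

    def server_subnets():
        out = subprocess.check_output(['ip', 'route'], text=True)
        nets = []
        for line in out.splitlines():
            fields = line.split()
            if fields and '/' in fields[0] and 'via' not in fields:
                nets.append(fields[0])   # keep only directly attached networks
        return nets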
Basic forwarding now works on MacOS, assuming you set up ipfw correctly
(ha ha). I wasted a few hours today trying to figure this out, and I'm *so
very close*, but unfortunately it just didn't work. Think you can figure it
out?
Related changes:
- don't die if iptables is unavailable
- BSD uses getsockname() instead of SO_ORIGINAL_DST (see the sketch below)
- non-blocking connect() returns EISCONN once it's connected
- you can't setsockopt IP_TTL more than once
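For the getsockname()/SO_ORIGINAL_DST difference, the lookup can be sketched
like this (the constant comes from <linux/netfilter_ipv4.h>; error handling
is simplified):

    import socket, struct

    SO_ORIGINAL_DST = 80    # from <linux/netfilter_ipv4.h>

    def original_dst(sock):
        try:
            # Linux iptables REDIRECT: the original destination is a
            # getsockopt away, packed as a sockaddr_in
            raw = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
            port = struct.unpack('!H', raw[2:4])[0]
            return (socket.inet_ntoa(raw[4:8]), port)
        except socket.error:
            # BSD ipfw fwd: the redirected socket's local name *is* the
            # original destination
            return sock.getsockname()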
Previous versions depended on having 'sudo' in your PATH. Now that we can
trust --iptables to clean up properly when you exit, and it no longer needs
to authenticate twice, sudo's advantages aren't strictly necessary. Good old
'su' is a reasonable fallback - and everybody has it, which is nice.
Unfortunately su doesn't let you redirect stdin, so I had to play a stupid
fd trick to make it work.
Now the sudo iptables subprocess persists for the entire life of sshuttle.
The benefits of this are:
- no need to authenticate again at shutdown (failure of which could cause us
to not clean up iptables)
- if the main process dies unexpectedly, iptables still gets cleaned up
- the password prompt can happen *before* starting the ssh/server process,
which means it'll stand out and the password prompt won't be overwritten.
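In outline, the long-lived privileged helper looks something like this (the
command is a placeholder; the real pipe protocol carries more than a close):

    import subprocess

    def start_firewall(argv):
        # e.g. ['sudo', '-p', '[local sudo] Password: ', <firewall helper>]
        return subprocess.Popen(argv, stdin=subprocess.PIPE)

    def stop_firewall(p):
        # Closing the pipe tells the helper to remove its rules and exit.
        # If the main process dies unexpectedly, the pipe closes anyway, so
        # cleanup still happens without a second password prompt.
        p.stdin.close()
        p.wait()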
Now if we aren't given an explicit port, we always initiate the port search
at 12300 and count upward looking for an available port.
Normally the kernel will assign us a random port, but that's not ideal
in our case because we'd like to use the same port numbers whenever
possible; that avoids piling up crap inside iptables in the (hopefully
unlikely) event that we die without cleaning up correctly.
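A sketch of that search (the starting port matches the text above; the retry
limit is an arbitrary choice):

    import socket

    def bind_first_free(start=12300, tries=100):
        for port in range(start, start + tries):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            try:
                s.bind(('127.0.0.1', port))
                return s, port        # reuse the lowest free port every run
            except socket.error:
                s.close()             # already taken: try the next one
        raise Exception('no available port after %d tries' % tries)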