Just kidding. This is a squash of a whole bunch of unlabeled temporary
commits that I produced over the last couple of weeks while writing a UI
for MacOS, mostly on airplanes and in airports.
So long, batch of useless commits!
We were doing it as soon as the other end was shut_read, but we didn't
confirm first that the other end's read buffer had been totally emptied.
This caused problems on twitter.com, at least, where they create a zillion
short-lived connections.
I don't actually need these lines at all, as it turns out, because we were
already calling nowrite() correctly on the other end after a buffer got
emptied.
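For reference, the ordering both of these commits are circling around, as a
rough sketch (the buf/shut_read/nowrite-style names are illustrative, borrowed
from the surrounding text; this isn't the real ssnet code):

    # Only propagate the shutdown once everything the peer read has been
    # flushed; shutting down earlier can drop the tail of short-lived
    # connections.
    def maybe_shut_write(wrapper, peer):
        if peer.shut_read and not peer.buf:   # peer hit EOF, buffer drained
            wrapper.nowrite()                 # now it's safe to stop writing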
* daemon:
  daemonization: make sure the firewall subproc sends to syslog too.
  Rearrange daemonization/syslog stuff and make it more resilient.
  Add options to run in the background (daemon) and log to syslog.
  Rename --background to -D/--daemon, to match other programs (like smbd).
  You can now have --syslog even without --daemon.
  Avoid using atexit(); try/finally is better.
Don't just close stderr; we'll end up eating error output from ssh!
Instead, redirect stderr to a 'logger' subprocess that will send to syslog.
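Roughly what that redirection looks like (a sketch; the logger tag and
priority flags here are a guess, not necessarily the ones sshuttle uses):

    import os
    import subprocess

    # Feed our stderr into logger(1), which forwards each line to syslog;
    # ssh's error output then lands in syslog instead of vanishing.
    p = subprocess.Popen(['logger', '-t', 'sshuttle', '-p', 'daemon.notice'],
                         stdin=subprocess.PIPE)
    os.dup2(p.stdin.fileno(), 2)   # fd 2 is stderr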
Delay redirecting stderr until after we know we're daemonizing, so handy
error messages can go to stderr instead of syslog.
Make pidfile stuff more resilient: support already-existing files, files
with strict permissions, outdated files containing an already-dead pid. Add
a --pidfile option to let you specify the pidfile path.
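The "already-dead pid" check boils down to something like this (sketch only):

    import os

    def pid_exists(pid):
        # Signal 0 delivers nothing; it only asks the kernel whether the
        # pid is alive (ignoring the EPERM corner case for brevity).
        try:
            os.kill(pid, 0)
            return True
        except OSError:
            return False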
chdir("/") while daemonizing, so that the filesystem containing $PWD can
later be unmounted without killing the daemon.
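For reference, the usual daemonization dance this refers to, with the
chdir('/') step included (a generic sketch, not sshuttle's exact code):

    import os

    def daemonize():
        if os.fork():
            os._exit(0)      # parent returns to the shell
        os.setsid()          # become a session leader, detach from the tty
        if os.fork():
            os._exit(0)      # first child exits; grandchild carries on
        os.chdir('/')        # don't pin the filesystem containing $PWD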
fw.done() can't wait on the firewall subprocess on exit when daemonized; we
are no longer the parent of that process.
When the server disconnected, we were forwarding that information to the
client. But we weren't forwarding back the other way when the client
disconnected since there was no callback in place to do that.
Relatedly, when we failed entirely to connect to the server, we didn't notify the
client right away. Now we do.
Thanks to 'Roger' on the mailing list for pointing out these bugs.
It can happen if there are too many sockets open. If that happens, just
throw away any connections that arrive in the meantime instead of aborting
completely.
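The recovery amounts to something like this (names are invented, and the
exact call that runs out of descriptors may differ in the real code):

    import errno
    import socket

    def accept_or_drop(listener):
        sock, srcip = listener.accept()
        try:
            outsock = socket.socket()          # may fail: out of descriptors
        except socket.error as e:
            if e.args[0] in (errno.EMFILE, errno.ENFILE):
                sock.close()                   # drop this one, keep running
                return None
            raise
        return sock, outsock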
This avoids any possible problem caused by other people on your network
using you as a proxy. If you want to allow this, you can force it back to
the old way using the --listen option.
Thanks to 'tass' on github for reporting portscans that revealed this
potential security problem.
Add some cleverness for breaking infinite loops. Previously we'd only
detect one successfully if you connected to exactly the same IP we were
listening on, but that was unreliable if we were listening on 0.0.0.0 and you
connected to one of the machine's addresses we didn't know about.
Now, if you try to connect to our listen port on *any* IP, we try binding to
that IP as a local socket; if it works, that's a local IP, and therefore
it's our socket, so reject the connection. If it doesn't work, it's a
remote IP, so forward it along.
Thanks to 'tass' on github for noticing the problem.
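The bind trick itself is small enough to sketch:

    import socket

    def ip_is_local(ip):
        # If we can bind a socket to this address, it belongs to one of our
        # own interfaces, so a connection to it is our own listener looping
        # back; reject it instead of forwarding.
        try:
            s = socket.socket()
            try:
                s.bind((ip, 0))    # port 0 = any free port
            finally:
                s.close()
            return True
        except socket.error:
            return False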
(Note by apenwarr: I used Roger's original patch as the basis for this one,
but implemented it a different way. All errors are thus my fault, but Roger
gets the credit for actually tracking down the circular reference that
caused the memory leak.)
(Note by apenwarr: seems to still work for me. The reason the
problem occurred is that reassigning 'handlers' doesn't change it in its
parent; it creates a whole new list, and the caller still owns the old one
with all the dead sockets in it. The problem seems to have been introduced
in commit 84376284db when I factored the
runonce() functionality out of the client and server but didn't notice this
reassignment.)
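The difference is easy to miss; in miniature (illustrative only, not the
actual runonce() code):

    # What the refactored code effectively did: rebind the local name.  The
    # caller's list keeps the dead handlers (and their sockets) alive.
    def runonce_wrong(handlers):
        handlers = [h for h in handlers if h.ok]     # caller never sees this

    # Mutating the caller's list in place is what actually prunes them:
    def runonce_fixed(handlers):
        handlers[:] = [h for h in handlers if h.ok]  # filter in place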
Supported sshuttle commands for IPv6:
./sshuttle -r "IPv6:addr" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr]" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr]:22" 0.0.0.0/0 -vv
Technically "invalid" address/port formats, but they can still be parsed because they’re unambiguous, so these also work:
./sshuttle -r "IPv6:addr]" 0.0.0.0/0 -vv
./sshuttle -r "IPv6:addr]:" 0.0.0.0/0 -vv
./sshuttle -r "IPv6:addr]:22" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr" 0.0.0.0/0 -vv
(If you have a Mac with Back To My Mac, use dns-sd to discover the remote host's IPv6 address:
dns-sd -G v4v6 <machine name>.<member name>.members.mac.com )
Based on suggestions by Jason Grossman and Ed Maste on the mailing list.
We now add a [local su] prefix to the 'su' password prompt (by cheating and
printing it before calling su), and we replace the 'sudo' password prompt
with '[local sudo] Password: ' (by using the little-known and
hopefully-portable -p option).
We no longer call sudo or su if the uid is already 0; otherwise the prefix
on the 'su' prompt would look weird, since su wouldn't ask for a password in
that case.
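The sudo half is just the -p option; sketched (the helper name is made up):

    import os

    def maybe_sudo(argv):
        if os.getuid() == 0:
            return argv          # already root: no prompt, no prefix needed
        # -p replaces sudo's password prompt, making it obvious the prompt
        # is for the local machine rather than the ssh server.  (The 'su'
        # half just prints its prefix itself before running su.)
        return ['sudo', '-p', '[local sudo] Password: '] + argv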
We don't add a prefix to the ssh password prompt, because it's too hard to
tell if there will *be* an ssh password prompt. But people will probably
assume that the password request is for the server anyway; few people are
likely to think that 'sshuttle -r myhost.com' is going to prompt for the
*local* password.
Of course none of this is a problem on a modern OS, like Debian, that would
say something like "Password for apenwarr@myhost.com:" instead of just
"Password:". MacOS doesn't do that, however, so I assume many other OSes
also don't. Let's try to help them out.
Instead of assuming every sysctl we care about exists, get a list of known
sysctls under the interesting prefix (net.inet.ip) and check that there's an
entry in the list for each sysctl we want to change. If there isn't, don't
try to change it.
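Roughly like this (the sysctl invocation and 'name: value' output format are
assumptions about the BSD tools, not a copy of sshuttle's code):

    import subprocess

    def known_sysctls(prefix='net.inet.ip'):
        # 'sysctl net.inet.ip' prints every entry under the prefix on
        # BSD/MacOS; collect just the names.
        out = subprocess.Popen(['sysctl', prefix],
                               stdout=subprocess.PIPE).communicate()[0]
        names = []
        for line in out.decode().splitlines():
            if ':' in line:
                names.append(line.split(':', 1)[0].strip())
        return names

    def sysctl_set_if_present(name, value, known):
        if name in known:                     # skip sysctls this OS lacks
            subprocess.call(['sysctl', '-w', '%s=%s' % (name, value)])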
This fixes a problem with FreeBSD, which doesn't have
net.inet.ip.scopedroute but also doesn't need it. Probably also fixes MacOS
10.5, which probably didn't have that either, but I don't know for sure.
Reported by Ed Maste.
'del' is an abbreviation that happened to work because of substring matching
in earlier versions of ipfw, but apparently they're planning to remove the
substring matching eventually. In any case, 'delete' has always worked, so
there's no downside to using that.
Reported by Ed Maste.
If hostwatch had a lot to say all at once, it would arrive split across more
than one recv() call, and server.py would send each chunk individually as
a CMD_HOST_LIST message. Unfortunately, client.py (rightly) expects each
CMD_HOST_LIST message to be complete, i.e. a correct sequence of rows.
So now server.py makes sure of that. If there's a leftover bit (i.e. an
unterminated line), it saves it for later.
Bug reported by user "Duke" on the mailing list.
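The reassembly amounts to this (names invented for the sketch):

    hostwatch_buf = ''

    def got_host_report(data, send_host_list):
        # Accumulate hostwatch output and only ship complete rows; an
        # unterminated trailing line stays buffered until more data arrives.
        global hostwatch_buf
        hostwatch_buf += data
        complete, sep, leftover = hostwatch_buf.rpartition('\n')
        hostwatch_buf = leftover
        if sep:
            send_host_list(complete + '\n')   # one well-formed CMD_HOST_LIST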
* python23:
  Oops, missed another << operator to replace with _shl().
  socket.SHUT_RD and socket.SHUT_WR don't exist in python 2.3.
  compat/ssubprocess.py: some python versions don't have os.closerange().
  _nb_clean: don't catch EPIPE after all.
  Fix busy-waiting in two situations.
  Factor out common mainloop code between client and server.
  Implement our own left-shift operator to shut up python 2.3 warnings.
  Don't use set() since it's not in python 2.3.
  Import and use subprocess.py from python 2.6.
  Remove list comprehensions for python 2.3 compatibility.
Mercifully, socket.socket.shutdown() itself still exists, but it takes plain
integer parameters - and the integers correspond to the SHUT_RD and SHUT_WR
definitions in later versions - so let's just hardcode them ourselves.
See the carnage for yourself:
http://docs.python.org/release/2.3.5/lib/socket-objects.html
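What "hardcode them ourselves" boils down to (the values match the constants
later Pythons define):

    # python 2.3's socket module has shutdown() but not these names:
    SHUT_RD = 0
    SHUT_WR = 1

    # later: sock.shutdown(SHUT_WR) instead of sock.shutdown(socket.SHUT_WR)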
Like python2.5 on Debian, which doesn't have os.closerange(). It might be a
MacOS extension or something. So much for the comment in subprocess.py that
said "keep this compatible with python 2.2."
EPIPE is a serious error from these places, so we have to actually do
something. Otherwise the client ends up busy waiting when the server
disconnects by surprise.
Bug noticed in a log from Chetan Kunte.
- If you tried to connect to a server that didn't exist, then disconnected
the client during the 60-second connection timeout, the server would
busy wait for 60 seconds.
- If you connected to a server and then sent data, but then the server
disconnected before reading all your data, the server would busy wait.
(example: yes | telnet servername 80)
Apparently left-shift in python 2.3 just *always* prints a warning, even if
we weren't doing anything wrong. Or maybe it only prints the warning
sometimes. Anyway, let's just multiply by 2**x instead of using <<x, since
this code isn't performance-sensitive.
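So _shl() is nothing more than (give or take the exact spelling):

    def _shl(n, bits):
        # Equivalent to n << bits, without python 2.3's left-shift warning.
        return n * int(2 ** bits)

    # e.g. _shl(1, 16) == 1 << 16 == 65536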
If your machine is a firewall/router, the IP forwarding setting affects
whether people behind the router can use your sshuttle connection - in the
same way that it affects whether they can route *anything* through you. And
thus, it should be set by the admin, not by sshuttle.
sshuttle works fine for the local user either way.
(This also affects MacOS since it's a BSD variant.)
It turns out 'established' doesn't work the way I expected it to from
iptables; it's not stateful. It just checks the TCP flags to see if the
connection *thinks* it's already established, and follows the rule if so.
That caused the first packet of each new connection to get sent to our
transproxy, but not the subsequent ones, so weird stuff happened.
With this change, any (matching) connection created *after* starting sshuttle
will get forwarded, but pre-existing ones - most importantly, sshuttle's own
ssh connection - will not.
And with this (plus the previous commit), sshuttle works on MacOS, including
10.6!
If you 'telnet localhost 12300' weird things happen; someday we should
probably auto-detect and avoid that altogether. But meanwhile, catch EPIPE
if it happens (it's irrelevant) and don't use a %d format for a value that
can apparently sometimes be None.
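Both halves of the workaround, roughly sketched (names are invented):

    import errno
    import socket

    def send_ignoring_epipe(sock, data):
        # A peer that already hung up isn't worth crashing over.
        try:
            sock.sendall(data)
        except socket.error as e:
            if e.args[0] != errno.EPIPE:
                raise

    # and for the log message: '%r' copes with None where '%d' would blow up
    #     debug('channel %r closed' % channel_id)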