This version is a bit limited: it only ever connects back to itself, which
is always on 127.0.0.1. It also doesn't really find any problems, other
than odd behaviour when Linux runs out of available port numbers after a
while.
This makes it easier to actually test what happens when channel numbers wrap
around. The good news: it works.
However, I did find a bug where sshuttle would die if we completely ran out
of available channel numbers because so many of them were open. This would
never realistically happen at the default of 65535 channels (we'd run out of
file descriptors first), but it's still a bug, so let's handle it by just
dropping the connection when it happens.
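The handling is roughly like the sketch below: hand out channel numbers by
incrementing and wrapping around, and if every number is already in use,
refuse the new connection instead of crashing. The names (MAX_CHANNEL,
Mux.channels, accept_connection) are illustrative, not sshuttle's actual
code.

    MAX_CHANNEL = 65535

    class Mux:
        def __init__(self):
            self.chani = 0
            self.channels = {}      # channel id -> per-connection state

        def next_channel(self):
            # Try every id once, wrapping around; give up if all are in use.
            for _ in range(MAX_CHANNEL):
                self.chani += 1
                if self.chani > MAX_CHANNEL:
                    self.chani = 1
                if self.chani not in self.channels:
                    return self.chani
            return None             # completely out of channel numbers

    def accept_connection(mux, sock):
        chan = mux.next_channel()
        if chan is None:
            sock.close()            # out of channels: drop it, don't die
            return None
        mux.channels[chan] = sock
        return chan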
* dns:
dns on MacOS: use divert sockets instead of 'fwd' rules.
client.py: do DNS listener on the same port as the TCP listener.
Move client._islocal() to helpers.islocal() in preparation for sharing.
dns: add support for MacOS (but it doesn't work...)
Oops, dns_done() crashed if the request had already been timed out.
dns: trim DNS channel handlers after a response, or after a timeout.
dns: extract 'nameserver' lines from /etc/resolv.conf
Extremely basic, but functional, DNS proxying support (--dns option)
It turns out diverting UDP sockets is pretty easy compared to TCP (which
makes it all the more embarrassing that they screwed up 'fwd' support for
UDP and not TCP, but oh well). So let's use divert sockets instead of
transproxy for our DNS packets.
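For reference, a minimal sketch of reading diverted UDP packets on
MacOS/BSD. IPPROTO_DIVERT isn't exposed by Python's socket module, so the
value 254 is hardcoded here, and the divert port and ipfw rule are only
illustrative.

    import socket

    IPPROTO_DIVERT = 254    # BSD/MacOS value; not in the socket module
    DIVERT_PORT = 9999      # must match an ipfw rule along the lines of:
                            #   ipfw add divert 9999 udp from any to any 53

    # Divert sockets are raw sockets, so this needs root -- which is why it
    # has to live in firewall.py.
    sock = socket.socket(socket.AF_INET, socket.SOCK_RAW, IPPROTO_DIVERT)
    sock.bind(('0.0.0.0', DIVERT_PORT))

    while True:
        # recvfrom() returns the whole IP packet plus the address needed to
        # reinject it; sendto() with that same address puts it back on the
        # wire after we've had a look at it.
        packet, addr = sock.recvfrom(65536)
        sock.sendto(packet, addr)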
This is a little tricky because we have to do it all in firewall.py, since
divert sockets require root access, and only firewall.py has root access.
UDP and TCP have separate port namespaces, so to make it easier to keep
track of what's going on, just use the same transproxy port number for both.
We still need two sockets, but now tcpdumps are easier to understand.
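A quick sketch of the shared port: the port number below is made up, but
binding both listeners to it is fine because TCP and UDP don't conflict.

    import socket

    PORT = 12300   # illustrative transproxy port, shared by both listeners

    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    tcp.bind(('127.0.0.1', PORT))
    tcp.listen(10)

    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.bind(('127.0.0.1', PORT))  # no clash: separate port namespace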
...because stupid MacOS ipfw 'fwd' rules don't work quite right with UDP.
They can intercept packets bound for remote hosts, but they don't correctly
rewrite the destination port from the original to the new socket's port, so
the packet gets dropped by the local kernel anyway.
That is, a packet to 1.2.3.4:53 should be redirected to, say,
127.0.0.1:9999, the local DNS listener socket. But instead, it gets sent to
127.0.0.1:53, which nobody is listening on, so it gets eaten.
Sigh.
Limitations:
- uses a hardcoded DNS server IP on both client and server
- never expires request/response objects, so leaks memory and sockets
- works only with iptables, not with ipfw
Tests with speedtest.net to a linode.com server:

                            Downstream    Upstream
    No sshuttle             1.25 Mbit/s   0.55 Mbit/s
    Default                 0.75 Mbit/s   0.51 Mbit/s
    --no-latency-control    1.25 Mbit/s   0.55 Mbit/s
* fullness:
man page for the --no-latency-control option.
options: remove unused 'exe' parameter
options.py: generate usage string correctly for no-* options.
Implement the optional fullness checking a bit more like I like it.
new option to disable fullness checking
The 'exe' parameter was added in the hope of using it for additional
contextual information in the help text that Options generates. It has
since been abandoned and was judged to be superfluous information.
Remove the 'exe' parameter from Options' constructor.
(copied from the 'bup' project)
Signed-off-by: Gabriel Filion <lelutin@gmail.com>
On high latency links, the PING/PONG round trip triggered by fullness
checking could kill the bandwidth. Disabling it could result in >10x
bandwidth increase in some setups where the existing latency is already high
and the available bandwidth is also high.
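Conceptually the check works something like the sketch below (a simplified
illustration, not sshuttle's actual ssnet/mux code): after flushing a burst
of data, send a PING and stop queuing more until the matching PONG comes
back; --no-latency-control just skips the gate.

    class Mux:
        def __init__(self, latency_control=True):
            self.latency_control = latency_control
            self.too_full = False

        def check_fullness(self):
            # Called after flushing a burst of data toward the other end.
            if self.latency_control and not self.too_full:
                self.send_ping()
                self.too_full = True   # wait for the round trip to finish

        def got_pong(self):
            self.too_full = False      # PONG arrived; safe to send more

        def can_send_more(self):
            # With --no-latency-control this is always True.
            return not self.too_full

        def send_ping(self):
            pass  # would write a PING control packet over the ssh channel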
ttl matching is only needed if your server is the same machine as the
client, which is kind of useless anyway (other than for testing), so there's
no reason for it to be fatal if that doesn't work.
Reported by "Alphazo" on the mailing list, who managed to get sshuttle
working on his Nokia N900 by removing the ttl stuff.
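Making it non-fatal amounts to wrapping the rule setup in a try/except, in
the spirit of the sketch below. The exact iptables arguments (especially
the magic TTL value) are illustrative, not necessarily what sshuttle
installs.

    import subprocess

    def ipt(*args):
        argv = ['iptables', '-t', 'nat'] + list(args)
        rv = subprocess.call(argv)
        if rv:
            raise Exception('%r returned %d' % (argv, rv))

    def ipt_ttl(*args):
        # TTL matching only matters when the client and server are the same
        # machine; if the kernel or iptables build lacks the ttl module,
        # warn and install the rule without it instead of dying.
        try:
            ipt(*(list(args) + ['-m', 'ttl', '!', '--ttl', '42']))
        except Exception:
            print('warning: iptables TTL matching unavailable; '
                  'running the client and server on one machine may loop.')
            ipt(*args)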
A cute little GUI for sshuttle on MacOS, written using pyobjc.
* macapp:
ui-macos: call the main binary MacOS/Sshuttle.
ui-macos/git-export.do: write the generated app to a branch.
ui-macos/default.app.do: get rid of some duplicated files.
ui-macos: Actually prompt for passwords instead of assuming a default.
ui-macos: Don't enable connecting for hosts with Custom but zero subnets.
ui-macos: Smoother log messages in the log window.
ui-macos: Notice when we've connected; make debug logs optional.
ui-macos: Much better connection status reporting.
ui-macos/run.do: a shortcut for running debug.app.
ui-macos/*: "a series of unfortunate events."
Just kidding. This is a squash of a whole bunch of unlabeled temporary
commits that I produced over the last couple of weeks while writing a UI
for MacOS while riding on airplanes and sitting in airports.
So long, batch of useless commits!
We were doing it as soon as the other end was shut_read, but we didn't
confirm first that the other end's read buffer had been totally emptied.
This caused problems on twitter.com, at least, where they create a zillion
short-lived connections.
I don't actually need these lines at all, as it turns out, because we were
already calling nowrite() correctly on the other end after a buffer got
emptied.
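The rule of thumb the fix enforces: don't half-close the write side toward
one peer until the buffered data from the other peer has been completely
flushed. A simplified sketch (not sshuttle's SockWrapper code):

    import socket

    class Pipe:
        # One direction of a proxied connection: read from src, write to dst.
        def __init__(self, src, dst):
            self.src = src
            self.dst = dst
            self.buf = b''
            self.src_eof = False

        def fill(self):
            data = self.src.recv(65536)
            if data:
                self.buf += data
            else:
                self.src_eof = True   # peer shut down its write side

        def flush(self):
            if self.buf:
                n = self.dst.send(self.buf)
                self.buf = self.buf[n:]
            # Only propagate the shutdown once we've seen EOF *and* drained
            # the buffer; doing it earlier truncates short-lived connections.
            if self.src_eof and not self.buf:
                self.dst.shutdown(socket.SHUT_WR)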
* daemon:
daemonization: make sure the firewall subproc sends to syslog too.
Rearrange daemonization/syslog stuff and make it more resilient.
run in background (daemon) and option
Rename --background to -D/--daemon, to match other programs (like smbd).
You can now have --syslog even without --daemon.
Avoid using atexit(); try/finally is better.
Don't just close stderr; we'll end up eating error output from ssh!
Instead, redirect stderr to a 'logger' subprocess that will send to syslog.
Delay redirecting stderr until after we know we're daemonizing, so handy
error messages can go to stderr instead of syslog.
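The redirection itself is small; a sketch, with an illustrative syslog tag
and priority:

    import os
    import subprocess
    import sys

    def stderr_to_syslog():
        p = subprocess.Popen(['logger', '-p', 'daemon.notice',
                              '-t', 'sshuttle'],
                             stdin=subprocess.PIPE)
        sys.stderr.flush()
        # Point fd 2 at logger's stdin so anything written to stderr
        # (including error output from ssh) ends up in syslog instead of
        # vanishing.
        os.dup2(p.stdin.fileno(), 2)
        return p   # caller keeps the handle so the child isn't reaped early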
Make pidfile stuff more resilient: support already-existing files, files
with strict permissions, outdated files containing an already-dead pid. Add
a --pidfile option to let you specify the pidfile path.
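The stale-pid check boils down to "does that process still exist?". A
sketch, with an illustrative helper name:

    import errno
    import os

    def pidfile_in_use(path):
        # Returns True if the pidfile points at a process that's still alive.
        try:
            with open(path) as f:
                oldpid = int(f.read().strip())
        except IOError as e:
            if e.errno == errno.ENOENT:
                return False    # no pidfile: nothing running
            raise               # propagate unexpected errors
        except ValueError:
            return False        # garbage contents: treat as stale
        try:
            os.kill(oldpid, 0)  # signal 0: existence check only
        except OSError as e:
            if e.errno == errno.ESRCH:
                return False    # pid is dead: the pidfile is outdated
            if e.errno == errno.EPERM:
                return True     # alive, but owned by someone else
            raise
        return True             # still running; refuse to start another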
chdir("/") while daemonizing, so that the filesystem containing $PWD can
later be unmounted without killing the daemon.
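For reference, the classic double-fork daemonization with the chdir('/')
step included; a simplified sketch.

    import os

    def daemonize():
        if os.fork():
            os._exit(0)     # parent exits back to the shell immediately
        os.setsid()         # become a session leader, detach from the tty
        if os.fork():
            os._exit(0)     # second fork: we can never reacquire a tty
        # chdir('/') so the filesystem containing $PWD can be unmounted
        # later without taking the daemon down with it.
        os.chdir('/')
        os.umask(0o022)
        # Detach stdin/stdout; stderr is handled separately by the logger
        # redirection above.
        devnull = os.open(os.devnull, os.O_RDWR)
        os.dup2(devnull, 0)
        os.dup2(devnull, 1)
        os.close(devnull)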
fw.done() can't wait on the firewall subprocess on exit when daemonized; we
are no longer the parent of that process.
When the server disconnected, we were forwarding that information to the
client. But we weren't forwarding back the other way when the client
disconnected since there was no callback in place to do that.
Relatedly, when we failed entirely to connect to the server, we didn't notify the
client right away. Now we do.
Thanks to 'Roger' on the mailing list for pointing out these bugs.
It can happen if there are too many sockets open. If that happens, just
throw away any connections that arrive in the meantime instead of aborting
completely.
This avoids any possible problem caused by other people on your network
using you as a proxy. If you want to allow this, you can force it back to
the old way using the --listen option.
Thanks to 'tass' on github for reporting portscans that revealed this
potential security problem.
Add some cleverness for breaking infinite loops. Previously we'd only
detect it successfully if you connected to exactly the same IP as we were
listening on, but that was unreliable if we're listening on 0.0.0.0 and you
connected to one of the IP addresses we haven't heard of.
Now, if you try to connect to our listen port on *any* IP, we try binding to
that IP as a local socket; if it works, that's a local IP, and therefore
it's our socket, so reject the connection. If it doesn't work, it's a
remote IP, so forward it along.
Thanks to 'tass' on github for noticing the problem.
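The bind test is just a few lines; a sketch using the islocal() name
mentioned above (the rest of the details are illustrative):

    import errno
    import socket

    def islocal(ip):
        # If we can bind to this address, it belongs to one of our own
        # interfaces, so a connection to our listen port on it is really
        # aimed at us: reject it to break the loop.  If bind() fails with
        # EADDRNOTAVAIL, it's a genuinely remote address: forward it along.
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            try:
                sock.bind((ip, 0))
            except socket.error as e:
                if e.errno == errno.EADDRNOTAVAIL:
                    return False
                raise
        finally:
            sock.close()
        return True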