...because stupid MacOS ipfw 'fwd' rules don't work quite right with UDP.
They can intercept packets bound for remote hosts, but they don't rewrite
the destination port to point at the new socket, so the packet gets dropped
by the local kernel anyway.
That is, a packet to 1.2.3.4:53 should be redirected to, say,
127.0.0.1:9999, the local DNS listener socket. But instead, it gets sent to
127.0.0.1:53, which nobody is listening on, so it gets eaten.
Sigh.
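For reference, the kind of fwd rule involved looks roughly like this (a
sketch only: the rule number, the 1.2.3.4 nameserver, and the 9999 redirect
port are placeholders, not sshuttle's actual values). On MacOS the kernel
matches the packet but delivers it to 127.0.0.1:53 instead of 127.0.0.1:9999:

    import subprocess

    # Redirect DNS queries for 1.2.3.4 to a local listener on port 9999.
    # MacOS accepts the rule but fails to rewrite the destination port.
    subprocess.check_call([
        'ipfw', 'add', '100', 'fwd', '127.0.0.1,9999',
        'udp', 'from', 'any', 'to', '1.2.3.4', '53',
    ])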
Limitations:
- uses a hardcoded DNS server IP on both client and server
- never expires request/response objects, so leaks memory and sockets
- works only with iptables, not with ipfw
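The iptables REDIRECT target does rewrite the port correctly; a hedged sketch
of the equivalent rule (the chain, addresses, and ports are placeholders, not
the exact rules sshuttle installs) looks like:

    import subprocess

    # Redirect locally generated DNS queries for 1.2.3.4 to port 9999 on
    # localhost; unlike ipfw fwd, the kernel rewrites the destination port.
    subprocess.check_call([
        'iptables', '-t', 'nat', '-A', 'OUTPUT',
        '-p', 'udp', '-d', '1.2.3.4/32', '--dport', '53',
        '-j', 'REDIRECT', '--to-ports', '9999',
    ])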
Tests with speedtest.net to a linode.com server:

                            Downstream     Upstream
    No sshuttle             1.25 Mbit/s    0.55 Mbit/s
    Default                 0.75 Mbit/s    0.51 Mbit/s
    --no-latency-control    1.25 Mbit/s    0.55 Mbit/s
* fullness:
man page for the --no-latency-control option.
options: remove unused 'exe' parameter
options.py: generate usage string correctly for no-* options.
Implement the optional fullness checking a bit more like I like it.
new option to disable fullness checking
The 'exe' parameter was added in the hope of using it for additional
contextual information in the help text that Options generates. It was
never actually used and was judged to be superfluous information.
Remove the 'exe' parameter from Options' constructor.
(copied from the 'bup' project)
Signed-off-by: Gabriel Filion <lelutin@gmail.com>
On high latency links, the PING/PONG round trip triggered by fullness
checking could kill the bandwidth. Disabling it could result in >10x
bandwidth increase in some setups where the existing latency is already high
and the available bandwidth is also high.
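As a rough sketch of the mechanism (illustrative names only, not sshuttle's
actual mux code): with latency control enabled, the sender stops queueing
bulk data after sending a PING until the matching PONG comes back, so every
burst of data costs at least one round trip.

    class Mux:
        def __init__(self, latency_control=True):
            self.latency_control = latency_control
            self.too_full = False          # set after a PING, cleared by PONG

        def check_fullness(self):
            # Called before queueing more bulk data.  On a high-latency link
            # this stalls the sender for a full round trip per check.
            if self.latency_control and not self.too_full:
                self.send_ping()
                self.too_full = True

        def got_pong(self):
            self.too_full = False

        def send_ping(self):
            pass  # would write a PING frame down the ssh channel

With --no-latency-control the check is skipped entirely, trading interactive
latency for throughput.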
ttl matching is only needed if your server is the same machine as the
client, which is kind of useless anyway (other than for testing), so there's
no reason for it to be fatal if that doesn't work.
Reported by "Alphazo" on the mailing list, who managed to get sshuttle
working on his Nokia N900 by removing the ttl stuff.
A cute little GUI for sshuttle on MacOS, written using pyobjc.
* macapp:
ui-macos: call the main binary MacOS/Sshuttle.
ui-macos/git-export.do: write the generated app to a branch.
ui-macos/default.app.do: get rid of some duplicated files.
ui-macos: Actually prompt for passwords instead of assuming a default.
ui-macos: Don't enable connecting for hosts with Custom but zero subnets.
ui-macos: Smoother log messages in the log window.
ui-macos: Notice when we've connected; make debug logs optional.
ui-macos: Much better connection status reporting.
ui-macos/run.do: a shortcut for running debug.app.
ui-macos/*: "a series of unfortunate events."
Just kidding. This is a squash of a whole bunch of unlabeled temporary
commits that I produced over the last couple of weeks while writing a UI
for MacOS while riding on airplanes and sitting in airports.
So long, batch of useless commits!
We were calling nowrite() as soon as the other end was shut_read, but we
didn't confirm first that the other end's read buffer had been totally
emptied.
This caused problems on twitter.com, at least, where they create a zillion
short-lived connections.
I don't actually need these lines at all, as it turns out, because we were
already calling nowrite() correctly on the other end after a buffer got
emptied.
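A minimal sketch of the intended behaviour, using plain sockets rather than
sshuttle's wrapper classes: bytes read from one side sit in a buffer until
they have all been written to the other side, and only then is the EOF
propagated.

    import socket

    # Hypothetical helper, not sshuttle's actual code: flush one direction of
    # a proxied connection.  'buf' is a bytearray of data read from the peer
    # that still needs delivering to dst; 'peer_eof' means the peer was
    # shut_read.
    def flush(dst, buf, peer_eof):
        if buf:
            n = dst.send(bytes(buf))
            del buf[:n]
        if peer_eof and not buf:
            # Only now is it safe to propagate the shutdown; doing it earlier
            # can drop the tail end of short-lived connections.
            dst.shutdown(socket.SHUT_WR)
            return True                 # this direction is finished
        return False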
* daemon:
daemonization: make sure the firewall subproc sends to syslog too.
Rearrange daemonization/syslog stuff and make it more resilient.
Add a --background option to run in the background (as a daemon).
Rename --background to -D/--daemon, to match other programs (like smbd).
You can now have --syslog even without --daemon.
Avoid using atexit(); try/finally is better.
Don't just close stderr; we'll end up eating error output from ssh!
Instead, redirect stderr to a 'logger' subprocess that will send to syslog.
Delay redirecting stderr until after we know we're daemonizing, so handy
error messages can go to stderr instead of syslog.
Make pidfile stuff more resilient: support already-existing files, files
with strict permissions, outdated files containing an already-dead pid. Add
a --pidfile option to let you specify the pidfile path.
chdir("/") while daemonizing, so that the filesystem containing $PWD can
later be unmounted without killing the daemon.
fw.done() can't wait on the firewall subprocess on exit when daemonized; we
no longer are the parent of that process.
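Taken together, the steps above follow the classic Unix daemonization
pattern; a rough sketch (the pidfile path and 'logger' tag are placeholders,
and sshuttle's real code is more careful about stale pidfiles and
permissions):

    import os, subprocess

    def daemonize(pidfile='/var/run/sshuttle.pid'):
        if os.fork():
            os._exit(0)            # parent returns to the shell
        os.setsid()
        if os.fork():
            os._exit(0)            # never re-acquire a controlling terminal
        os.chdir('/')              # so the original $PWD can be unmounted later
        with open(pidfile, 'w') as f:
            f.write('%d\n' % os.getpid())
        # Redirect stderr into a 'logger' subprocess instead of closing it,
        # so error output (including ssh's) still ends up in syslog.
        p = subprocess.Popen(['logger', '-t', 'sshuttle'],
                             stdin=subprocess.PIPE)
        os.dup2(p.stdin.fileno(), 2)
        null = os.open(os.devnull, os.O_RDWR)
        os.dup2(null, 0)
        os.dup2(null, 1)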
When the server disconnected, we were forwarding that information to the
client. But we weren't forwarding back the other way when the client
disconnected since there was no callback in place to do that.
Relatedly, when we failed entirely to connect to the server, we didn't notify the
client right away. Now we do.
Thanks to 'Roger' on the mailing list for pointing out these bugs.
Opening a new connection can fail if there are too many sockets already open.
If that happens, just throw away any connections that arrive in the meantime
instead of aborting completely.
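A minimal sketch of the idea, assuming the failure shows up as EMFILE/ENFILE
when accepting (names here are illustrative, not sshuttle's actual handler):

    import errno

    def accept_or_skip(listener):
        try:
            return listener.accept()
        except OSError as e:
            if e.errno in (errno.EMFILE, errno.ENFILE):
                return None        # out of file descriptors: drop this one
            raise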
Listening only on localhost by default avoids any possible problem caused by
other people on your network using you as a proxy. If you want to allow that,
you can force it back to the old way using the --listen option.
Thanks to 'tass' on github for reporting portscans that revealed this
potential security problem.
Add some cleverness for breaking infinite loops. Previously we'd only
detect it successfully if you connected to exactly the same IP as we were
listening on, but that was unreliable if we're listening on 0.0.0.0 and you
connected to one of the IP addresses we haven't heard of.
Now, if you try to connect to our listen port on *any* IP, we try binding to
that IP as a local socket; if it works, that's a local IP, and therefore
it's our socket, so reject the connection. If it doesn't work, it's a
remote IP, so forward it along.
Thanks to 'tass' on github for noticing the problem.
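A sketch of that check (illustrative, not the exact client code): binding a
throwaway socket to the destination IP succeeds only if the IP belongs to
this machine.

    import socket

    def ip_is_local(dstip):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        try:
            s.bind((dstip, 0))     # port 0: any free port will do
            return True            # bind worked, so it's one of our own IPs
        except OSError:
            return False           # not ours; safe to forward
        finally:
            s.close()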
(Note by apenwarr: I used Roger's original patch as the basis for this one,
but implemented it a different way. All errors are thus my fault, but Roger
gets the credit for actually tracking down the circular reference that
caused the memory leak.)
(Note by apenwarr: seems to still work for me. The reason the
problem occurred is that reassigning 'handlers' doesn't change it in its
parent; it creates a whole new list, and the caller still owns the old one
with all the dead sockets in it. The problem seems to have been introduced
in commit 84376284db when I factored the
runonce() functionality out of the client and server but didn't notice this
reassignment.)
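In other words (a generic illustration, not the actual runonce() code):
rebinding a parameter name inside a function never affects the caller's list,
while slice assignment mutates it in place.

    def prune_broken(handlers):
        # Builds a brand-new list; the caller keeps its old one, complete
        # with all the dead sockets.
        handlers = [h for h in handlers if h.ok]

    def prune_working(handlers):
        # Mutates the caller's list in place, so dead sockets really go away.
        handlers[:] = [h for h in handlers if h.ok]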
Supported sshuttle commands for IPv6:
./sshuttle -r "IPv6:addr" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr]" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr]:22" 0.0.0.0/0 -vv
Technically "invalid" address/port formats, but they can still be parsed because they’re unambiguous, so these also work:
./sshuttle -r "IPv6:addr]" 0.0.0.0/0 -vv
./sshuttle -r "IPv6:addr]:" 0.0.0.0/0 -vv
./sshuttle -r "IPv6:addr]:22" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr" 0.0.0.0/0 -vv
(If you have a Mac with Back To My Mac, use dns-sd to discover the remote host's IPv6 address:
dns-sd -G v4v6 <machine name>.<member name>.members.mac.com )
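A hedged sketch of how those remote-host strings can be disambiguated (this
is illustrative, not sshuttle's actual parser): anything after a ']' plus ':'
is the port, a lone ':' separates an IPv4 address or hostname from its port,
and brackets, balanced or not, are stripped from the address.

    def parse_hostport(s, default_port=22):
        host, port = s, default_port
        if ']' in s:
            host, _, rest = s.partition(']')
            if rest.startswith(':') and rest[1:]:
                port = int(rest[1:])
        elif ':' in s and s.count(':') == 1:
            host, _, p = s.partition(':')
            if p:
                port = int(p)
        return host.strip('[]'), port

For example, '[fe80::1]:22', 'fe80::1]:22', and 'fe80::1' all come back as
host 'fe80::1' with port 22.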