Compare commits

17 Commits

SHA1 Message Date
7bd0efd57b Oops, broke --noserver mode at some point. 2010-05-04 18:40:21 -04:00
8173925bcd ssh.py: allow hostnames of the form hostname:port
Feature requested by Wayne Scott and Ed Maste.
2010-05-04 13:08:38 -04:00
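
Example (illustration): with this change a nonstandard ssh port can be
given inline, as in

    ./sshuttle -r username@sshserver:2222 0.0.0.0/0

and, per the ssh.py diff below, the ":2222" suffix simply becomes
"ssh -p 2222" on the generated command line.
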
0cdd72c830 README: clarify that the server doesn't need Linux or iptables. 2010-05-03 16:35:35 -04:00
403a088e07 README: add information about which iptables modules are needed.
As requested by a user.
2010-05-03 13:08:26 -04:00
4a93d3362d README: fix some formatting for easier text-mode readability.
It looked okay in markdown, but some of the text lines were a bit too long.
2010-05-03 13:07:30 -04:00
33a73056ee README: update to use real markdown-style headings.
Oops, got those mixed up with some random other markup format.
2010-05-02 21:46:51 -04:00
4a462258f5 ssh.py: support finding sshuttle in "$HOME/.../sshuttle"
If you ran sshuttle from /home/apenwarr/sshuttle/sshuttle, we would
automatically add /home/apenwarr/sshuttle to the PATH before trying to
execute sshuttle on the remote machine.  That way, if you install it in the
same place on two computers, the client would still be able to start the
server.

Someone reported, though, that if they installed the client in
/home/apenwarr/sshuttle/sshuttle, and the server in /root/sshuttle/sshuttle,
then used "-r root@servername", it wasn't able to find the program.

Similar problems would happen if you're apenwarr at home and averyp at work.

So what we now do is add *two* directories to the PATH:
/home/apenwarr/sshuttle and $HOME/sshuttle, where $HOME is the value of
$HOME on the *server*, not the client.  So it'll find it in either place.
2010-05-02 21:24:31 -04:00
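
Illustration (pieced together from the ssh.py diff below): with the client
installed in /home/apenwarr/sshuttle, the generated remote command now
looks roughly like

    sh -c PATH=/home/apenwarr/sshuttle:'$HOME'/sshuttle:'$PATH exec sshuttle --server'

leaving '$HOME' for the *server's* shell to expand.
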
a5fc93c841 iptables: if client dies before sending GO, just quit.
If the server was having trouble starting, we would print a lot of
unnecessary stuff from iptables.  We shouldn't even have bothered *starting*
iptables if the server was dead anyway.
2010-05-02 21:06:31 -04:00
ea6bb5c255 iptables: die quietly if parent process dies.
If we can't communicate with the parent process, he's probably died
unexpectedly; just shut down and don't bother people about it.
2010-05-02 21:02:09 -04:00
2c2bea80bc iptables: try launching with sudo, then su, then directly.
Previous versions depended on having 'sudo' in your PATH.  Now that we can
feel safe that --iptables will clean up properly when you exit, and it
doesn't need to authenticate twice, the advantages of sudo aren't strictly
needed.  Good old 'su' is a reasonable fallback - and everybody has it,
which is nice.

Unfortunately su doesn't let you redirect stdin, so I had to play a stupid
fd trick to make it work.
2010-05-02 20:54:10 -04:00
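
Below is a standalone sketch of that fd trick (Python 3 for brevity; the
child program is hypothetical, standing in for the su/iptables helper).
One end of a socketpair is attached to the child's *stdout only*, leaving
stdin on the tty so 'su' can still prompt for a password; the child then
clones stdout into stdin with os.dup2(1, 0), so the single socket carries
traffic in both directions:

    import socket, subprocess, sys

    child_src = r"""
    import os, sys
    os.dup2(1, 0)                # stdout is the socket; make stdin match it
    sys.stdout.write('READY\n')
    sys.stdout.flush()
    line = sys.stdin.readline()  # reads the parent's reply off the socket
    sys.stdout.write('GOT %s' % line)
    """

    s1, s2 = socket.socketpair()
    p = subprocess.Popen([sys.executable, '-c', child_src], stdout=s1)
    s1.close()                   # the child keeps its own copy
    f = s2.makefile('rwb', buffering=0)
    assert f.readline() == b'READY\n'
    f.write(b'GO\n')             # talk back over the child's stdout
    print(f.readline())          # -> b'GOT GO\n'
    p.wait()
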
7d674e9e37 ssnet: throw a nicer-looking Fatal when the mux connection dies.
When it happens, it's probably because the client died and the server hasn't
noticed yet.
2010-05-02 20:53:29 -04:00
a21e8c7a3c iptables: more resilient startup/cleanup.
Now the sudo iptables subprocess persists for the entire life of sshuttle.
The benefits of this are:

- no need to authenticate again at shutdown (failure of which could cause us
  to not clean up iptables)

- if the main process dies unexpectedly, iptables still gets cleaned up

- the password prompt can happen *before* starting the ssh/server process,
  which means it'll stand out and the password prompt won't be overwritten.
2010-05-02 19:29:03 -04:00
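
Pieced together from the diffs below, the client and the persistent
privileged subprocess now speak a tiny line-oriented protocol:

    helper -> client:  'READY\n'    (sudo/su succeeded; the prompt is done)
    client -> helper:  'GO\n'       (sent only once ssh is up)
    helper -> client:  'STARTED\n'  (transproxy rules are now in place)
    client closes the pipe          (helper tears down its chains and exits)
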
ca14231aae Whoops, we were spinning in the server when the mux outbuf was full.
Once again, the buffering gets mixed up with the selecting.  Seems to be the
story of my life.
2010-05-02 06:17:43 -04:00
6c2dc54b9e More latency fixes: use a PING/PONG pair to limit queue length.
It seems ssh is kind of stupid and uses a really big SO_SNDBUF (hundreds of
kbytes).  Thus, we can't depend on the socket's output buffer to limit our
latency down to something reasonable.  Instead, limit the amount of data we
can send in a single round trip.
2010-05-02 05:39:17 -04:00
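
A distilled sketch of that mechanism (the real logic lives in Mux in the
ssnet.py diff below; the class and method names here are illustrative
rather than sshuttle's API, though 32768 is the threshold the diff uses):

    class FlowGate:
        LIMIT = 32768              # bytes queued before demanding proof of life

        def __init__(self, send_ping):
            self.fullness = 0      # bytes queued since the last PONG
            self.too_full = False
            self.send_ping = send_ping

        def queued(self, nbytes):
            self.fullness += nbytes

        def check(self):
            if self.fullness > self.LIMIT and not self.too_full:
                self.too_full = True
                self.send_ping()   # the far end echoes a PONG when this arrives

        def got_pong(self):
            self.fullness = 0      # one full round trip has drained the queue
            self.too_full = False

        def may_send(self):
            return not self.too_full  # writers back off while a PING is in flight
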
5d1390927d Don't overfill the mux's output buffer.
Otherwise a single busy stream can ruin it for everybody.
2010-05-02 05:06:51 -04:00
da06286427 ssnet: when a connection error occurs, stop trying to connect. 2010-05-02 03:52:46 -04:00
616d0680d1 Add a README file based on my blog entry. 2010-05-02 03:42:59 -04:00
7 changed files with 364 additions and 52 deletions

README.md (new file, +152)

@@ -0,0 +1,152 @@

sshuttle: where transparent proxy meets VPN meets ssh
=====================================================

I just spent an afternoon working on a new kind of VPN. You can get
the first release, <a href="http://github.com/apenwarr/sshuttle">sshuttle
0.10, on github</a>.

As far as I know, sshuttle is the only program that solves the following
common case:

- Your client machine (or router) is Linux.

- You have access to a remote network via ssh.

- You don't necessarily have admin access on the remote network.

- The remote network has no VPN, or only stupid/complex VPN
  protocols (IPsec, PPTP, etc). Or maybe you <i>are</i> the
  admin and you just got frustrated with the awful state of
  VPN tools.

- You don't want to create an ssh port forward for every
  single host/port on the remote network.

- You hate openssh's port forwarding because it's randomly
  slow and/or stupid.

- You can't use openssh's PermitTunnel feature because
  it's disabled by default on openssh servers; plus it does
  TCP-over-TCP, which has terrible performance (see below).

Prerequisites
-------------

- sudo, su, or logged in as root on your client machine.
  (The server doesn't need admin access.)

- Linux+iptables on your client machine, including at
  least the iptables DNAT, REDIRECT, and ttl modules.
  This is available by default on most Linux distributions.
  (The server doesn't need iptables and doesn't need to be
  Linux.)

This is how you use it:
-----------------------

- <tt>git clone git://github.com/apenwarr/sshuttle</tt>
  on your client and server machines. The server can be
  any ssh server with python available; the client must
  be Linux with iptables, and you'll need root or sudo
  access.

- <tt>./sshuttle -r username@sshserver 0.0.0.0/0 -vv</tt>

That's it! Now your local machine can access the remote network as if you
were right there! And if your "client" machine is a router, everyone on
your local network can make connections to your remote network.

This creates a transparent proxy server on your local machine for all IP
addresses that match 0.0.0.0/0. (You can use more specific IP addresses if
you want; use any number of IP addresses or subnets to change which
addresses get proxied. Using 0.0.0.0/0 proxies <i>everything</i>, which is
interesting if you don't trust the people on your local network.)

Any TCP session you initiate to one of the proxied IP addresses will be
captured by sshuttle and sent over an ssh session to the remote copy of
sshuttle, which will then regenerate the connection on that end, and funnel
the data back and forth through ssh.

Fun, right? A poor man's instant VPN, and you don't even have to have
admin access on the server.

Theory of Operation
-------------------

sshuttle is not exactly a VPN, and not exactly port forwarding. It's kind
of both, and kind of neither.

It's like a VPN, since it can forward every port on an entire network, not
just ports you specify. Conveniently, it lets you use the "real" IP
addresses of each host rather than faking port numbers on localhost.

On the other hand, the way it *works* is more like ssh port forwarding than
a VPN. Normally, a VPN forwards your data one packet at a time, and
doesn't care about individual connections; i.e. it's "stateless" with
respect to the traffic. sshuttle is the opposite of stateless; it tracks
every single connection.

You could compare sshuttle to something like the old <a
href="http://en.wikipedia.org/wiki/Slirp">Slirp</a> program, which was a
userspace TCP/IP implementation that did something similar. But it
operated on a packet-by-packet basis on the client side, reassembling the
packets on the server side. That worked okay back in the "real live serial
port" days, because serial ports had predictable latency and buffering.

But you can't safely just forward TCP packets over a TCP session (like ssh),
because TCP's performance depends fundamentally on packet loss; it
<i>must</i> experience packet loss in order to know when to slow down! At
the same time, the outer TCP session (ssh, in this case) is a reliable
transport, which means that what you forward through the tunnel <i>never</i>
experiences packet loss. The ssh session itself experiences packet loss, of
course, but TCP fixes it up and ssh (and thus you) never know the
difference. But neither does your inner TCP session, and extremely screwy
performance ensues.

sshuttle assembles the TCP stream locally, multiplexes it statefully over
an ssh session, and disassembles it back into packets at the other end. So
it never ends up doing TCP-over-TCP. It's just data-over-TCP, which is
safe.

Useless Trivia
--------------

Back in 1998 (12 years ago! Yikes!), I released the first version of <a
href="http://alumnit.ca/wiki/?TunnelVisionReadMe">Tunnel Vision</a>, a
semi-intelligent VPN client for Linux. Unfortunately, I made two big mistakes:
I implemented the key exchange myself (oops), and I ended up doing
TCP-over-TCP (double oops). The resulting program worked okay - and people
used it for years - but the performance was always a bit funny. And nobody
ever found any security flaws in my key exchange, either, but that doesn't
mean anything. :)

The same year, dcoombs and I also released Fast Forward, a proxy server
supporting transparent proxying. Among other things, we used it for
automatically splitting traffic across more than one Internet connection (a
tool we called "Double Vision").

I was still in university at the time. A couple years after that, one of my
professors was working with some graduate students on the technology that
would eventually become <a href="http://www.slipstream.com/">Slipstream
Internet Acceleration</a>. He asked me to do a contract for him to build an
initial prototype of a transparent proxy server for mobile networks. The
idea was similar to sshuttle: if you reassemble and then disassemble the TCP
packets, you can reduce latency and improve performance vs. just forwarding
the packets over a plain VPN or mobile network. (It's unlikely that any of
my code has persisted in the Slipstream product today, but the concept is
still pretty cool. I'm still horrified that people use plain TCP on
complex mobile networks with crazily variable latency, for which it was
never really intended.)

That project I did for Slipstream was what first gave me the idea to merge
the concepts of Fast Forward, Double Vision, and Tunnel Vision into a single
program that was the best of all worlds. And here we are, at last, 10 years
later. You're welcome.

--
Avery Pennarun <apenwarr@gmail.com>

@@ -13,17 +13,68 @@ def original_dst(sock):
     return (ip,port)
 
-def iptables_setup(port, subnets):
-    subnets_str = ['%s/%d' % (ip,width) for ip,width in subnets]
-    argv = (['sudo', sys.argv[0]] +
-            ['-v'] * (helpers.verbose or 0) +
-            ['--iptables', str(port)] + subnets_str)
-    rv = subprocess.call(argv)
-    if rv != 0:
-        raise Fatal('%r returned %d' % (argv, rv))
+class IPTables:
+    def __init__(self, port, subnets):
+        self.port = port
+        self.subnets = subnets
+        subnets_str = ['%s/%d' % (ip,width) for ip,width in subnets]
+        argvbase = ([sys.argv[0]] +
+                    ['-v'] * (helpers.verbose or 0) +
+                    ['--iptables', str(port)] + subnets_str)
+        argv_tries = [
+            ['sudo'] + argvbase,
+            ['su', '-c', ' '.join(argvbase)],
+            argvbase
+        ]
+
+        # we can't use stdin/stdout=subprocess.PIPE here, as we normally would,
+        # because stupid Linux 'su' requires that stdin be attached to a tty.
+        # Instead, attach a *bidirectional* socket to its stdout, and use
+        # that for talking in both directions.
+        (s1,s2) = socket.socketpair()
+        def setup():
+            # run in the child process
+            s2.close()
+        e = None
+        for argv in argv_tries:
+            try:
+                self.p = subprocess.Popen(argv, stdout=s1, preexec_fn=setup)
+                e = None
+                break
+            except OSError, e:
+                pass
+        self.argv = argv
+        s1.close()
+        self.pfile = s2.makefile('wb+')
+        if e:
+            log('Spawning iptables: %r\n' % self.argv)
+            raise Fatal(e)
+        line = self.pfile.readline()
+        self.check()
+        if line != 'READY\n':
+            raise Fatal('%r expected READY, got %r' % (self.argv, line))
+
+    def check(self):
+        rv = self.p.poll()
+        if rv:
+            raise Fatal('%r returned %d' % (self.argv, rv))
+
+    def start(self):
+        self.pfile.write('GO\n')
+        self.pfile.flush()
+        line = self.pfile.readline()
+        self.check()
+        if line != 'STARTED\n':
+            raise Fatal('%r expected STARTED, got %r' % (self.argv, line))
+
+    def done(self):
+        self.pfile.close()
+        rv = self.p.wait()
+        if rv:
+            raise Fatal('cleanup: %r returned %d' % (self.argv, rv))
 
-def _main(listener, listenport, use_server, remotename, subnets):
+def _main(listener, ipt, use_server, remotename):
     handlers = []
     if use_server:
         if helpers.verbose >= 1:
@@ -47,7 +98,7 @@ def _main(listener, listenport, use_server, remotename, subnets):
 
     # we definitely want to do this *after* starting ssh, or we might end
     # up intercepting the ssh connection!
-    iptables_setup(listenport, subnets)
+    ipt.start()
 
     def onaccept():
         sock,srcip = listener.accept()
@@ -87,6 +138,9 @@ def _main(listener, listenport, use_server, remotename, subnets):
         for s in handlers:
             if s.socks & ready:
                 s.callback()
+        if use_server:
+            mux.callback()
+            mux.check_fullness()
 
 def main(listenip, use_server, remotename, subnets):
@@ -116,7 +170,9 @@ def main(listenip, use_server, remotename, subnets):
     listenip = listener.getsockname()
     debug1('Listening on %r.\n' % (listenip,))
 
+    ipt = IPTables(listenip[1], subnets)
+
     try:
-        return _main(listener, listenip[1], use_server, remotename, subnets)
+        return _main(listener, ipt, use_server, remotename)
     finally:
-        iptables_setup(listenip[1], [])
+        ipt.done()

@@ -11,7 +11,7 @@ def chain_exists(name):
             return True
     rv = p.wait()
     if rv:
-        raise Exception('%r returned %d' % (argv, rv))
+        raise Fatal('%r returned %d' % (argv, rv))
 
 def ipt(*args):
@@ -19,27 +19,10 @@ def ipt(*args):
     debug1('>> %s\n' % ' '.join(argv))
     rv = subprocess.call(argv)
     if rv:
-        raise Exception('%r returned %d' % (argv, rv))
+        raise Fatal('%r returned %d' % (argv, rv))
 
-# This is some iptables voodoo for setting up the Linux kernel's transparent
-# proxying stuff.  If subnets is empty, we just delete our sshuttle chain;
-# otherwise we delete it, then make it from scratch.
-#
-# We name the chain based on the transproxy port number so that it's possible
-# to run multiple copies of sshuttle at the same time.  Of course, the
-# multiple copies shouldn't have overlapping subnets, or only the most-
-# recently-started one will win (because we use "-I OUTPUT 1" instead of
-# "-A OUTPUT").
-#
-# sshuttle is supposed to clean up after itself by deleting extra chains on
-# exit.  In case that fails, it's not the end of the world; future runs will
-# supercede it in the transproxy list, at least, so the leftover iptables
-# chains are mostly harmless.
-def main(port, subnets):
-    assert(port > 0)
-    assert(port <= 65535)
+def do_it(port, subnets):
     chain = 'sshuttle-%s' % port
 
     # basic cleanup/setup of chains
@@ -63,4 +46,72 @@ def main(port, subnets):
                 '--to-ports', str(port),
                 '-m', 'ttl', '!', '--ttl', '42' # to prevent infinite loops
                 )
-    subnets_str = ['%s/%d' % (ip,width) for ip,width in subnets]
+
+
+# This is some iptables voodoo for setting up the Linux kernel's transparent
+# proxying stuff.  If subnets is empty, we just delete our sshuttle chain;
+# otherwise we delete it, then make it from scratch.
+#
+# We name the chain based on the transproxy port number so that it's possible
+# to run multiple copies of sshuttle at the same time.  Of course, the
+# multiple copies shouldn't have overlapping subnets, or only the most-
+# recently-started one will win (because we use "-I OUTPUT 1" instead of
+# "-A OUTPUT").
+#
+# This code is supposed to clean up after itself by deleting extra chains on
+# exit.  In case that fails, it's not the end of the world; future runs will
+# supercede it in the transproxy list, at least, so the leftover iptables
+# chains are mostly harmless.
+def main(port, subnets):
+    assert(port > 0)
+    assert(port <= 65535)
+
+    if os.getuid() != 0:
+        raise Fatal('you must be root (or enable su/sudo) to set up iptables')
+
+    # because of limitations of the 'su' command, the *real* stdin/stdout
+    # are both attached to stdout initially.  Clone stdout into stdin so we
+    # can read from it.
+    os.dup2(1, 0)
+
+    debug1('iptables manager ready.\n')
+    sys.stdout.write('READY\n')
+    sys.stdout.flush()
+
+    # ctrl-c shouldn't be passed along to me.  When the main sshuttle dies,
+    # I'll die automatically.
+    os.setsid()
+
+    # we wait until we get some input before creating the rules.  That way,
+    # sshuttle can launch us as early as possible (and get sudo password
+    # authentication as early in the startup process as possible).
+    line = sys.stdin.readline(128)
+    if not line:
+        return  # parent died; nothing to do
+    if line != 'GO\n':
+        raise Fatal('iptables: expected GO but got %r' % line)
+
+    try:
+        if line:
+            debug1('iptables manager: starting transproxy.\n')
+            do_it(port, subnets)
+            sys.stdout.write('STARTED\n')
+
+        try:
+            sys.stdout.flush()
+
+            # Now we wait until EOF or any other kind of exception.  We need
+            # to stay running so that we don't need a *second* password
+            # authentication at shutdown time - that cleanup is important!
+            while sys.stdin.readline(128):
+                pass
+        except IOError:
+            # the parent process died for some reason; he's surely been loud
+            # enough, so no reason to report another error
+            return
+    finally:
+        try:
+            debug1('iptables manager: undoing changes.\n')
+        except:
+            pass
+        do_it(port, [])
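
Concretely, for the default listen port 12300 and subnet 0.0.0.0/0, the
ipt() calls in do_it() boil down to roughly the following rule (an
illustration; the chain-creation calls sit above this hunk):

    iptables -t nat -A sshuttle-12300 -j REDIRECT --dest 0.0.0.0/0 \
        -p tcp --to-ports 12300 -m ttl '!' --ttl 42

The ttl match is the loop-prevention trick flagged in the comment above:
connections sshuttle originates itself are given a TTL of 42 so they skip
the redirect rule instead of being captured again.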

@@ -45,7 +45,7 @@ def parse_ipport(s):
 
 optspec = """
-sshuttle [-l [ip:]port] [-r [username@]sshserver] <subnets...>
+sshuttle [-l [ip:]port] [-r [username@]sshserver[:port]] <subnets...>
 sshuttle --iptables <port> <subnets...>
 sshuttle --server
 --
@@ -83,5 +83,6 @@ except Fatal, e:
     log('fatal: %s\n' % e)
     sys.exit(99)
 except KeyboardInterrupt:
-    log('\nKeyboard interrupt: exiting.\n')
+    log('\n')
+    log('Keyboard interrupt: exiting.\n')
     sys.exit(1)

@@ -35,11 +35,15 @@ def main():
         handlers = filter(lambda s: s.ok, handlers)
         for s in handlers:
             s.pre_select(r,w,x)
-        debug2('Waiting: %d[%d,%d,%d]...\n'
-               % (len(handlers), len(r), len(w), len(x)))
+        debug2('Waiting: %d[%d,%d,%d] (fullness=%d/%d)...\n'
+               % (len(handlers), len(r), len(w), len(x),
+                  mux.fullness, mux.too_full))
         (r,w,x) = select.select(r,w,x)
         #log('r=%r w=%r x=%r\n' % (r,w,x))
         ready = set(r) | set(w) | set(x)
         for s in handlers:
+            #debug2('check: %r: %r\n' % (s, s.socks & ready))
             if s.socks & ready:
                 s.callback()
+        mux.check_fullness()
+        mux.callback()

ssh.py (25 changed lines)

@@ -1,10 +1,21 @@
 import sys, os, re, subprocess, socket
 import helpers
 from helpers import *
 
-def connect(rhost):
+def connect(rhostport):
     main_exe = sys.argv[0]
+    l = (rhostport or '').split(':', 1)
+    rhost = l[0]
+    portl = []
+    if len(l) > 1:
+        portl = ['-p', str(int(l[1]))]
+
     nicedir = os.path.split(os.path.abspath(main_exe))[0]
     nicedir = re.sub(r':', "_", nicedir)
+    myhome = os.path.expanduser('~') + '/'
+    if nicedir.startswith(myhome):
+        nicedir2 = nicedir[len(myhome):]
+    else:
+        nicedir2 = nicedir
+
     if rhost == '-':
         rhost = None
     if not rhost:
@@ -18,10 +29,13 @@ def connect(rhost):
     # allow shellquoting.  So we end up having to double-shellquote
     # stuff here.
     escapedir = re.sub(r'([^\w/])', r'\\\\\\\1', nicedir)
+    escapedir2 = re.sub(r'([^\w/])', r'\\\\\\\1', nicedir2)
     cmd = r"""
-                sh -c PATH=%s:'$PATH exec sshuttle --server%s'
-    """ % (escapedir, ' -v' * (helpers.verbose or 0))
-    argv = ['ssh', rhost, '--', cmd.strip()]
+                sh -c PATH=%s:'$HOME'/%s:'$PATH exec sshuttle --server%s'
+    """ % (escapedir, escapedir2,
+           ' -v' * (helpers.verbose or 0))
+    argv = ['ssh'] + portl + [rhost, '--', cmd.strip()]
     debug2('executing: %r\n' % argv)
     (s1,s2) = socket.socketpair()
     def setup():
         # runs in the child process
@@ -32,7 +46,8 @@ def connect(rhost):
         os.setsid()
     s1a,s1b = os.dup(s1.fileno()), os.dup(s1.fileno())
     s1.close()
-    p = subprocess.Popen(argv, stdin=s1a, stdout=s1b, preexec_fn=setup)
+    p = subprocess.Popen(argv, stdin=s1a, stdout=s1b, preexec_fn=setup,
+                         close_fds=True)
     os.close(s1a)
     os.close(s1b)
     return p, s2
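
The new rhostport handling is easiest to see with a concrete value
(illustration only):

    l = 'sshserver:2222'.split(':', 1)   # -> ['sshserver', '2222']
    rhost = l[0]
    portl = ['-p', str(int(l[1]))] if len(l) > 1 else []
    # argv becomes ['ssh', '-p', '2222', 'sshserver', '--', <remote cmd>]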

@@ -81,6 +81,7 @@ class SockWrapper:
                 pass # not connected yet
             elif e.args[0] in [errno.ECONNREFUSED, errno.ETIMEDOUT]:
                 # a "normal" kind of error
+                self.connect_to = None
                 self.seterr(e)
             else:
                 raise # error we've never heard of?! barf completely.
@@ -100,6 +101,9 @@ class SockWrapper:
         except socket.error, e:
             self.seterr(e)
 
+    def too_full(self):
+        return False # fullness is determined by the socket's select() state
+
     def uwrite(self, buf):
         if self.connect_to:
             return 0 # still connecting
@@ -180,6 +184,7 @@ class Proxy(Handler):
         if self.wrap1.connect_to:
             w.add(self.wrap1.rsock)
         elif self.wrap1.buf:
-            w.add(self.wrap2.wsock)
+            if not self.wrap2.too_full():
+                w.add(self.wrap2.wsock)
         elif not self.wrap1.shut_read:
             r.add(self.wrap1.rsock)
@@ -187,6 +192,7 @@ class Proxy(Handler):
         if self.wrap2.connect_to:
             w.add(self.wrap2.rsock)
         elif self.wrap2.buf:
-            w.add(self.wrap1.wsock)
+            if not self.wrap1.too_full():
+                w.add(self.wrap1.wsock)
         elif not self.wrap2.shut_read:
             r.add(self.wrap2.rsock)
@@ -214,6 +220,8 @@ class Mux(Handler):
         self.want = 0
         self.inbuf = ''
         self.outbuf = []
+        self.fullness = 0
+        self.too_full = False
         self.send(0, CMD_PING, 'chicken')
 
     def next_channel(self):
@@ -225,15 +233,28 @@
         if not self.channels.get(self.chani):
             return self.chani
 
+    def amount_queued(self):
+        return sum(len(b) for b in self.outbuf)
+
+    def check_fullness(self):
+        if self.fullness > 32768:
+            if not self.too_full:
+                self.send(0, CMD_PING, 'rttest')
+            self.too_full = True
+        #ob = []
+        #for b in self.outbuf:
+        #    (s1,s2,c) = struct.unpack('!ccH', b[:4])
+        #    ob.append(c)
+        #log('outbuf: %d %r\n' % (self.amount_queued(), ob))
+
     def send(self, channel, cmd, data):
         data = str(data)
         assert(len(data) <= 65535)
         p = struct.pack('!ccHHH', 'S', 'S', channel, cmd, len(data)) + data
         self.outbuf.append(p)
-        debug2(' > channel=%d cmd=%s len=%d\n'
-               % (channel, cmd_to_name[cmd], len(data)))
+        #log('Mux: send queue is %d/%d\n'
+        #    % (len(self.outbuf), sum(len(b) for b in self.outbuf)))
+        debug2(' > channel=%d cmd=%s len=%d (fullness=%d)\n'
+               % (channel, cmd_to_name[cmd], len(data), self.fullness))
+        self.fullness += len(data)
 
     def got_packet(self, channel, cmd, data):
         debug2('< channel=%d cmd=%s len=%d\n'
@@ -242,6 +263,8 @@
             self.send(0, CMD_PONG, data)
         elif cmd == CMD_PONG:
             debug2('received PING response\n')
+            self.too_full = False
+            self.fullness = 0
         elif cmd == CMD_EXIT:
             self.ok = False
         elif cmd == CMD_CONNECT:
@@ -256,6 +279,7 @@
         self.wsock.setblocking(False)
         if self.outbuf and self.outbuf[0]:
             wrote = _nb_clean(os.write, self.wsock.fileno(), self.outbuf[0])
+            debug2('mux wrote: %d/%d\n' % (wrote, len(self.outbuf[0])))
             if wrote:
                 self.outbuf[0] = self.outbuf[0][wrote:]
         while self.outbuf and not self.outbuf[0]:
@@ -263,7 +287,10 @@
     def fill(self):
         self.rsock.setblocking(False)
-        b = _nb_clean(os.read, self.rsock.fileno(), 32768)
+        try:
+            b = _nb_clean(os.read, self.rsock.fileno(), 32768)
+        except OSError, e:
+            raise Fatal('other end: %r' % e)
         #log('<<< %r\n' % b)
         if b == '': # EOF
             self.ok = False
@@ -308,6 +335,7 @@ class MuxWrapper(SockWrapper):
         self.mux = mux
         self.channel = channel
         self.mux.channels[channel] = self.got_packet
+        self.socks = []
         debug2('new channel: %d\n' % channel)
 
     def __del__(self):
@@ -326,9 +354,14 @@
         self.shut_write = True
         self.mux.send(self.channel, CMD_EOF, '')
 
+    def too_full(self):
+        return self.mux.too_full
+
     def uwrite(self, buf):
-        if len(buf) > 65535:
-            buf = buf[:32768]
+        if self.mux.too_full:
+            return 0 # too much already enqueued
+        if len(buf) > 2048:
+            buf = buf[:2048]
         self.mux.send(self.channel, CMD_DATA, buf)
         return len(buf)