Compare commits

...

86 Commits

Author SHA1 Message Date
973d5a95a1 man page update for daemonization options. 2011-01-01 00:32:37 -08:00
95ab6e7119 ssyslog.py: use daemon.notice instead of daemon.info
...MacOS X seems to default (in /etc/syslog.conf) to not logging daemon.info
anywhere.  That kind of defeats the purpose, I think.
2011-01-01 00:28:31 -08:00
e6d7c44e27 Merge branch 'daemon'
* daemon:
  daemonization: make sure the firewall subproc sends to syslog too.
  Rearrange daemonization/syslog stuff and make it more resilient.
  run in background (daemon) and option
2011-01-01 00:22:43 -08:00
5bf6e40682 daemonization: make sure the firewall subproc sends to syslog too. 2011-01-01 00:22:09 -08:00
8a5ae1a40a Rearrange daemonization/syslog stuff and make it more resilient.
Rename --background to -D/--daemon, to match other programs (like smbd).

You can now have --syslog even without --daemon.

Avoid using atexit(); try/finally is better.

Don't just close stderr; we'll end up eating error output from ssh!
Instead, redirect stderr to a 'logger' subprocess that will send to syslog.

Delay redirecting stderr until after we know we're daemonizing, so handy
error messages can go to stderr instead of syslog.

Make pidfile stuff more resilient: support already-existing files, files
with strict permissions, outdated files containing an already-dead pid.  Add
a --pidfile option to let you specify the pidfile path.

chdir("/") while daemonizing, so that the filesystem containing $PWD can
later be unmounted without killing the daemon.

fw.done() can't wait on the firewall subprocess on exit when daemonized; we
no longer are the parent of that process.
2010-12-31 23:55:19 -08:00
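The shape of the daemonization described above is the classic double-fork; a minimal sketch in the style of the daemonize() that appears in the client.py diff further down (Python 2 like the rest of the codebase; error handling and the syslog redirection omitted):

    import os

    def daemonize(pidfile):
        if os.fork():
            os._exit(0)              # original process returns to the shell
        os.setsid()                  # new session, detached from the tty
        if os.fork():
            os._exit(0)              # the grandchild is the actual daemon
        fd = os.open(pidfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0666)
        try:
            os.write(fd, '%d\n' % os.getpid())
        finally:
            os.close(fd)
        os.chdir('/')                # don't pin whatever filesystem $PWD was on
        null = open('/dev/null', 'r+')
        os.dup2(null.fileno(), 0)    # stdin/stdout go to /dev/null;
        os.dup2(null.fileno(), 1)    # stderr gets pointed at syslog separately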
651b607361 If ssh dies right after starting, we might get ECONNRESET.
Turn it into a nicer-looking fatal error instead of an exception dump.
2010-12-31 23:46:47 -08:00
dc9a5e63c7 Minor tweak to help for the python= option. 2010-12-31 23:46:05 -08:00
33bc55be27 Merge branch 'closing'
* closing:
  Correctly close server connection when client disconnects.
  "Too many open files" shouldn't be a fatal condition.
2010-12-31 22:12:54 -08:00
c3204d2728 Correctly close server connection when client disconnects.
When the server disconnected, we were forwarding that information to the
client.  But we weren't forwarding back the other way when the client
disconnected since there was no callback in place to do that.

Relatedly, when we failed entirely to connect to the server, we didn't notify the
client right away.  Now we do.

Thanks to 'Roger' on the mailing list for pointing out these bugs.
2010-12-31 21:59:36 -08:00
b1edb226a5 "Too many open files" shouldn't be a fatal condition.
It can happen if there are too many sockets open.  If that happens, just
throw away any connections that arrive in the meantime instead of aborting
completely.
2010-12-31 21:32:51 -08:00
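The trick used for this (visible as _extra_fd in the client.py diff below) is to hold one spare file descriptor in reserve: when accept() fails with EMFILE/ENFILE, close the spare, accept and immediately drop the pending connection, then re-reserve the spare. A rough sketch with illustrative names (Python 2):

    import errno, os, socket

    _spare_fd = os.open('/dev/null', os.O_RDONLY)    # reserved descriptor

    def accept_or_shed(listener):
        global _spare_fd
        try:
            return listener.accept()
        except socket.error, e:
            if e.args[0] not in (errno.EMFILE, errno.ENFILE):
                raise
            os.close(_spare_fd)                  # free one fd...
            try:
                sock, _ = listener.accept()      # ...so we can accept the connection
                sock.close()                     # and throw it away politely
            finally:
                _spare_fd = os.open('/dev/null', os.O_RDONLY)
            return None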
7fa1c3c4e4 Listen on localhost:0 instead of 0.0.0.0:0 by default.
This avoids any possible problem caused by other people on your network
using you as a proxy.  If you want to allow this, you can force it back to
the old way using the --listen option.

Thanks to 'tass' on github for reporting portscans that revealed this
potential security problem.
2010-12-31 21:22:40 -08:00
cca69eb496 Don't allow proxying of connections to the proxy port.
Add some cleverness for breaking infinite loops.  Previously we'd only
detect it successfully if you connected to exactly the same IP as we were
listening on, but that was unreliable if we're listening on 0.0.0.0 and you
connected to one of the IP addresses we haven't heard of.

Now, if you try to connect to our listen port on *any* IP, we try binding to
that IP as a local socket; if it works, that's a local IP, and therefore
it's our socket, so reject the connection.  If it doesn't work, it's a
remote IP, so forward it along.

Thanks to 'tass' on github for noticing the problem.
2010-12-31 21:22:32 -08:00
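The "try binding to it" test is only a few lines of socket code; this is roughly the _islocal() helper that shows up in the client.py diff below (Python 2):

    import errno, socket

    def is_local_ip(ip):
        sock = socket.socket()
        try:
            try:
                sock.bind((ip, 0))
            except socket.error, e:
                if e.args[0] == errno.EADDRNOTAVAIL:
                    return False       # can't bind it, so it's a remote IP
                raise
        finally:
            sock.close()
        return True                    # bind succeeded: it's one of our IPs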
91f65132be Get rid of ugly quotes on "Accept:" log messages. 2010-12-31 20:54:46 -08:00
2ef3a301fb run in background (daemon) and option 2010-12-12 12:08:54 +08:00
41fd0348eb Fix a bug when packets are received on a channel after it closes.
Reported by cbowns.
2010-12-11 17:27:12 -08:00
1907048dad Remove the never-used and misleading CMD_CLOSE. 2010-12-09 19:20:09 -08:00
82e1d1c166 Fix memory leak of MuxWrapper object.
(Note by apenwarr: I used Roger's original patch as the basis for this one,
but implemented it a different way.  All errors are thus my fault, but Roger
gets the credit for actually tracking down the circular reference that
caused the memory leak.)
2010-12-09 19:20:07 -08:00
a497132c01 Add debug messages for counting SockWrapper objects.
You can use this to confirm that a memory leak exists.
2010-12-09 19:20:06 -08:00
7354600849 Fix a socket leak: delete object after close on both direction.
(Note by apenwarr: seems to still work for me.  The reason the
problem occurred is that reassigning 'handlers' doesn't change it in its
parent; it creates a whole new list, and the caller still owns the old one
with all the dead sockets in it.  The problem seems to have been introduced
in commit 84376284db when I factored the
runonce() functionality out of the client and server but didn't notice this
reassignment.)
2010-12-09 19:20:01 -08:00
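The underlying Python gotcha is that rebinding a parameter name creates a new local list while the caller keeps the old one; mutating the list in place avoids that. A minimal illustration (not the project's code):

    def prune_rebind(handlers):
        handlers = [h for h in handlers if h]      # rebinds the local name only

    def prune_inplace(handlers):
        handlers[:] = [h for h in handlers if h]   # mutates the caller's list

    lst = ['sock1', None, 'sock2']
    prune_rebind(lst)
    assert lst == ['sock1', None, 'sock2']         # dead entry is still there
    prune_inplace(lst)
    assert lst == ['sock1', 'sock2']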
918725c485 Oops, earlier ipv6 patch didn't work if no -r option is specified. 2010-12-09 19:20:01 -08:00
95c9b788a0 Add support for IPv6 remote hosts.
Supported sshuttle commands for IPv6:

./sshuttle -r "IPv6:addr" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr]" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr]:22" 0.0.0.0/0 -vv

Technically "invalid" address/port formats, but they can still be parsed because they’re unambiguous, so these also work:

./sshuttle -r "IPv6:addr]" 0.0.0.0/0 -vv
./sshuttle -r "IPv6:addr]:" 0.0.0.0/0 -vv
./sshuttle -r "IPv6:addr]:22" 0.0.0.0/0 -vv
./sshuttle -r "[IPv6:addr" 0.0.0.0/0 -vv

(If you have a Mac with Back To My Mac, use dns-sd to discover the remote host's IPv6 address:
dns-sd -G v4v6 <machine name>.<member name>.members.mac.com )
2010-11-19 15:13:35 -08:00
ef71751846 Add a sshuttle.8 manpage.
You need to have 'pandoc' installed in order to render it from sshuttle.md.
2010-11-09 01:59:51 -08:00
32b4defa9b Add a new --ssh-cmd= option to let you override the ssh command.
Requested by Axel Beckert.
2010-11-09 00:17:01 -08:00
8b7605cc5d Remove the --noserver option.
It didn't work anyway.  Obviously it hasn't been tested (or apparently
needed) in a long time.
2010-11-08 23:59:26 -08:00
bcf1892305 Make password prompting more clear.
Based on suggestions by Jason Grossman and Ed Maste on the mailing list.

We now add a [local su] prefix to the 'su' password prompt (by cheating and
printing it before calling su), and we replace the 'sudo' password prompt
with '[local sudo] Password: ' (by using the little-known and
hopefully-portable -p option).

We no longer call sudo or su if the uid is already 0; otherwise the prefix
on the 'su' prompt would look weird, since su wouldn't ask for a password in
that case.

We don't add a prefix to the ssh password prompt, because it's too hard to
tell if there will *be* an ssh password prompt.  But people will probably
assume that the password request is for the server anyway; few people are
likely to think that 'sshuttle -r myhost.com' is going to prompt for the
*local* password.

Of course none of this is a problem on a modern OS, like Debian, that would
say something like "Password for apenwarr@myhost.com:" instead of just
"Password:".  MacOS doesn't do that, however, so I assume many other OSes
also don't.  Let's try to help them out.
2010-11-08 23:35:16 -08:00
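In the client.py diff below this ends up as the argv list that FirewallClient tries in order, which is where the prefixes come from (the port number here is just an example):

    import sys

    argvbase = [sys.argv[0], '--firewall', '12300']             # example port
    argv_tries = [
        ['sudo', '-p', '[local sudo] Password: '] + argvbase,   # sudo: -p prompt
        ['su', '-c', ' '.join(argvbase)],   # su: we print '[local su] ' ourselves
        argvbase,                           # already uid 0: run it directly
    ]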
fe742c928d firewall.py: don't die if a given sysctl doesn't exist.
Instead, get a list of known sysctls in the interesting prefix (net.inet.ip)
and check if there's an entry in the list for each sysctl we want to change.
If there isn't, then don't try to change it.

This fixes a problem with FreeBSD, which doesn't have
net.inet.ip.scopedroute but also doesn't need it.  Probably also fixes MacOS
10.5, which probably didn't have that either, but I don't know for sure.

Reported by Ed Maste.
2010-10-16 20:11:30 -06:00
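A sketch of that check (the firewall.py diff below does the real version in _fill_oldctls() and sysctl_set(), using its bundled ssubprocess): list every sysctl under the prefix once, then silently skip names that aren't in the list. Assumes the BSD-style "name: value" output format.

    import subprocess

    def known_sysctls(prefix):
        # 'sysctl net.inet.ip' prints one "name: value" line per known sysctl
        p = subprocess.Popen(['sysctl', prefix], stdout=subprocess.PIPE)
        names = []
        for line in p.stdout:
            names.append(line.split(':', 1)[0])
        p.wait()
        return names

    def sysctl_set_if_exists(name, value, known):
        if name not in known:
            return     # e.g. FreeBSD has no net.inet.ip.scopedroute; skip it
        subprocess.call(['sysctl', '-w', '%s=%s' % (name, value)])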
10ce1ee5d4 ipfw: use 'delete' instead of 'del' to avoid a warning on freebsd.
'del' is an abbreviation that happened to work because of substring matching
in earlier versions of ipfw, but apparently they're planning to remove the
substring matching eventually.  In any case, 'delete' has always worked, so
there's no downside to using that.

Reported by Ed Maste.
2010-10-05 13:29:12 -04:00
a32305a275 server.py: don't send partial hostwatch lists.
If hostwatch has a lot of stuff to say all at once, it would come in more
than one recv() packet, and server.py would send each packet individually as
a CMD_HOST_LIST message.  Unfortunately, client.py (rightly) expects each
CMD_HOST_LIST message to be complete, ie. a correct sequence of rows.

So now server.py makes sure of this.  If there's a leftover bit (ie. an
unterminated line), it saves it for later.

Bug reported by user "Duke" on the mailing list.
2010-10-04 02:47:43 -07:00
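The fix boils down to ordinary line buffering: only whole lines get packed into a CMD_HOST_LIST message, and an unterminated tail is carried into the next read. A sketch with hypothetical names (not the actual server.py code):

    class LineBuffer:
        def __init__(self):
            self.partial = ''

        def feed(self, data):
            # return only complete lines; keep any unterminated tail for later
            data = self.partial + data
            if '\n' not in data:
                self.partial = data
                return ''
            complete, self.partial = data.rsplit('\n', 1)
            return complete + '\n'

    buf = LineBuffer()
    assert buf.feed('a,1\nb,2\nc,') == 'a,1\nb,2\n'
    assert buf.feed('3\n') == 'c,3\n'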
ae32fe2a59 Merge branch 'python23' - python 2.3 compatibility
* python23:
  Oops, missed another << operator to replace with _shl().
  socket.SHUT_RD and socket.SHUT_WR don't exist in python 2.3.
  compat/ssubprocess.py: some python versions don't have os.closerange().
  _nb_clean: don't catch EPIPE after all.
  Fix busy-waiting in two situations:
  Factor out common mainloop code between client and server.
  Implement our own left-shift operator to shut up python 2.3 warnings.
  Don't use set() since it's not in python 2.3.
  import and use subprocess.py from python 2.6.
  Remove list comprehensions for python 2.3 compatibility.
2010-10-02 16:34:35 -07:00
5070f2ffcf Oops, missed another << operator to replace with _shl().
For python 2.3, of course.
2010-10-02 15:26:29 -07:00
b219b523c2 socket.SHUT_RD and socket.SHUT_WR don't exist in python 2.3.
Mercifully, socket.socket.shutdown() still does, but it uses hardcoded
integer parameters - and the integers correspond to the SHUT_RD and SHUT_WR
definitions in later versions - so let's just hardcode them ourselves.

See the carnage for yourself:
http://docs.python.org/release/2.3.5/lib/socket-objects.html
2010-10-02 15:24:04 -07:00
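So the call sites just pass the raw integers, which are the same values the constants were later given:

    import socket

    SHUT_RD, SHUT_WR = 0, 1          # what socket.SHUT_RD/SHUT_WR equal today

    a, b = socket.socketpair()
    a.shutdown(SHUT_WR)              # same as a.shutdown(socket.SHUT_WR)
    a.close()
    b.close()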
52fbb2ebbe compat/ssubprocess.py: some python versions don't have os.closerange().
Like python2.5 on Debian.  It might be a MacOS extension or something.  So
much for the comment in subprocess.py that said "keep this compatible with
python 2.2."
2010-10-01 19:26:56 -07:00
76d576a375 _nb_clean: don't catch EPIPE after all.
EPIPE is a serious error from these places, so we have to actually do
something.  Otherwise the client ends up busy waiting when the server
disconnects by surprise.

Bug noticed in a log from Chetan Kunte.
2010-10-01 18:25:03 -07:00
f6e6515a3c Fix busy-waiting in two situations:
- If you tried to connect to a server that didn't exist, then disconnected
  the client during the 60-second connection timeout, the server would
  busy wait for 60 seconds.

- If you connected to a server and then sent data, but then the server
  disconnected before reading all your data, the server would busy wait.
    (example:  yes | telnet servername 80)
2010-10-01 18:22:36 -07:00
84376284db Factor out common mainloop code between client and server.
Also improve the socket message output a bit.
2010-10-01 17:36:09 -07:00
b0f061e204 Implement our own left-shift operator to shut up python 2.3 warnings.
Apparently left-shift in python 2.3 just *always* prints a warning, even if
we weren't doing anything wrong.  Or maybe it only prints the warning
sometimes.  Anyway, let's just multiply by 2**x instead of using <<x, since
we're not performance-sensitive anyway.
2010-10-01 14:46:34 -07:00
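The replacement is a one-liner (the diff names it _shl()):

    def _shl(n, bits):
        # n << bits, spelled as multiplication to dodge python 2.3's warning
        return n * 2**bits

    assert _shl(3, 4) == 3 << 4 == 48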
c403a83ab8 Don't use set() since it's not in python 2.3.
Just use a plain list instead.  Technically probably slightly worse
asymptotic behaviour, but it's not like we'll have a million sockets anyway.
2010-10-01 14:38:08 -07:00
da774f3f46 import and use subprocess.py from python 2.6.
This should hopefully let us run even on python 2.3 on really old servers.
2010-10-01 12:11:48 -07:00
7d3028dee2 Remove list comprehensions for python 2.3 compatibility. 2010-10-01 11:55:45 -07:00
518df41049 ssh.py: don't os.setsid().
This prevents ssh from asking for a password successfully.  Error reported
by Chetan Kunte.
2010-10-01 11:35:13 -07:00
76bbbfd67b Catch the exception thrown when ssh can't connect.
Easiest test: give it an invalid hostname.

Reported by Chetan Kunte.
2010-10-01 10:34:20 -07:00
6e336c09bf README: remove the note about MacOS not working. It works now! 2010-10-01 00:43:01 -07:00
f950a3800b BSD: sysctl net.inet.ip.forwarding=1 is not necessary.
If your machine is a firewall/router, it affects whether people behind the
router can use your sshuttle connection - in the same way that it affects
whether they can route *anything* through you.  And thus, it should be set
by the admin, not by sshuttle.

sshuttle works fine for the local user either way.

(This also affects MacOS since it's a BSD variant.)
2010-10-01 00:39:30 -07:00
8b4466b802 BSD ipfw: switch from 'established' to 'keep-state/check-state'.
It turns out 'established' doesn't work the way I expected it to from
iptables; it's not stateful.  It just checks the TCP flags to see if the
connection *thinks* it's already established, and follows the rule if so.
That caused the first packet of each new connection to get sent to our
transproxy, but not the subsequent ones, so weird stuff happened.

With this change, any (matching) connection created *after* starting sshuttle
will get forwarded, but pre-existing ones - most importantly, sshuttle's own
ssh connection - will not.

And with this (plus the previous commit), sshuttle works on MacOS, including
10.6!
2010-10-01 00:36:46 -07:00
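Concretely, the rules the firewall.py diff below emits look like the following, sketched here as plain command strings (Python 2 print; the port and subnet are example values only):

    def ipfw_cmd(*args):
        return ' '.join(('ipfw', '-q') + args)

    port, subnet = 12300, '10.0.0.0/8'           # example values
    print ipfw_cmd('add', str(port), 'check-state', 'ip',
                   'from', 'any', 'to', 'any')
    print ipfw_cmd('add', str(port), 'fwd', '127.0.0.1,%d' % port,
                   'log', 'tcp', 'from', 'any', 'to', subnet,
                   'not', 'ipttl', '42', 'keep-state', 'setup')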
4bf4f70c67 ssnet: recover slightly more gracefully from an infinite forwarding loop.
If you 'telnet localhost 12300' weird things happen; someday we should
probably auto-detect and avoid that altogether.  But meanwhile, catch EPIPE
if it happens (it's irrelevant) and don't barf with a %d data type for a
value that can apparently sometimes be None.
2010-10-01 00:05:49 -07:00
410b9d4229 Magic incantation to mostly fix MacOS 10.6.
It comes down to this:
   sysctl_set('net.inet.ip.scopedroute', 0)

I say "mostly" because actually it doesn't fix it; sshuttle doesn't know
what to do with the received connection, so there must be a minor bug
remaining somewhere.  I'll fix that next.

Thanks to dkf <dfortunato@gmail.com> on the sshuttle mailing list for
suggesting the magic fix.  He points at this post in particular:
  http://discussions.apple.com/thread.jspa?messageID=11558355&#11558355
that gave him the necessary clue.
2010-10-01 00:05:48 -07:00
2ef1c6a4c4 latest options.py from bup, now with tty-width guessing.
as of bup commit bup-0.19-2-gce2ace5.
2010-09-21 18:03:17 -07:00
b35cfbd022 hostwatch: add missing errno import
If the ~/.sshuttle.hosts file does not exist, it triggers the following
error:

       Traceback (most recent call last):
         File "./sshuttle", line 80, in <module>
           sys.exit(hostwatch.hw_main(extra))
         File "/home/def/p/sshuttle/hostwatch.py", line 246, in hw_main
           read_host_cache()
         File "/home/def/p/sshuttle/hostwatch.py", line 41, in read_host_cache
           if e.errno == errno.ENOENT:
       NameError: global name 'errno' is not defined

(This only happened if you run 'sshuttle --hostwatch' from the command line
directly, without passing it through assembler.py.)
2010-09-21 17:15:46 -07:00
dcba684766 If netstat -rn returns an error, make that non-fatal.
That only really stops --auto-nets from working; it's mostly harmless
otherwise.  And apparently some locked-down shared hosts don't let you get
the list of routes.
2010-09-04 11:29:11 -07:00
ee74110cff add option to allow the remote python binary's name/path to be specified 2010-09-03 23:00:26 -07:00
5bf8687ce3 Import latest options.py from bup-0.17.
This has new support for default values in square brackets, so let's use
that.
2010-09-03 23:00:26 -07:00
6bdb9517fd README: fix some out-of-date system requirements stuff.
Reported by Jason Axelson.
2010-07-25 00:16:09 -04:00
f1b33dab29 Add a --exclude option for excluding subnets from routing.
Also, add 127.0.0.0/8 to the default list of excludes.  If you want to route
0/0, you almost certainly *don't* want to route localhost to the remote ssh
server's localhost!

Thanks to Edward for the suggestion.
2010-07-15 14:13:33 -04:00
3a25f709e5 log(): don't abort if we fail to write to stderr.
Failing to write to the log sucks, but not as much as failing to clean up
just because stderr disappeared.  So let's catch any IOError exception from
log() and just ignore it.

This should fix a problem reported by Camille Moncelier, which is that
sshuttle firewall entries stick around if your tty dies strangely (eg. your
X server aborts for some reason).
2010-05-16 17:57:18 -04:00
a8b3d69856 ssh.py: try harder to find required *.py files.
Search the entire python sys.path, not just the directory that argv[0] is
in.  That way if you symlink the sshuttle binary into (for example) ~/bin,
it'll be able to work correctly.
2010-05-12 13:53:14 -04:00
2d4f6a4308 client: add a debug1() message for connecting/connected.
If the server is going to delay us, we'd at least like to know that.
2010-05-11 19:04:44 -04:00
d435ed837d Created a googlegroups.com mailing list for sshuttle. 2010-05-11 15:30:53 -04:00
2d77403a0b Don't use try/except/finally so that python 2.4 works.
Use try/(try/except)/finally instead.  There was only one case of this.

Thanks to Wayne Scott and nisc for pointing this out.
2010-05-10 13:58:52 -04:00
77cf37e0fa firewall: preserve permissions on /etc/hosts
Pointed out by nisc on github.  If people use an unusual umask or have funny
permissions on /etc/hosts, sshuttle would screw it up.

We also use hardlinks to atomically backup the original /etc/hosts to
/etc/hosts.sbak the first time, rather than manually copying it.  Not sure
why I didn't think of that before.
2010-05-09 11:22:05 -04:00
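The key moves, roughly as rewrite_etc_hosts() in the firewall.py diff below performs them (sketch only; point it at a scratch file, not the real /etc/hosts):

    import os

    def replace_preserving_perms(path, new_content):
        st = os.stat(path)
        if not os.path.exists(path + '.sbak'):
            os.link(path, path + '.sbak')     # hardlink: atomic backup, same inode
        tmp = path + '.tmp'
        f = open(tmp, 'w')
        f.write(new_content)
        f.close()
        os.chown(tmp, st.st_uid, st.st_gid)   # keep the original owner...
        os.chmod(tmp, st.st_mode)             # ...and mode, regardless of umask
        os.rename(tmp, path)                  # atomic swap into place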
384d0e7c1d hostwatch: watch "netstat -n" for IP addresses.
The list of active sessions might tell us about some hostnames on the local
networks, which we can then add to our subnet list.
2010-05-08 16:14:36 -04:00
5a4a2ab7f9 Oops, previous change to ipfw settings prevented cleanup from working. 2010-05-08 16:14:36 -04:00
33efa5ac62 Added new --auto-hosts and --seed-hosts options to the client.
Now if you use --auto-hosts (-H), the client will ask the server to spawn a
hostwatcher to add names.  That, in turn, will send names back to the
server, which sends them back to the client, which sends them to the
firewall subprocess, which will write them to /etc/hosts.  Whew!

Only the firewall process can write to /etc/hosts, of course, because only
he's running as root.

Since the name discovery process is kind of slow, we cache the names in
~/.sshuttle.hosts on the remote server.

Right now, most of the names are discovered using nmblookup and smbclient,
as well as by reading the existing entries in /etc/hosts.  What would really
be nice would be to query active directory or mdns somehow... but I don't
really know how those work, so this is what you get for now :)  It's pretty
neat, at least.
2010-05-08 03:32:30 -04:00
a2ea5ab455 Add 'sshuttle --hostwatch' subcommand.
This tries to discover local hostnames and prints them to stdout.  Will be
used by the server for auto-hostname tracking.
2010-05-08 03:00:05 -04:00
680941cb0c BSD: "ipfw add %d accept ip from any to any established"
With this rule, we don't interfere with already-established (or incoming)
connections to routes that we're about to take over.  This is what
happens by default in Linux/iptables.
2010-05-07 20:07:41 -04:00
7043195043 Add -N (--auto-nets) option for auto-discovering subnets.
Now if you do

	./sshuttle -Nr username@myservername

It'll automatically route the "local" subnets (ie., stuff in the routing
table) from myservername.  This is (hopefully) a reasonable default setting
for most people.
2010-05-07 20:02:04 -04:00
77935bd110 ssnet: EHOSTUNREACH and ENETUNREACH are non-fatal errors.
Reported by Wayne Scott.
2010-05-07 12:30:03 -04:00
8fe3592be3 Don't require the remote server to have sshuttle installed.
Instead, grab our source code, send it over the link, and have python eval
it and then start the server.py main() function.

Strangely, there's now *less* horrible stuff in ssh.py, because we no longer
have to munge around with the PATH environment variable.  And this
significantly reduces the setup required to get sshuttle going.

Based on a suggestion from Wayne Scott.
2010-05-04 23:42:36 -04:00
ba19d9c72d Rename iptables->firewall.
Since we "almost" support ipfw on MacOS (which I guess might mean FreeBSD
too), the name should be a bit more generic.
2010-05-04 22:06:27 -04:00
096bbcc576 Client "almost" works on MacOS and maybe FreeBSD.
Basic forwarding now works on MacOS, assuming you set up ipfw correctly
(ha ha).  I wasted a few hours today trying to figure this out, and I'm *so
very close*, but unfortunately it just didn't work.  Think you can figure it
out?

Related changes:
- don't die if iptables is unavailable
- BSD uses getsockname() instead of SO_ORIGINAL_DST
- non-blocking connect() returns EISCONN once it's connected
- you can't setsockopt IP_TTL more than once
2010-05-04 22:06:22 -04:00
7bd0efd57b Oops, broke --noserver mode at some point. 2010-05-04 18:40:21 -04:00
8173925bcd ssh.py: allow hostnames of the form hostname:port
Feature requested by Wayne Scott and Ed Maste.
2010-05-04 13:08:38 -04:00
0cdd72c830 README: clarify that the server doesn't need Linux or iptables. 2010-05-03 16:35:35 -04:00
403a088e07 README: add information about which iptables modules are needed.
As requested by a user.
2010-05-03 13:08:26 -04:00
4a93d3362d README: fix some formatting for easier text-mode readability.
It looked okay in markdown, but some of the text lines were a bit too long.
2010-05-03 13:07:30 -04:00
33a73056ee README: update to use real markdown-style headings.
Oops, got those mixed up with some random other markup format.
2010-05-02 21:46:51 -04:00
4a462258f5 ssh.py: support finding sshuttle in "$HOME/.../sshuttle"
If you ran sshuttle from /home/apenwarr/sshuttle/sshuttle, we would
automatically add /home/apenwarr/sshuttle to the PATH before trying to
execute sshuttle on the remote machine.  That way, if you install it in the
same place on two computers, the client would still be able to start the
server.

Someone reported, though, that if they installed the client in
/home/apenwarr/sshuttle/sshuttle, and the server in /root/sshuttle/sshuttle,
then used "-r root@servername", it wasn't able to find the program.

Similar problems would happen if you're apenwarr at home and averyp at work.

So what we now do is add *two* directories to the PATH:
/home/apenwarr/sshuttle and $HOME/sshuttle, where $HOME is the value of
$HOME on the *server*, not the client.  So it'll find it in either place.
2010-05-02 21:24:31 -04:00
a5fc93c841 iptables: if client dies before sending GO, just quit.
If the server was having trouble starting, we would print a lot of
unnecessary stuff from iptables.  We shouldn't even have bothered *starting*
iptables if the server was dead anyway.
2010-05-02 21:06:31 -04:00
ea6bb5c255 iptables: die quietly if parent process dies.
If we can't communicate with the parent process, he's probably died
unexpectedly; just shut down and don't bother people about it.
2010-05-02 21:02:09 -04:00
2c2bea80bc iptables: try launching with sudo, then su, then directly.
Previous versions depended on having 'sudo' in your PATH.  Now that we can
feel safe that --iptables will clean up properly when you exit, and it
doesn't need to authenticate twice, the advantages of sudo aren't strictly
needed.  Good old 'su' is a reasonable fallback - and everybody has it,
which is nice.

Unfortunately su doesn't let you redirect stdin, so I had to play a stupid
fd trick to make it work.
2010-05-02 20:54:10 -04:00
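The "stupid fd trick" is the socketpair the client.py diff below attaches to the child's stdout: since 'su' won't accept a redirected stdin, both directions of traffic run over the one descriptor we do control. A stripped-down sketch (Python 2; 'echo' stands in for the real su/firewall command, and the real child also does os.dup2(1, 0) so it can read from the same socket):

    import socket, subprocess

    s1, s2 = socket.socketpair()         # a single bidirectional channel

    def setup():                         # runs in the child before exec
        s2.close()                       # child keeps only s1, as its stdout

    p = subprocess.Popen(['echo', 'READY'], stdout=s1, preexec_fn=setup)
    s1.close()                           # parent talks over s2, both ways
    pipe = s2.makefile('wb+')
    print pipe.readline().strip()        # -> READY
    p.wait()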
7d674e9e37 ssnet: throw a nicer-looking Fatal when the mux connection dies.
When it happens, it's probably because the client died and the server hasn't
noticed yet.
2010-05-02 20:53:29 -04:00
a21e8c7a3c iptables: more resilient startup/cleanup.
Now the sudo iptables subprocess persists for the entire life of sshuttle.
The benefits of this are:

- no need to authenticate again at shutdown (failure of which could cause us
  to not clean up iptables)

- if the main process dies unexpectedly, iptables still gets cleaned up

- the password prompt can happen *before* starting the ssh/server process,
  which means it'll stand out and the password prompt won't be overwritten.
2010-05-02 19:29:03 -04:00
ca14231aae Whoops, we were spinning in the server when the mux outbuf was full.
Once again, the buffering gets mixed up with the selecting.  Seems to be the
story of my life.
2010-05-02 06:17:43 -04:00
6c2dc54b9e More latency fixes: use a PING/PONG pair to limit queue length.
It seems ssh is kind of stupid and uses a really big SO_SNDBUF (hundreds of
kbytes).  Thus, we can't depend on the socket's output buffer to limit our
latency down to something reasonable.  Instead, limit the amount of data we
can send in a single round trip.
2010-05-02 05:39:17 -04:00
5d1390927d Don't overfill the mux's output buffer.
Otherwise a single busy stream can ruin it for everybody.
2010-05-02 05:06:51 -04:00
da06286427 ssnet: when a connection error occurs, stop trying to connect. 2010-05-02 03:52:46 -04:00
616d0680d1 Add a README file based on my blog entry. 2010-05-02 03:42:59 -04:00
18 changed files with 3274 additions and 271 deletions

1
.gitignore vendored

@@ -1,2 +1,3 @@
 *.pyc
 *~
+*.8

19
Makefile Normal file

@@ -0,0 +1,19 @@
PANDOC:=$(shell \
	if pandoc </dev/null 2>/dev/null; then \
		echo pandoc; \
	else \
		echo "Warning: pandoc not installed; can't generate manpages." >&2; \
		echo '@echo Skipping: pandoc'; \
	fi)

default: all

all: sshuttle.8

sshuttle.8: sshuttle.md

%.8: %.md
	$(PANDOC) -s -r markdown -w man -o $@ $<

clean:
	rm -f *~ */*~ .*~ */.*~ *.8 *.tmp */*.tmp *.pyc */*.pyc

166
README.md Normal file

@@ -0,0 +1,166 @@
sshuttle: where transparent proxy meets VPN meets ssh
=====================================================
As far as I know, sshuttle is the only program that solves the following
common case:
- Your client machine (or router) is Linux, FreeBSD, or MacOS.
- You have access to a remote network via ssh.
- You don't necessarily have admin access on the remote network.
- The remote network has no VPN, or only stupid/complex VPN
protocols (IPsec, PPTP, etc). Or maybe you <i>are</i> the
admin and you just got frustrated with the awful state of
VPN tools.
- You don't want to create an ssh port forward for every
single host/port on the remote network.
- You hate openssh's port forwarding because it's randomly
slow and/or stupid.
- You can't use openssh's PermitTunnel feature because
it's disabled by default on openssh servers; plus it does
TCP-over-TCP, which has terrible performance (see below).
Prerequisites
-------------
- sudo, su, or logged in as root on your client machine.
(The server doesn't need admin access.)
- If you use Linux on your client machine:
iptables installed on the client, including at
least the iptables DNAT, REDIRECT, and ttl modules.
These are installed by default on most Linux distributions.
(The server doesn't need iptables and doesn't need to be
Linux.)
- If you use MacOS or BSD on your client machine:
Your kernel needs to be compiled with IPFIREWALL_FORWARD
(MacOS has this by default) and you need to have ipfw
available. (The server doesn't need to be MacOS or BSD.)
This is how you use it:
-----------------------
- <tt>git clone git://github.com/apenwarr/sshuttle</tt>
on your client machine. You'll need root or sudo
access, and python needs to be installed.
- <tt>./sshuttle -r username@sshserver 0.0.0.0/0 -vv</tt>
(You may be prompted for one or more passwords; first, the
local password to become root using either sudo or su, and
then the remote ssh password. Or you might have sudo and ssh set
up to not require passwords, in which case you won't be
prompted at all.)
That's it! Now your local machine can access the remote network as if you
were right there. And if your "client" machine is a router, everyone on
your local network can make connections to your remote network.
You don't need to install sshuttle on the remote server;
the remote server just needs to have python available.
sshuttle will automatically upload and run its source code
to the remote python interpreter.
This creates a transparent proxy server on your local machine for all IP
addresses that match 0.0.0.0/0. (You can use more specific IP addresses if
you want; use any number of IP addresses or subnets to change which
addresses get proxied. Using 0.0.0.0/0 proxies <i>everything</i>, which is
interesting if you don't trust the people on your local network.)
Any TCP session you initiate to one of the proxied IP addresses will be
captured by sshuttle and sent over an ssh session to the remote copy of
sshuttle, which will then regenerate the connection on that end, and funnel
the data back and forth through ssh.
Fun, right? A poor man's instant VPN, and you don't even have to have
admin access on the server.
Theory of Operation
-------------------
sshuttle is not exactly a VPN, and not exactly port forwarding. It's kind
of both, and kind of neither.
It's like a VPN, since it can forward every port on an entire network, not
just ports you specify. Conveniently, it lets you use the "real" IP
addresses of each host rather than faking port numbers on localhost.
On the other hand, the way it *works* is more like ssh port forwarding than
a VPN. Normally, a VPN forwards your data one packet at a time, and
doesn't care about individual connections; ie. it's "stateless" with respect
to the traffic. sshuttle is the opposite of stateless; it tracks every
single connection.
You could compare sshuttle to something like the old <a
href="http://en.wikipedia.org/wiki/Slirp">Slirp</a> program, which was a
userspace TCP/IP implementation that did something similar. But it
operated on a packet-by-packet basis on the client side, reassembling the
packets on the server side. That worked okay back in the "real live serial
port" days, because serial ports had predictable latency and buffering.
But you can't safely just forward TCP packets over a TCP session (like ssh),
because TCP's performance depends fundamentally on packet loss; it
<i>must</i> experience packet loss in order to know when to slow down! At
the same time, the outer TCP session (ssh, in this case) is a reliable
transport, which means that what you forward through the tunnel <i>never</i>
experiences packet loss. The ssh session itself experiences packet loss, of
course, but TCP fixes it up and ssh (and thus you) never know the
difference. But neither does your inner TCP session, and extremely screwy
performance ensues.
sshuttle assembles the TCP stream locally, multiplexes it statefully over
an ssh session, and disassembles it back into packets at the other end. So
it never ends up doing TCP-over-TCP. It's just data-over-TCP, which is
safe.
Useless Trivia
--------------
Back in 1998 (12 years ago! Yikes!), I released the first version of <a
href="http://alumnit.ca/wiki/?TunnelVisionReadMe">Tunnel Vision</a>, a
semi-intelligent VPN client for Linux. Unfortunately, I made two big mistakes:
I implemented the key exchange myself (oops), and I ended up doing
TCP-over-TCP (double oops). The resulting program worked okay - and people
used it for years - but the performance was always a bit funny. And nobody
ever found any security flaws in my key exchange, either, but that doesn't
mean anything. :)
The same year, dcoombs and I also released Fast Forward, a proxy server
supporting transparent proxying. Among other things, we used it for
automatically splitting traffic across more than one Internet connection (a
tool we called "Double Vision").
I was still in university at the time. A couple years after that, one of my
professors was working with some graduate students on the technology that
would eventually become <a href="http://www.slipstream.com/">Slipstream
Internet Acceleration</a>. He asked me to do a contract for him to build an
initial prototype of a transparent proxy server for mobile networks. The
idea was similar to sshuttle: if you reassemble and then disassemble the TCP
packets, you can reduce latency and improve performance vs. just forwarding
the packets over a plain VPN or mobile network. (It's unlikely that any of
my code has persisted in the Slipstream product today, but the concept is
still pretty cool. I'm still horrified that people use plain TCP on
complex mobile networks with crazily variable latency, for which it was
never really intended.)
That project I did for Slipstream was what first gave me the idea to merge
the concepts of Fast Forward, Double Vision, and Tunnel Vision into a single
program that was the best of all worlds. And here we are, at last, 10 years
later. You're welcome.
--
Avery Pennarun <apenwarr@gmail.com>
Mailing list:
Subscribe by sending a message to <sshuttle+subscribe@googlegroups.com>
List archives are at: http://groups.google.com/group/sshuttle

26
assembler.py Normal file

@@ -0,0 +1,26 @@
import sys, zlib
z = zlib.decompressobj()
mainmod = sys.modules[__name__]
while 1:
    name = sys.stdin.readline().strip()
    if name:
        nbytes = int(sys.stdin.readline())
        if verbosity >= 2:
            sys.stderr.write('server: assembling %r (%d bytes)\n'
                             % (name, nbytes))
        content = z.decompress(sys.stdin.read(nbytes))
        exec compile(content, name, "exec")
        # FIXME: this crushes everything into a single module namespace,
        # then makes each of the module names point at this one.  Gross.
        assert(name.endswith('.py'))
        modname = name[:-3]
        mainmod.__dict__[modname] = mainmod
    else:
        break
verbose = verbosity
sys.stderr.flush()
sys.stdout.flush()
main()

374
client.py

@@ -1,102 +1,324 @@
import struct, socket, select, subprocess, errno import struct, socket, select, errno, re, signal
import helpers, ssnet, ssh import compat.ssubprocess as ssubprocess
import helpers, ssnet, ssh, ssyslog
from ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper from ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper
from helpers import * from helpers import *
def original_dst(sock): _extra_fd = os.open('/dev/null', os.O_RDONLY)
SO_ORIGINAL_DST = 80
SOCKADDR_MIN = 16 def _islocal(ip):
sockaddr_in = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, SOCKADDR_MIN) sock = socket.socket()
(proto, port, a,b,c,d) = struct.unpack('!HHBBBB', sockaddr_in[:8]) try:
assert(socket.htons(proto) == socket.AF_INET) try:
ip = '%d.%d.%d.%d' % (a,b,c,d) sock.bind((ip, 0))
return (ip,port) except socket.error, e:
if e.args[0] == errno.EADDRNOTAVAIL:
return False # not a local IP
else:
raise
finally:
sock.close()
return True # it's a local IP, or there would have been an error
def iptables_setup(port, subnets): def got_signal(signum, frame):
subnets_str = ['%s/%d' % (ip,width) for ip,width in subnets] log('exiting on signal %d\n' % signum)
argv = (['sudo', sys.argv[0]] + sys.exit(1)
['-v'] * (helpers.verbose or 0) +
['--iptables', str(port)] + subnets_str)
rv = subprocess.call(argv)
if rv != 0:
raise Fatal('%r returned %d' % (argv, rv))
def _main(listener, listenport, use_server, remotename, subnets): _pidname = None
handlers = [] def check_daemon(pidfile):
if use_server: global _pidname
if helpers.verbose >= 1: _pidname = os.path.abspath(pidfile)
helpers.logprefix = 'c : ' try:
oldpid = open(_pidname).read(1024)
except IOError, e:
if e.errno == errno.ENOENT:
return # no pidfile, ok
else: else:
helpers.logprefix = 'client: ' raise Fatal("can't read %s: %s" % (_pidname, e))
(serverproc, serversock) = ssh.connect(remotename) if not oldpid:
mux = Mux(serversock, serversock) os.unlink(_pidname)
handlers.append(mux) return # invalid pidfile, ok
oldpid = int(oldpid.strip() or 0)
if oldpid <= 0:
os.unlink(_pidname)
return # invalid pidfile, ok
try:
os.kill(oldpid, 0)
except OSError, e:
if e.errno == errno.ESRCH:
os.unlink(_pidname)
return # outdated pidfile, ok
elif e.errno == errno.EPERM:
pass
else:
raise
raise Fatal("%s: sshuttle is already running (pid=%d)"
% (_pidname, oldpid))
expected = 'SSHUTTLE0001'
initstring = serversock.recv(len(expected)) def daemonize():
if os.fork():
rv = serverproc.poll() os._exit(0)
os.setsid()
if os.fork():
os._exit(0)
outfd = os.open(_pidname, os.O_WRONLY|os.O_CREAT|os.O_EXCL, 0666)
try:
os.write(outfd, '%d\n' % os.getpid())
finally:
os.close(outfd)
os.chdir("/")
# Normal exit when killed, or try/finally won't work and the pidfile won't
# be deleted.
signal.signal(signal.SIGTERM, got_signal)
si = open('/dev/null', 'r+')
os.dup2(si.fileno(), 0)
os.dup2(si.fileno(), 1)
si.close()
ssyslog.stderr_to_syslog()
def daemon_cleanup():
try:
os.unlink(_pidname)
except OSError, e:
if e.errno == errno.ENOENT:
pass
else:
raise
def original_dst(sock):
try:
SO_ORIGINAL_DST = 80
SOCKADDR_MIN = 16
sockaddr_in = sock.getsockopt(socket.SOL_IP,
SO_ORIGINAL_DST, SOCKADDR_MIN)
(proto, port, a,b,c,d) = struct.unpack('!HHBBBB', sockaddr_in[:8])
assert(socket.htons(proto) == socket.AF_INET)
ip = '%d.%d.%d.%d' % (a,b,c,d)
return (ip,port)
except socket.error, e:
if e.args[0] == errno.ENOPROTOOPT:
return sock.getsockname()
raise
class FirewallClient:
def __init__(self, port, subnets_include, subnets_exclude):
self.port = port
self.auto_nets = []
self.subnets_include = subnets_include
self.subnets_exclude = subnets_exclude
argvbase = ([sys.argv[0]] +
['-v'] * (helpers.verbose or 0) +
['--firewall', str(port)])
if ssyslog._p:
argvbase += ['--syslog']
argv_tries = [
['sudo', '-p', '[local sudo] Password: '] + argvbase,
['su', '-c', ' '.join(argvbase)],
argvbase
]
# we can't use stdin/stdout=subprocess.PIPE here, as we normally would,
# because stupid Linux 'su' requires that stdin be attached to a tty.
# Instead, attach a *bidirectional* socket to its stdout, and use
# that for talking in both directions.
(s1,s2) = socket.socketpair()
def setup():
# run in the child process
s2.close()
e = None
if os.getuid() == 0:
argv_tries = argv_tries[-1:] # last entry only
for argv in argv_tries:
try:
if argv[0] == 'su':
sys.stderr.write('[local su] ')
self.p = ssubprocess.Popen(argv, stdout=s1, preexec_fn=setup)
e = None
break
except OSError, e:
pass
self.argv = argv
s1.close()
self.pfile = s2.makefile('wb+')
if e:
log('Spawning firewall manager: %r\n' % self.argv)
raise Fatal(e)
line = self.pfile.readline()
self.check()
if line != 'READY\n':
raise Fatal('%r expected READY, got %r' % (self.argv, line))
def check(self):
rv = self.p.poll()
if rv: if rv:
raise Fatal('server died with error code %d' % rv) raise Fatal('%r returned %d' % (self.argv, rv))
if initstring != expected:
raise Fatal('expected server init string %r; got %r'
% (expected, initstring))
# we definitely want to do this *after* starting ssh, or we might end def start(self):
# up intercepting the ssh connection! self.pfile.write('ROUTES\n')
iptables_setup(listenport, subnets) for (ip,width) in self.subnets_include+self.auto_nets:
self.pfile.write('%d,0,%s\n' % (width, ip))
for (ip,width) in self.subnets_exclude:
self.pfile.write('%d,1,%s\n' % (width, ip))
self.pfile.write('GO\n')
self.pfile.flush()
line = self.pfile.readline()
self.check()
if line != 'STARTED\n':
raise Fatal('%r expected STARTED, got %r' % (self.argv, line))
def sethostip(self, hostname, ip):
assert(not re.search(r'[^-\w]', hostname))
assert(not re.search(r'[^0-9.]', ip))
self.pfile.write('HOST %s,%s\n' % (hostname, ip))
self.pfile.flush()
def done(self):
self.pfile.close()
rv = self.p.wait()
if rv:
raise Fatal('cleanup: %r returned %d' % (self.argv, rv))
def _main(listener, fw, ssh_cmd, remotename, python, seed_hosts, auto_nets,
syslog, daemon):
handlers = []
if helpers.verbose >= 1:
helpers.logprefix = 'c : '
else:
helpers.logprefix = 'client: '
debug1('connecting to server...\n')
try:
(serverproc, serversock) = ssh.connect(ssh_cmd, remotename, python,
stderr=ssyslog._p and ssyslog._p.stdin)
except socket.error, e:
if e.args[0] == errno.EPIPE:
raise Fatal("failed to establish ssh session (1)")
else:
raise
mux = Mux(serversock, serversock)
handlers.append(mux)
expected = 'SSHUTTLE0001'
try:
initstring = serversock.recv(len(expected))
except socket.error, e:
if e.args[0] == errno.ECONNRESET:
raise Fatal("failed to establish ssh session (2)")
else:
raise
rv = serverproc.poll()
if rv:
raise Fatal('server died with error code %d' % rv)
if initstring != expected:
raise Fatal('expected server init string %r; got %r'
% (expected, initstring))
debug1('connected.\n')
if daemon:
daemonize()
log('daemonizing (%s).\n' % _pidname)
elif syslog:
debug1('switching to syslog.\n')
ssyslog.stderr_to_syslog()
def onroutes(routestr):
if auto_nets:
for line in routestr.strip().split('\n'):
(ip,width) = line.split(',', 1)
fw.auto_nets.append((ip,int(width)))
# we definitely want to do this *after* starting ssh, or we might end
# up intercepting the ssh connection!
#
# Moreover, now that we have the --auto-nets option, we have to wait
# for the server to send us that message anyway. Even if we haven't
# set --auto-nets, we might as well wait for the message first, then
# ignore its contents.
mux.got_routes = None
fw.start()
mux.got_routes = onroutes
def onhostlist(hostlist):
debug2('got host list: %r\n' % hostlist)
for line in hostlist.strip().split():
if line:
name,ip = line.split(',', 1)
fw.sethostip(name, ip)
mux.got_host_list = onhostlist
def onaccept(): def onaccept():
sock,srcip = listener.accept() global _extra_fd
try:
sock,srcip = listener.accept()
except socket.error, e:
if e.args[0] in [errno.EMFILE, errno.ENFILE]:
debug1('Rejected incoming connection: too many open files!\n')
# free up an fd so we can eat the connection
os.close(_extra_fd)
try:
sock,srcip = listener.accept()
sock.close()
finally:
_extra_fd = os.open('/dev/null', os.O_RDONLY)
return
else:
raise
dstip = original_dst(sock) dstip = original_dst(sock)
debug1('Accept: %r:%r -> %r:%r.\n' % (srcip[0],srcip[1], debug1('Accept: %s:%r -> %s:%r.\n' % (srcip[0],srcip[1],
dstip[0],dstip[1])) dstip[0],dstip[1]))
if dstip == sock.getsockname(): if dstip[1] == listener.getsockname()[1] and _islocal(dstip[0]):
debug1("-- ignored: that's my address!\n") debug1("-- ignored: that's my address!\n")
sock.close() sock.close()
return return
if use_server: chan = mux.next_channel()
chan = mux.next_channel() mux.send(chan, ssnet.CMD_CONNECT, '%s,%s' % dstip)
mux.send(chan, ssnet.CMD_CONNECT, '%s,%s' % dstip) outwrap = MuxWrapper(mux, chan)
outwrap = MuxWrapper(mux, chan)
else:
outwrap = ssnet.connect_dst(dstip[0], dstip[1])
handlers.append(Proxy(SockWrapper(sock, sock), outwrap)) handlers.append(Proxy(SockWrapper(sock, sock), outwrap))
handlers.append(Handler([listener], onaccept)) handlers.append(Handler([listener], onaccept))
if seed_hosts != None:
debug1('seed_hosts: %r\n' % seed_hosts)
mux.send(0, ssnet.CMD_HOST_REQ, '\n'.join(seed_hosts))
while 1: while 1:
if use_server: rv = serverproc.poll()
rv = serverproc.poll() if rv:
if rv: raise Fatal('server died with error code %d' % rv)
raise Fatal('server died with error code %d' % rv)
r = set() ssnet.runonce(handlers, mux)
w = set() mux.callback()
x = set() mux.check_fullness()
handlers = filter(lambda s: s.ok, handlers)
for s in handlers:
s.pre_select(r,w,x)
debug2('Waiting: %d[%d,%d,%d]...\n'
% (len(handlers), len(r), len(w), len(x)))
(r,w,x) = select.select(r,w,x)
#log('r=%r w=%r x=%r\n' % (r,w,x))
ready = set(r) | set(w) | set(x)
for s in handlers:
if s.socks & ready:
s.callback()
def main(listenip, use_server, remotename, subnets): def main(listenip, ssh_cmd, remotename, python, seed_hosts, auto_nets,
subnets_include, subnets_exclude, syslog, daemon, pidfile):
if syslog:
ssyslog.start_syslog()
if daemon:
try:
check_daemon(pidfile)
except Fatal, e:
log("%s\n" % e)
return 5
debug1('Starting sshuttle proxy.\n') debug1('Starting sshuttle proxy.\n')
listener = socket.socket() listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
if listenip[1]: if listenip[1]:
ports = [listenip[1]] ports = [listenip[1]]
else: else:
ports = xrange(12300,65536) ports = xrange(12300,9000,-1)
last_e = None last_e = None
bound = False bound = False
debug2('Binding:') debug2('Binding:')
@@ -116,7 +338,17 @@ def main(listenip, use_server, remotename, subnets):
listenip = listener.getsockname() listenip = listener.getsockname()
debug1('Listening on %r.\n' % (listenip,)) debug1('Listening on %r.\n' % (listenip,))
fw = FirewallClient(listenip[1], subnets_include, subnets_exclude)
try: try:
return _main(listener, listenip[1], use_server, remotename, subnets) return _main(listener, fw, ssh_cmd, remotename,
python, seed_hosts, auto_nets, syslog, daemon)
finally: finally:
iptables_setup(listenip[1], []) try:
if daemon:
# it's not our child anymore; can't waitpid
fw.p.returncode = 0
fw.done()
finally:
if daemon:
daemon_cleanup()

0
compat/__init__.py Normal file

1305
compat/ssubprocess.py Normal file

File diff suppressed because it is too large.

304
firewall.py Normal file

@@ -0,0 +1,304 @@
import re, errno
import compat.ssubprocess as ssubprocess
import helpers, ssyslog
from helpers import *
def ipt_chain_exists(name):
argv = ['iptables', '-t', 'nat', '-nL']
p = ssubprocess.Popen(argv, stdout = ssubprocess.PIPE)
for line in p.stdout:
if line.startswith('Chain %s ' % name):
return True
rv = p.wait()
if rv:
raise Fatal('%r returned %d' % (argv, rv))
def ipt(*args):
argv = ['iptables', '-t', 'nat'] + list(args)
debug1('>> %s\n' % ' '.join(argv))
rv = ssubprocess.call(argv)
if rv:
raise Fatal('%r returned %d' % (argv, rv))
# We name the chain based on the transproxy port number so that it's possible
# to run multiple copies of sshuttle at the same time. Of course, the
# multiple copies shouldn't have overlapping subnets, or only the most-
# recently-started one will win (because we use "-I OUTPUT 1" instead of
# "-A OUTPUT").
def do_iptables(port, subnets):
chain = 'sshuttle-%s' % port
# basic cleanup/setup of chains
if ipt_chain_exists(chain):
ipt('-D', 'OUTPUT', '-j', chain)
ipt('-D', 'PREROUTING', '-j', chain)
ipt('-F', chain)
ipt('-X', chain)
if subnets:
ipt('-N', chain)
ipt('-F', chain)
ipt('-I', 'OUTPUT', '1', '-j', chain)
ipt('-I', 'PREROUTING', '1', '-j', chain)
# create new subnet entries. Note that we're sorting in a very
# particular order: we need to go from most-specific (largest swidth)
# to least-specific, and at any given level of specificity, we want
# excludes to come first. That's why the columns are in such a non-
# intuitive order.
for swidth,sexclude,snet in sorted(subnets, reverse=True):
if sexclude:
ipt('-A', chain, '-j', 'RETURN',
'--dest', '%s/%s' % (snet,swidth),
'-p', 'tcp')
else:
ipt('-A', chain, '-j', 'REDIRECT',
'--dest', '%s/%s' % (snet,swidth),
'-p', 'tcp',
'--to-ports', str(port),
'-m', 'ttl', '!', '--ttl', '42' # to prevent infinite loops
)
def ipfw_rule_exists(n):
argv = ['ipfw', 'list']
p = ssubprocess.Popen(argv, stdout = ssubprocess.PIPE)
found = False
for line in p.stdout:
if line.startswith('%05d ' % n):
if not ('ipttl 42 setup keep-state' in line
or ('skipto %d' % (n+1)) in line
or 'check-state' in line):
log('non-sshuttle ipfw rule: %r\n' % line.strip())
raise Fatal('non-sshuttle ipfw rule #%d already exists!' % n)
found = True
rv = p.wait()
if rv:
raise Fatal('%r returned %d' % (argv, rv))
return found
_oldctls = {}
def _fill_oldctls(prefix):
argv = ['sysctl', prefix]
p = ssubprocess.Popen(argv, stdout = ssubprocess.PIPE)
for line in p.stdout:
assert(line[-1] == '\n')
(k,v) = line[:-1].split(': ', 1)
_oldctls[k] = v
rv = p.wait()
if rv:
raise Fatal('%r returned %d' % (argv, rv))
if not line:
raise Fatal('%r returned no data' % (argv,))
def _sysctl_set(name, val):
argv = ['sysctl', '-w', '%s=%s' % (name, val)]
debug1('>> %s\n' % ' '.join(argv))
rv = ssubprocess.call(argv, stdout = open('/dev/null', 'w'))
_changedctls = []
def sysctl_set(name, val):
PREFIX = 'net.inet.ip'
assert(name.startswith(PREFIX + '.'))
val = str(val)
if not _oldctls:
_fill_oldctls(PREFIX)
if not (name in _oldctls):
debug1('>> No such sysctl: %r\n' % name)
return
oldval = _oldctls[name]
if val != oldval:
_changedctls.append(name)
return _sysctl_set(name, val)
def ipfw(*args):
argv = ['ipfw', '-q'] + list(args)
debug1('>> %s\n' % ' '.join(argv))
rv = ssubprocess.call(argv)
if rv:
raise Fatal('%r returned %d' % (argv, rv))
def do_ipfw(port, subnets):
sport = str(port)
xsport = str(port+1)
# cleanup any existing rules
if ipfw_rule_exists(port):
ipfw('delete', sport)
while _changedctls:
name = _changedctls.pop()
oldval = _oldctls[name]
_sysctl_set(name, oldval)
if subnets:
sysctl_set('net.inet.ip.fw.enable', 1)
sysctl_set('net.inet.ip.scopedroute', 0)
ipfw('add', sport, 'check-state', 'ip',
'from', 'any', 'to', 'any')
# create new subnet entries
for swidth,sexclude,snet in sorted(subnets, reverse=True):
if sexclude:
ipfw('add', sport, 'skipto', xsport,
'log', 'tcp',
'from', 'any', 'to', '%s/%s' % (snet,swidth))
else:
ipfw('add', sport, 'fwd', '127.0.0.1,%d' % port,
'log', 'tcp',
'from', 'any', 'to', '%s/%s' % (snet,swidth),
'not', 'ipttl', '42', 'keep-state', 'setup')
def program_exists(name):
paths = (os.getenv('PATH') or os.defpath).split(os.pathsep)
for p in paths:
fn = '%s/%s' % (p, name)
if os.path.exists(fn):
return not os.path.isdir(fn) and os.access(fn, os.X_OK)
hostmap = {}
def rewrite_etc_hosts(port):
HOSTSFILE='/etc/hosts'
BAKFILE='%s.sbak' % HOSTSFILE
APPEND='# sshuttle-firewall-%d AUTOCREATED' % port
old_content = ''
st = None
try:
old_content = open(HOSTSFILE).read()
st = os.stat(HOSTSFILE)
except IOError, e:
if e.errno == errno.ENOENT:
pass
else:
raise
if old_content.strip() and not os.path.exists(BAKFILE):
os.link(HOSTSFILE, BAKFILE)
tmpname = "%s.%d.tmp" % (HOSTSFILE, port)
f = open(tmpname, 'w')
for line in old_content.rstrip().split('\n'):
if line.find(APPEND) >= 0:
continue
f.write('%s\n' % line)
for (name,ip) in sorted(hostmap.items()):
f.write('%-30s %s\n' % ('%s %s' % (ip,name), APPEND))
f.close()
if st:
os.chown(tmpname, st.st_uid, st.st_gid)
os.chmod(tmpname, st.st_mode)
else:
os.chown(tmpname, 0, 0)
os.chmod(tmpname, 0644)
os.rename(tmpname, HOSTSFILE)
def restore_etc_hosts(port):
global hostmap
hostmap = {}
rewrite_etc_hosts(port)
# This is some voodoo for setting up the kernel's transparent
# proxying stuff. If subnets is empty, we just delete our sshuttle rules;
# otherwise we delete it, then make them from scratch.
#
# This code is supposed to clean up after itself by deleting its rules on
# exit. In case that fails, it's not the end of the world; future runs will
# supercede it in the transproxy list, at least, so the leftover rules
# are hopefully harmless.
def main(port, syslog):
assert(port > 0)
assert(port <= 65535)
if os.getuid() != 0:
raise Fatal('you must be root (or enable su/sudo) to set the firewall')
if program_exists('ipfw'):
do_it = do_ipfw
elif program_exists('iptables'):
do_it = do_iptables
else:
raise Fatal("can't find either ipfw or iptables; check your PATH")
# because of limitations of the 'su' command, the *real* stdin/stdout
# are both attached to stdout initially. Clone stdout into stdin so we
# can read from it.
os.dup2(1, 0)
if syslog:
ssyslog.start_syslog()
ssyslog.stderr_to_syslog()
debug1('firewall manager ready.\n')
sys.stdout.write('READY\n')
sys.stdout.flush()
# ctrl-c shouldn't be passed along to me. When the main sshuttle dies,
# I'll die automatically.
os.setsid()
# we wait until we get some input before creating the rules. That way,
# sshuttle can launch us as early as possible (and get sudo password
# authentication as early in the startup process as possible).
line = sys.stdin.readline(128)
if not line:
return # parent died; nothing to do
subnets = []
if line != 'ROUTES\n':
raise Fatal('firewall: expected ROUTES but got %r' % line)
while 1:
line = sys.stdin.readline(128)
if not line:
raise Fatal('firewall: expected route but got %r' % line)
elif line == 'GO\n':
break
try:
(width,exclude,ip) = line.strip().split(',', 2)
except:
raise Fatal('firewall: expected route or GO but got %r' % line)
subnets.append((int(width), bool(int(exclude)), ip))
try:
if line:
debug1('firewall manager: starting transproxy.\n')
do_it(port, subnets)
sys.stdout.write('STARTED\n')
try:
sys.stdout.flush()
except IOError:
# the parent process died for some reason; he's surely been loud
# enough, so no reason to report another error
return
# Now we wait until EOF or any other kind of exception. We need
# to stay running so that we don't need a *second* password
# authentication at shutdown time - that cleanup is important!
while 1:
line = sys.stdin.readline(128)
if line.startswith('HOST '):
(name,ip) = line[5:].strip().split(',', 1)
hostmap[name] = ip
rewrite_etc_hosts(port)
elif line:
raise Fatal('expected EOF, got %r' % line)
else:
break
finally:
try:
debug1('firewall manager: undoing changes.\n')
except:
pass
do_it(port, [])
restore_etc_hosts(port)

helpers.py

@@ -4,9 +4,14 @@ logprefix = ''
 verbose = 0
 
 def log(s):
-    sys.stdout.flush()
-    sys.stderr.write(logprefix + s)
-    sys.stderr.flush()
+    try:
+        sys.stdout.flush()
+        sys.stderr.write(logprefix + s)
+        sys.stderr.flush()
+    except IOError:
+        # this could happen if stderr gets forcibly disconnected, eg. because
+        # our tty closes.  That sucks, but it's no reason to abort the program.
+        pass
 
 def debug1(s):
     if verbose >= 1:
@@ -16,6 +21,17 @@ def debug2(s):
     if verbose >= 2:
         log(s)
 
+def debug3(s):
+    if verbose >= 3:
+        log(s)
+
 class Fatal(Exception):
     pass
+
+def list_contains_any(l, sub):
+    for i in sub:
+        if i in l:
+            return True
+    return False

277
hostwatch.py Normal file

@@ -0,0 +1,277 @@
import time, socket, re, select, errno
if not globals().get('skip_imports'):
import compat.ssubprocess as ssubprocess
import helpers
from helpers import *
POLL_TIME = 60*15
NETSTAT_POLL_TIME = 30
CACHEFILE=os.path.expanduser('~/.sshuttle.hosts')
_nmb_ok = True
_smb_ok = True
hostnames = {}
queue = {}
null = open('/dev/null', 'rb+')
def _is_ip(s):
return re.match(r'\d+\.\d+\.\d+\.\d+$', s)
def write_host_cache():
tmpname = '%s.%d.tmp' % (CACHEFILE, os.getpid())
try:
f = open(tmpname, 'wb')
for name,ip in sorted(hostnames.items()):
f.write('%s,%s\n' % (name, ip))
f.close()
os.rename(tmpname, CACHEFILE)
finally:
try:
os.unlink(tmpname)
except:
pass
def read_host_cache():
try:
f = open(CACHEFILE)
except IOError, e:
if e.errno == errno.ENOENT:
return
else:
raise
for line in f:
words = line.strip().split(',')
if len(words) == 2:
(name,ip) = words
name = re.sub(r'[^-\w]', '-', name).strip()
ip = re.sub(r'[^0-9.]', '', ip).strip()
if name and ip:
found_host(name, ip)
def found_host(hostname, ip):
hostname = re.sub(r'\..*', '', hostname)
hostname = re.sub(r'[^-\w]', '_', hostname)
if (ip.startswith('127.') or ip.startswith('255.')
or hostname == 'localhost'):
return
oldip = hostnames.get(hostname)
if oldip != ip:
hostnames[hostname] = ip
debug1('Found: %s: %s\n' % (hostname, ip))
sys.stdout.write('%s,%s\n' % (hostname, ip))
write_host_cache()
def _check_etc_hosts():
debug2(' > hosts\n')
for line in open('/etc/hosts'):
line = re.sub(r'#.*', '', line)
words = line.strip().split()
if not words:
continue
ip = words[0]
names = words[1:]
if _is_ip(ip):
debug3('< %s %r\n' % (ip, names))
for n in names:
check_host(n)
found_host(n, ip)
def _check_revdns(ip):
debug2(' > rev: %s\n' % ip)
try:
r = socket.gethostbyaddr(ip)
debug3('< %s\n' % r[0])
check_host(r[0])
found_host(r[0], ip)
except socket.herror, e:
pass
def _check_dns(hostname):
debug2(' > dns: %s\n' % hostname)
try:
ip = socket.gethostbyname(hostname)
debug3('< %s\n' % ip)
check_host(ip)
found_host(hostname, ip)
except socket.gaierror, e:
pass
def _check_netstat():
debug2(' > netstat\n')
argv = ['netstat', '-n']
try:
p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null)
content = p.stdout.read()
p.wait()
except OSError, e:
log('%r failed: %r\n' % (argv, e))
return
for ip in re.findall(r'\d+\.\d+\.\d+\.\d+', content):
debug3('< %s\n' % ip)
check_host(ip)
def _check_smb(hostname):
return
global _smb_ok
if not _smb_ok:
return
argv = ['smbclient', '-U', '%', '-L', hostname]
debug2(' > smb: %s\n' % hostname)
try:
p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null)
lines = p.stdout.readlines()
p.wait()
except OSError, e:
log('%r failed: %r\n' % (argv, e))
_smb_ok = False
return
lines.reverse()
# junk at top
while lines:
line = lines.pop().strip()
if re.match(r'Server\s+', line):
break
# server list section:
# Server Comment
# ------ -------
while lines:
line = lines.pop().strip()
if not line or re.match(r'-+\s+-+', line):
continue
if re.match(r'Workgroup\s+Master', line):
break
words = line.split()
hostname = words[0].lower()
debug3('< %s\n' % hostname)
check_host(hostname)
# workgroup list section:
# Workgroup Master
# --------- ------
while lines:
line = lines.pop().strip()
if re.match(r'-+\s+', line):
continue
if not line:
break
words = line.split()
(workgroup, hostname) = (words[0].lower(), words[1].lower())
debug3('< group(%s) -> %s\n' % (workgroup, hostname))
check_host(hostname)
check_workgroup(workgroup)
if lines:
assert(0)
def _check_nmb(hostname, is_workgroup, is_master):
return
global _nmb_ok
if not _nmb_ok:
return
argv = ['nmblookup'] + ['-M']*is_master + ['--', hostname]
debug2(' > n%d%d: %s\n' % (is_workgroup, is_master, hostname))
try:
p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null)
lines = p.stdout.readlines()
rv = p.wait()
except OSError, e:
log('%r failed: %r\n' % (argv, e))
_nmb_ok = False
return
if rv:
log('%r returned %d\n' % (argv, rv))
return
for line in lines:
m = re.match(r'(\d+\.\d+\.\d+\.\d+) (\w+)<\w\w>\n', line)
if m:
g = m.groups()
(ip, name) = (g[0], g[1].lower())
debug3('< %s -> %s\n' % (name, ip))
if is_workgroup:
_enqueue(_check_smb, ip)
else:
found_host(name, ip)
check_host(name)
def check_host(hostname):
if _is_ip(hostname):
_enqueue(_check_revdns, hostname)
else:
_enqueue(_check_dns, hostname)
_enqueue(_check_smb, hostname)
_enqueue(_check_nmb, hostname, False, False)
def check_workgroup(hostname):
_enqueue(_check_nmb, hostname, True, False)
_enqueue(_check_nmb, hostname, True, True)
def _enqueue(op, *args):
t = (op,args)
if queue.get(t) == None:
queue[t] = 0
def _stdin_still_ok(timeout):
r,w,x = select.select([sys.stdin.fileno()], [], [], timeout)
if r:
b = os.read(sys.stdin.fileno(), 4096)
if not b:
return False
return True
def hw_main(seed_hosts):
if helpers.verbose >= 2:
helpers.logprefix = 'HH: '
else:
helpers.logprefix = 'hostwatch: '
read_host_cache()
_enqueue(_check_etc_hosts)
_enqueue(_check_netstat)
check_host('localhost')
check_host(socket.gethostname())
check_workgroup('workgroup')
check_workgroup('-')
for h in seed_hosts:
check_host(h)
while 1:
now = time.time()
for t,last_polled in queue.items():
(op,args) = t
if not _stdin_still_ok(0):
break
maxtime = POLL_TIME
if op == _check_netstat:
maxtime = NETSTAT_POLL_TIME
if now - last_polled > maxtime:
queue[t] = time.time()
op(*args)
try:
sys.stdout.flush()
except IOError:
break
# FIXME: use a smarter timeout based on oldest last_polled
if not _stdin_still_ok(1):
break
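
The FIXME above notes that the fixed one-second sleep at the bottom of the
loop could instead be derived from the oldest last_polled entry. A hedged
sketch of what that might look like (the _next_wakeup helper is hypothetical
and not part of the diff; it assumes it lives in hostwatch.py so POLL_TIME,
NETSTAT_POLL_TIME and _check_netstat are in scope):

    def _next_wakeup(queue, now):
        # hypothetical: seconds until the first queue entry is due again
        waits = []
        for (op, args), last_polled in queue.items():
            maxtime = POLL_TIME
            if op == _check_netstat:
                maxtime = NETSTAT_POLL_TIME
            waits.append(max(0, (last_polled + maxtime) - now))
        if waits:
            return min(waits)
        return POLL_TIME

    # ...and the loop's trailing check could then become:
    #     if not _stdin_still_ok(_next_wakeup(queue, time.time())):
    #         break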

iptables.py (file deleted)

@ -1,66 +0,0 @@
import subprocess, re
import helpers
from helpers import *
def chain_exists(name):
argv = ['iptables', '-t', 'nat', '-nL']
p = subprocess.Popen(argv, stdout = subprocess.PIPE)
for line in p.stdout:
if line.startswith('Chain %s ' % name):
return True
rv = p.wait()
if rv:
raise Exception('%r returned %d' % (argv, rv))
def ipt(*args):
argv = ['iptables', '-t', 'nat'] + list(args)
debug1('>> %s\n' % ' '.join(argv))
rv = subprocess.call(argv)
if rv:
raise Exception('%r returned %d' % (argv, rv))
# This is some iptables voodoo for setting up the Linux kernel's transparent
# proxying stuff. If subnets is empty, we just delete our sshuttle chain;
# otherwise we delete it, then make it from scratch.
#
# We name the chain based on the transproxy port number so that it's possible
# to run multiple copies of sshuttle at the same time. Of course, the
# multiple copies shouldn't have overlapping subnets, or only the most-
# recently-started one will win (because we use "-I OUTPUT 1" instead of
# "-A OUTPUT").
#
# sshuttle is supposed to clean up after itself by deleting extra chains on
# exit. In case that fails, it's not the end of the world; future runs will
# supersede it in the transproxy list, at least, so the leftover iptables
# chains are mostly harmless.
def main(port, subnets):
assert(port > 0)
assert(port <= 65535)
chain = 'sshuttle-%s' % port
# basic cleanup/setup of chains
if chain_exists(chain):
ipt('-D', 'OUTPUT', '-j', chain)
ipt('-D', 'PREROUTING', '-j', chain)
ipt('-F', chain)
ipt('-X', chain)
if subnets:
ipt('-N', chain)
ipt('-F', chain)
ipt('-I', 'OUTPUT', '1', '-j', chain)
ipt('-I', 'PREROUTING', '1', '-j', chain)
# create new subnet entries
for snet,swidth in subnets:
ipt('-A', chain, '-j', 'REDIRECT',
'--dest', '%s/%s' % (snet,swidth),
'-p', 'tcp',
'--to-ports', str(port),
'-m', 'ttl', '!', '--ttl', '42' # to prevent infinite loops
)
subnets_str = ['%s/%d' % (ip,width) for ip,width in subnets]
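
The comment block in this (now deleted) module describes the per-port chain
layout. Purely as an illustration of what those ipt() calls amount to, here
is the command sequence main(12300, [('1.2.3.0', 24)]) would have issued,
assuming no leftover chain existed; the port and subnet are made-up example
values:

    # each entry is one argv passed to subprocess.call() by ipt()
    example_commands = [
        ['iptables', '-t', 'nat', '-N', 'sshuttle-12300'],
        ['iptables', '-t', 'nat', '-F', 'sshuttle-12300'],
        ['iptables', '-t', 'nat', '-I', 'OUTPUT', '1', '-j', 'sshuttle-12300'],
        ['iptables', '-t', 'nat', '-I', 'PREROUTING', '1', '-j', 'sshuttle-12300'],
        ['iptables', '-t', 'nat', '-A', 'sshuttle-12300', '-j', 'REDIRECT',
         '--dest', '1.2.3.0/24', '-p', 'tcp', '--to-ports', '12300',
         '-m', 'ttl', '!', '--ttl', '42'],  # the ttl match is the loop-prevention trick noted above
    ]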

main.py (69 lines changed)

@ -1,6 +1,7 @@
#!/usr/bin/env python #!/usr/bin/env python
import sys, os, re import sys, os, re
import helpers, options, client, server, iptables import helpers, options, client, server, firewall, hostwatch
import compat.ssubprocess as ssubprocess
from helpers import * from helpers import *
@ -45,43 +46,77 @@ def parse_ipport(s):
optspec = """ optspec = """
sshuttle [-l [ip:]port] [-r [username@]sshserver] <subnets...> sshuttle [-l [ip:]port] [-r [username@]sshserver[:port]] <subnets...>
sshuttle --iptables <port> <subnets...>
sshuttle --server sshuttle --server
sshuttle --firewall <port> <subnets...>
sshuttle --hostwatch
-- --
l,listen= transproxy to this ip address and port number [default=0] l,listen= transproxy to this ip address and port number [127.0.0.1:0]
H,auto-hosts scan for remote hostnames and update local /etc/hosts
N,auto-nets automatically determine subnets to route
python= path to python interpreter on the remote server [python]
r,remote= ssh hostname (and optional username) of remote sshuttle server r,remote= ssh hostname (and optional username) of remote sshuttle server
x,exclude= exclude this subnet (can be used more than once)
v,verbose increase debug message verbosity v,verbose increase debug message verbosity
noserver don't use a separate server process (mostly for debugging) e,ssh-cmd= the command to use to connect to the remote [ssh]
server [internal use only] seed-hosts= with -H, use these hostnames for initial scan (comma-separated)
iptables [internal use only] D,daemon run in the background as a daemon
syslog send log messages to syslog (default if you use --daemon)
pidfile= pidfile name (only if using --daemon) [./sshuttle.pid]
server (internal use only)
firewall (internal use only)
hostwatch (internal use only)
""" """
o = options.Options('sshuttle', optspec) o = options.Options('sshuttle', optspec)
(opt, flags, extra) = o.parse(sys.argv[1:]) (opt, flags, extra) = o.parse(sys.argv[1:])
if opt.daemon:
opt.syslog = 1
helpers.verbose = opt.verbose helpers.verbose = opt.verbose
try: try:
if opt.server: if opt.server:
if len(extra) != 0:
o.fatal('no arguments expected')
sys.exit(server.main()) sys.exit(server.main())
elif opt.iptables: elif opt.firewall:
if len(extra) < 1: if len(extra) != 1:
o.fatal('at least one argument expected') o.fatal('exactly one argument expected')
sys.exit(iptables.main(int(extra[0]), sys.exit(firewall.main(int(extra[0]), opt.syslog))
parse_subnets(extra[1:]))) elif opt.hostwatch:
sys.exit(hostwatch.hw_main(extra))
else: else:
if len(extra) < 1: if len(extra) < 1 and not opt.auto_nets:
o.fatal('at least one subnet expected') o.fatal('at least one subnet (or -N) expected')
includes = extra
excludes = ['127.0.0.0/8']
for k,v in flags:
if k in ('-x','--exclude'):
excludes.append(v)
remotename = opt.remote remotename = opt.remote
if remotename == '' or remotename == '-': if remotename == '' or remotename == '-':
remotename = None remotename = None
if opt.seed_hosts and not opt.auto_hosts:
o.fatal('--seed-hosts only works if you also use -H')
if opt.seed_hosts:
sh = re.split(r'[\s,]+', (opt.seed_hosts or "").strip())
elif opt.auto_hosts:
sh = []
else:
sh = None
sys.exit(client.main(parse_ipport(opt.listen or '0.0.0.0:0'), sys.exit(client.main(parse_ipport(opt.listen or '0.0.0.0:0'),
not opt.noserver, opt.ssh_cmd,
remotename, remotename,
parse_subnets(extra))) opt.python,
sh,
opt.auto_nets,
parse_subnets(includes),
parse_subnets(excludes),
opt.syslog, opt.daemon, opt.pidfile))
except Fatal, e: except Fatal, e:
log('fatal: %s\n' % e) log('fatal: %s\n' % e)
sys.exit(99) sys.exit(99)
except KeyboardInterrupt: except KeyboardInterrupt:
log('\nKeyboard interrupt: exiting.\n') log('\n')
log('Keyboard interrupt: exiting.\n')
sys.exit(1) sys.exit(1)
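
One behavioural detail worth illustrating: the new -x/--exclude handling
walks the raw flags list rather than opt, so the option can be repeated. A
sketch with a made-up command line (values are examples only):

    # $ sshuttle -r example.org -x 10.1.0.0/16 -x 10.2.0.0/16 0/0
    #
    # o.parse() then returns roughly:
    #   flags == [('-r', 'example.org'),
    #             ('-x', '10.1.0.0/16'), ('-x', '10.2.0.0/16')]
    #   extra == ['0/0']
    #
    # so the loop over flags builds:
    includes = ['0/0']
    excludes = ['127.0.0.0/8', '10.1.0.0/16', '10.2.0.0/16']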

options.py

@ -1,30 +1,94 @@
import sys, textwrap, getopt, re """Command-line options parser.
With the help of an options spec string, easily parse command-line options.
"""
import sys, os, textwrap, getopt, re, struct
class OptDict: class OptDict:
def __init__(self): def __init__(self):
self._opts = {} self._opts = {}
def __setitem__(self, k, v): def __setitem__(self, k, v):
if k.startswith('no-') or k.startswith('no_'):
k = k[3:]
v = not v
self._opts[k] = v self._opts[k] = v
def __getitem__(self, k): def __getitem__(self, k):
if k.startswith('no-') or k.startswith('no_'):
return not self._opts[k[3:]]
return self._opts[k] return self._opts[k]
def __getattr__(self, k): def __getattr__(self, k):
return self[k] return self[k]
def _default_onabort(msg):
sys.exit(97)
def _intify(v):
try:
vv = int(v or '')
if str(vv) == v:
return vv
except ValueError:
pass
return v
def _atoi(v):
try:
return int(v or 0)
except ValueError:
return 0
def _remove_negative_kv(k, v):
if k.startswith('no-') or k.startswith('no_'):
return k[3:], not v
return k,v
def _remove_negative_k(k):
return _remove_negative_kv(k, None)[0]
def _tty_width():
s = struct.pack("HHHH", 0, 0, 0, 0)
try:
import fcntl, termios
s = fcntl.ioctl(sys.stderr.fileno(), termios.TIOCGWINSZ, s)
except (IOError, ImportError):
return _atoi(os.environ.get('WIDTH')) or 70
(ysize,xsize,ypix,xpix) = struct.unpack('HHHH', s)
return xsize
class Options: class Options:
def __init__(self, exe, optspec, optfunc=getopt.gnu_getopt): """Option parser.
When constructed, two strings are mandatory. The first one is the command
name showed before error messages. The second one is a string called an
optspec that specifies the synopsis and option flags and their description.
For more information about optspecs, consult the bup-options(1) man page.
Two optional arguments specify an alternative parsing function and an
alternative behaviour on abort (after having output the usage string).
By default, the parser function is getopt.gnu_getopt, and the abort
behaviour is to exit the program.
"""
def __init__(self, exe, optspec, optfunc=getopt.gnu_getopt,
onabort=_default_onabort):
self.exe = exe self.exe = exe
self.optspec = optspec self.optspec = optspec
self._onabort = onabort
self.optfunc = optfunc self.optfunc = optfunc
self._aliases = {} self._aliases = {}
self._shortopts = 'h?' self._shortopts = 'h?'
self._longopts = ['help'] self._longopts = ['help']
self._hasparms = {} self._hasparms = {}
self._defaults = {}
self._usagestr = self._gen_usage() self._usagestr = self._gen_usage()
def _gen_usage(self): def _gen_usage(self):
out = [] out = []
lines = self.optspec.strip().split('\n') lines = self.optspec.strip().split('\n')
@ -36,10 +100,13 @@ class Options:
out.append('%s: %s\n' % (first_syn and 'usage' or ' or', l)) out.append('%s: %s\n' % (first_syn and 'usage' or ' or', l))
first_syn = False first_syn = False
out.append('\n') out.append('\n')
last_was_option = False
while lines: while lines:
l = lines.pop() l = lines.pop()
if l.startswith(' '): if l.startswith(' '):
out.append('\n%s\n' % l.lstrip()) out.append('%s%s\n' % (last_was_option and '\n' or '',
l.lstrip()))
last_was_option = False
elif l: elif l:
(flags, extra) = l.split(' ', 1) (flags, extra) = l.split(' ', 1)
extra = extra.strip() extra = extra.strip()
@ -48,18 +115,24 @@ class Options:
has_parm = 1 has_parm = 1
else: else:
has_parm = 0 has_parm = 0
g = re.search(r'\[([^\]]*)\]$', extra)
if g:
defval = g.group(1)
else:
defval = None
flagl = flags.split(',') flagl = flags.split(',')
flagl_nice = [] flagl_nice = []
for f in flagl: for f in flagl:
f_nice = re.sub(r'\W', '_', f) f,dvi = _remove_negative_kv(f, _intify(defval))
self._aliases[f] = flagl[0] self._aliases[f] = _remove_negative_k(flagl[0])
self._aliases[f_nice] = flagl[0]
self._hasparms[f] = has_parm self._hasparms[f] = has_parm
self._defaults[f] = dvi
if len(f) == 1: if len(f) == 1:
self._shortopts += f + (has_parm and ':' or '') self._shortopts += f + (has_parm and ':' or '')
flagl_nice.append('-' + f) flagl_nice.append('-' + f)
else: else:
assert(not f.startswith('no-')) # supported implicitly f_nice = re.sub(r'\W', '_', f)
self._aliases[f_nice] = _remove_negative_k(flagl[0])
self._longopts.append(f + (has_parm and '=' or '')) self._longopts.append(f + (has_parm and '=' or ''))
self._longopts.append('no-' + f) self._longopts.append('no-' + f)
flagl_nice.append('--' + f) flagl_nice.append('--' + f)
@ -67,52 +140,62 @@ class Options:
if has_parm: if has_parm:
flags_nice += ' ...' flags_nice += ' ...'
prefix = ' %-20s ' % flags_nice prefix = ' %-20s ' % flags_nice
argtext = '\n'.join(textwrap.wrap(extra, width=70, argtext = '\n'.join(textwrap.wrap(extra, width=_tty_width(),
initial_indent=prefix, initial_indent=prefix,
subsequent_indent=' '*28)) subsequent_indent=' '*28))
out.append(argtext + '\n') out.append(argtext + '\n')
last_was_option = True
else: else:
out.append('\n') out.append('\n')
last_was_option = False
return ''.join(out).rstrip() + '\n' return ''.join(out).rstrip() + '\n'
def usage(self): def usage(self, msg=""):
"""Print usage string to stderr and abort."""
sys.stderr.write(self._usagestr) sys.stderr.write(self._usagestr)
sys.exit(97) e = self._onabort and self._onabort(msg) or None
if e:
raise e
def fatal(self, s): def fatal(self, s):
sys.stderr.write('error: %s\n' % s) """Print an error message to stderr and abort with usage string."""
return self.usage() msg = 'error: %s\n' % s
sys.stderr.write(msg)
return self.usage(msg)
def parse(self, args): def parse(self, args):
"""Parse a list of arguments and return (options, flags, extra).
In the returned tuple, "options" is an OptDict with known options,
"flags" is a list of option flags that were used on the command-line,
and "extra" is a list of positional arguments.
"""
try: try:
(flags,extra) = self.optfunc(args, self._shortopts, self._longopts) (flags,extra) = self.optfunc(args, self._shortopts, self._longopts)
except getopt.GetoptError, e: except getopt.GetoptError, e:
self.fatal(e) self.fatal(e)
opt = OptDict() opt = OptDict()
for f in self._aliases.values():
opt[f] = None for k,v in self._defaults.iteritems():
k = self._aliases[k]
opt[k] = v
for (k,v) in flags: for (k,v) in flags:
while k.startswith('-'): k = k.lstrip('-')
k = k[1:] if k in ('h', '?', 'help'):
if k in ['h', '?', 'help']:
self.usage() self.usage()
if k.startswith('no-'): if k.startswith('no-'):
k = self._aliases[k[3:]] k = self._aliases[k[3:]]
opt[k] = None v = 0
else: else:
k = self._aliases[k] k = self._aliases[k]
if not self._hasparms[k]: if not self._hasparms[k]:
assert(v == '') assert(v == '')
opt[k] = (opt._opts.get(k) or 0) + 1 v = (opt._opts.get(k) or 0) + 1
else: else:
try: v = _intify(v)
vv = int(v) opt[k] = v
if str(vv) == v: for (f1,f2) in self._aliases.iteritems():
v = vv opt[f1] = opt._opts.get(f2)
except ValueError:
pass
opt[k] = v
for (f1,f2) in self._aliases.items():
opt[f1] = opt[f2]
return (opt,flags,extra) return (opt,flags,extra)
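
Since options.py now pre-fills defaults from the [bracketed] text and returns
integers where possible, here is a minimal standalone sketch of the parser's
interface, mirroring how main.py uses it; 'demo' and its flags are invented
for the example:

    import sys
    import options

    optspec = """
    demo [-v] [--count=n] <args...>
    --
    v,verbose   increase verbosity
    count=      how many times to do the thing [3]
    """

    o = options.Options('demo', optspec)
    (opt, flags, extra) = o.parse(sys.argv[1:])

    # opt.count defaults to the integer 3 (from the [3] above), opt.verbose
    # counts repeated -v flags, and extra holds the positional arguments.
    print opt.count, opt.verbose, extra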

server.py (177 lines changed)

@ -1,45 +1,176 @@
import struct, socket, select import re, struct, socket, select, traceback
import ssnet, helpers if not globals().get('skip_imports'):
from ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper import ssnet, helpers, hostwatch
from helpers import * import compat.ssubprocess as ssubprocess
from ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper
from helpers import *
def _ipmatch(ipstr):
if ipstr == 'default':
ipstr = '0.0.0.0/0'
m = re.match(r'^(\d+(\.\d+(\.\d+(\.\d+)?)?)?)(?:/(\d+))?$', ipstr)
if m:
g = m.groups()
ips = g[0]
width = int(g[4] or 32)
if g[1] == None:
ips += '.0.0.0'
width = min(width, 8)
elif g[2] == None:
ips += '.0.0'
width = min(width, 16)
elif g[3] == None:
ips += '.0'
width = min(width, 24)
return (struct.unpack('!I', socket.inet_aton(ips))[0], width)
def _ipstr(ip, width):
if width >= 32:
return ip
else:
return "%s/%d" % (ip, width)
def _maskbits(netmask):
if not netmask:
return 32
for i in range(32):
if netmask[0] & _shl(1, i):
return 32-i
return 0
def _shl(n, bits):
return n * int(2**bits)
def _list_routes():
argv = ['netstat', '-rn']
p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE)
routes = []
for line in p.stdout:
cols = re.split(r'\s+', line)
ipw = _ipmatch(cols[0])
if not ipw:
continue # some lines won't be parseable; never mind
maskw = _ipmatch(cols[2]) # linux only
mask = _maskbits(maskw) # returns 32 if maskw is null
width = min(ipw[1], mask)
ip = ipw[0] & _shl(_shl(1, width) - 1, 32-width)
routes.append((socket.inet_ntoa(struct.pack('!I', ip)), width))
rv = p.wait()
if rv != 0:
log('WARNING: %r returned %d\n' % (argv, rv))
log('WARNING: That prevents --auto-nets from working.\n')
return routes
def list_routes():
for (ip,width) in _list_routes():
if not ip.startswith('0.') and not ip.startswith('127.'):
yield (ip,width)
def _exc_dump():
exc_info = sys.exc_info()
return ''.join(traceback.format_exception(*exc_info))
def start_hostwatch(seed_hosts):
s1,s2 = socket.socketpair()
pid = os.fork()
if not pid:
# child
rv = 99
try:
try:
s2.close()
os.dup2(s1.fileno(), 1)
os.dup2(s1.fileno(), 0)
s1.close()
rv = hostwatch.hw_main(seed_hosts) or 0
except Exception, e:
log('%s\n' % _exc_dump())
rv = 98
finally:
os._exit(rv)
s1.close()
return pid,s2
class Hostwatch:
def __init__(self):
self.pid = 0
self.sock = None
def main(): def main():
# synchronization header
sys.stdout.write('SSHUTTLE0001')
sys.stdout.flush()
if helpers.verbose >= 1: if helpers.verbose >= 1:
helpers.logprefix = ' s: ' helpers.logprefix = ' s: '
else: else:
helpers.logprefix = 'server: ' helpers.logprefix = 'server: '
routes = list(list_routes())
debug1('available routes:\n')
for r in routes:
debug1(' %s/%d\n' % r)
# synchronization header
sys.stdout.write('SSHUTTLE0001')
sys.stdout.flush()
handlers = [] handlers = []
mux = Mux(socket.fromfd(sys.stdin.fileno(), mux = Mux(socket.fromfd(sys.stdin.fileno(),
socket.AF_INET, socket.SOCK_STREAM), socket.AF_INET, socket.SOCK_STREAM),
socket.fromfd(sys.stdout.fileno(), socket.fromfd(sys.stdout.fileno(),
socket.AF_INET, socket.SOCK_STREAM)) socket.AF_INET, socket.SOCK_STREAM))
handlers.append(mux) handlers.append(mux)
routepkt = ''
for r in routes:
routepkt += '%s,%d\n' % r
mux.send(0, ssnet.CMD_ROUTES, routepkt)
hw = Hostwatch()
hw.leftover = ''
def hostwatch_ready():
assert(hw.pid)
content = hw.sock.recv(4096)
if content:
lines = (hw.leftover + content).split('\n')
if lines[-1]:
# no terminating newline: entry isn't complete yet!
hw.leftover = lines.pop()
lines.append('')
else:
hw.leftover = ''
mux.send(0, ssnet.CMD_HOST_LIST, '\n'.join(lines))
else:
raise Fatal('hostwatch process died')
def got_host_req(data):
if not hw.pid:
(hw.pid,hw.sock) = start_hostwatch(data.strip().split())
handlers.append(Handler(socks = [hw.sock],
callback = hostwatch_ready))
mux.got_host_req = got_host_req
def new_channel(channel, data): def new_channel(channel, data):
(dstip,dstport) = data.split(',', 1) (dstip,dstport) = data.split(',', 1)
dstport = int(dstport) dstport = int(dstport)
outwrap = ssnet.connect_dst(dstip,dstport) outwrap = ssnet.connect_dst(dstip,dstport)
handlers.append(Proxy(MuxWrapper(mux, channel), outwrap)) handlers.append(Proxy(MuxWrapper(mux, channel), outwrap))
mux.new_channel = new_channel mux.new_channel = new_channel
while mux.ok: while mux.ok:
r = set() if hw.pid:
w = set() assert(hw.pid > 0)
x = set() (rpid, rv) = os.waitpid(hw.pid, os.WNOHANG)
handlers = filter(lambda s: s.ok, handlers) if rpid:
for s in handlers: raise Fatal('hostwatch exited unexpectedly: code 0x%04x\n' % rv)
s.pre_select(r,w,x)
debug2('Waiting: %d[%d,%d,%d]...\n' ssnet.runonce(handlers, mux)
% (len(handlers), len(r), len(w), len(x))) mux.check_fullness()
(r,w,x) = select.select(r,w,x) mux.callback()
#log('r=%r w=%r x=%r\n' % (r,w,x))
ready = set(r) | set(w) | set(x)
for s in handlers:
if s.socks & ready:
s.callback()
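
_ipmatch() above does the heavy lifting for --auto-nets: it turns the
(possibly truncated) address strings from netstat -rn into a (packed-ip,
width) pair. A small illustrative check, assuming server.py and the modules
it imports are on the path; dotted() is a throwaway helper for readability:

    import socket, struct
    from server import _ipmatch, _ipstr

    def dotted(ipw):
        ip, width = ipw
        return (socket.inet_ntoa(struct.pack('!I', ip)), width)

    assert dotted(_ipmatch('default')) == ('0.0.0.0', 0)
    assert dotted(_ipmatch('10')) == ('10.0.0.0', 8)          # short form widens the mask
    assert dotted(_ipmatch('192.168.42')) == ('192.168.42.0', 24)
    assert dotted(_ipmatch('1.2.3.4/16')) == ('1.2.3.4', 16)  # explicit width is kept
    assert _ipmatch('not an ip') is None
    assert _ipstr('1.2.3.0', 24) == '1.2.3.0/24'
    assert _ipstr('1.2.3.4', 32) == '1.2.3.4'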

ssh.py (101 lines changed)

@ -1,38 +1,95 @@
import sys, os, re, subprocess, socket import sys, os, re, socket, zlib
import compat.ssubprocess as ssubprocess
import helpers import helpers
from helpers import *
def connect(rhost):
def readfile(name):
basedir = os.path.dirname(os.path.abspath(sys.argv[0]))
path = [basedir] + sys.path
for d in path:
fullname = os.path.join(d, name)
if os.path.exists(fullname):
return open(fullname, 'rb').read()
raise Exception("can't find file %r in any of %r" % (name, path))
def empackage(z, filename):
(path,basename) = os.path.split(filename)
content = z.compress(readfile(filename))
content += z.flush(zlib.Z_SYNC_FLUSH)
return '%s\n%d\n%s' % (basename,len(content), content)
def connect(ssh_cmd, rhostport, python, stderr):
main_exe = sys.argv[0] main_exe = sys.argv[0]
nicedir = os.path.split(os.path.abspath(main_exe))[0] portl = []
nicedir = re.sub(r':', "_", nicedir)
rhostIsIPv6 = False
if (rhostport or '').count(':') > 1:
rhostIsIPv6 = True
if rhostport.count(']') or rhostport.count('['):
result = rhostport.split(']')
rhost = result[0].strip('[')
if len(result) > 1:
result[1] = result[1].strip(':')
if result[1] is not '':
portl = ['-p', str(int(result[1]))]
else: # can't disambiguate IPv6 colons and a port number. pass the hostname through.
rhost = rhostport
else: # IPv4
l = (rhostport or '').split(':', 1)
rhost = l[0]
if len(l) > 1:
portl = ['-p', str(int(l[1]))]
if rhost == '-': if rhost == '-':
rhost = None rhost = None
ipv6flag = []
if rhostIsIPv6:
ipv6flag = ['-6']
z = zlib.compressobj(1)
content = readfile('assembler.py')
content2 = (empackage(z, 'helpers.py') +
empackage(z, 'compat/ssubprocess.py') +
empackage(z, 'ssnet.py') +
empackage(z, 'hostwatch.py') +
empackage(z, 'server.py') +
"\n")
pyscript = r"""
import sys;
skip_imports=1;
verbosity=%d;
exec compile(sys.stdin.read(%d), "assembler.py", "exec")
""" % (helpers.verbose or 0, len(content))
pyscript = re.sub(r'\s+', ' ', pyscript.strip())
if not rhost: if not rhost:
argv = ['sshuttle', '--server'] + ['-v']*(helpers.verbose or 0) argv = [python, '-c', pyscript]
else: else:
# WARNING: shell quoting security holes are possible here, so we if ssh_cmd:
# have to be super careful. We have to use 'sh -c' because sshl = ssh_cmd.split(' ')
# csh-derived shells can't handle PATH= notation. We can't else:
# set PATH in advance, because ssh probably replaces it. We sshl = ['ssh']
# can't exec *safely* using argv, because *both* ssh and 'sh -c' argv = (sshl +
# allow shellquoting. So we end up having to double-shellquote portl +
# stuff here. ipv6flag +
escapedir = re.sub(r'([^\w/])', r'\\\\\\\1', nicedir) [rhost, '--', "'%s' -c '%s'" % (python, pyscript)])
cmd = r"""
sh -c PATH=%s:'$PATH exec sshuttle --server%s'
""" % (escapedir, ' -v' * (helpers.verbose or 0))
argv = ['ssh', rhost, '--', cmd.strip()]
(s1,s2) = socket.socketpair() (s1,s2) = socket.socketpair()
def setup(): def setup():
# runs in the child process # runs in the child process
s2.close() s2.close()
if not rhost:
os.environ['PATH'] = ':'.join([nicedir,
os.environ.get('PATH', '')])
os.setsid()
s1a,s1b = os.dup(s1.fileno()), os.dup(s1.fileno()) s1a,s1b = os.dup(s1.fileno()), os.dup(s1.fileno())
s1.close() s1.close()
p = subprocess.Popen(argv, stdin=s1a, stdout=s1b, preexec_fn=setup) debug2('executing: %r\n' % argv)
p = ssubprocess.Popen(argv, stdin=s1a, stdout=s1b, preexec_fn=setup,
close_fds=True, stderr=stderr)
os.close(s1a) os.close(s1a)
os.close(s1b) os.close(s1b)
s2.sendall(content)
s2.sendall(content2)
return p, s2 return p, s2
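
connect() above now ships each module to the remote python as a simple
"name, length, zlib-synced body" frame produced by empackage(). The frames
are presumably consumed by assembler.py (which is sent first and is not part
of this diff), so purely as an illustration of the framing, a reader could
look roughly like this (read_modules is hypothetical):

    import zlib

    def read_modules(stream):
        # one frame per module: "<basename>\n<compressed-length>\n<bytes>"
        # all frames come from a single compressobj, each ending in a
        # Z_SYNC_FLUSH, so one decompressobj can decode them in order
        d = zlib.decompressobj()
        while 1:
            name = stream.readline().strip()
            if not name:
                break
            nbytes = int(stream.readline())
            yield name, d.decompress(stream.read(nbytes))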

sshuttle.md (new file, 243 lines)

@ -0,0 +1,243 @@
% sshuttle(8) Sshuttle 0.44
% Avery Pennarun <apenwarr@gmail.com>
% 2010-12-31
# NAME
sshuttle - a transparent proxy-based VPN using ssh
# SYNOPSIS
sshuttle [options...] [-r [username@]sshserver[:port]] \<subnets...\>
# DESCRIPTION
sshuttle allows you to create a VPN connection from your
machine to any remote server that you can connect to via
ssh, as long as that server has python 2.3 or higher.
To work, you must have root access on the local machine,
but you can have a normal account on the server.
It's valid to run sshuttle more than once simultaneously on
a single client machine, connecting to a different server
every time, so you can be on more than one VPN at once.
If run on a router, sshuttle can forward traffic for your
entire subnet to the VPN.
# OPTIONS
\<subnets...\>
: a list of subnets to route over the VPN, in the form
`a.b.c.d[/width]`. Valid examples are 1.2.3.4 (a
single IP address), 1.2.3.4/32 (equivalent to 1.2.3.4),
1.2.3.0/24 (a 24-bit subnet, ie. with a 255.255.255.0
netmask), and 0/0 ('just route everything through the
VPN').
-l, --listen=*[ip:]port*
: use this ip address and port number as the transparent
proxy port. By default sshuttle finds an available
port automatically and listens on IP 127.0.0.1
(localhost), so you don't need to override it, and
connections are only proxied from the local machine,
not from outside machines. If you want to accept
connections from other machines on your network (ie. to
run sshuttle on a router) try enabling IP Forwarding in
your kernel, then using `--listen 0.0.0.0:0`.
-H, --auto-hosts
: scan for remote hostnames and update the local /etc/hosts
file with matching entries for as long as the VPN is
open. This is nicer than changing your system's DNS
(/etc/resolv.conf) settings, for several reasons. First,
hostnames are added without domain names attached, so
you can `ssh thatserver` without worrying if your local
domain matches the remote one. Second, if you sshuttle
into more than one VPN at a time, it's impossible to
use more than one DNS server at once anyway, but
sshuttle correctly merges /etc/hosts entries between
all running copies. Third, if you're only routing a
few subnets over the VPN, you probably would prefer to
keep using your local DNS server for everything else.
-N, --auto-nets
: in addition to the subnets provided on the command
line, ask the server which subnets it thinks we should
route, and route those automatically. The suggestions
are taken automatically from the server's routing
table.
--python
: specify the name/path of the remote python interpreter.
The default is just `python`, which means to use the
default python interpreter on the remote system's PATH.
-r, --remote=*[username@]sshserver[:port]*
: the remote hostname and optional username and ssh
port number to use for connecting to the remote server.
For example, example.com, testuser@example.com,
testuser@example.com:2222, or example.com:2244.
-x, --exclude=*subnet*
: explicitly exclude this subnet from forwarding. The
format of this option is the same as the `<subnets>`
option. To exclude more than one subnet, specify the
`-x` option more than once. You can say something like
`0/0 -x 1.2.3.0/24` to forward everything except the
local subnet over the VPN, for example.
-v, --verbose
: print more information about the session. This option
can be used more than once for increased verbosity. By
default, sshuttle prints only error messages.
-e, --ssh-cmd
: the command to use to connect to the remote server. The
default is just `ssh`. Use this if your ssh client is
in a non-standard location or you want to provide extra
options to the ssh command, for example, `-e 'ssh -v'`.
--seed-hosts
: a comma-separated list of hostnames to use to
initialize the `--auto-hosts` scan algorithm.
`--auto-hosts` does things like poll local SMB servers
for lists of local hostnames, but you can speed it up
if you use this option to give it a few names to start
from.
-D, --daemon
: automatically fork into the background after connecting
to the remote server. Implies `--syslog`.
--syslog
: after connecting, send all log messages to the
`syslog`(3) service instead of stderr. This is
implicit if you use `--daemon`.
--pidfile=*pidfilename*
: when using `--daemon`, save sshuttle's pid to
*pidfilename*. The default is `sshuttle.pid` in the
current directory.
--server
: (internal use only) run the sshuttle server on
stdin/stdout. This is what the client runs on
the remote end.
--firewall
: (internal use only) run the firewall manager. This is
the only part of sshuttle that must run as root. If
you start sshuttle as a non-root user, it will
automatically run `sudo` or `su` to start the firewall
manager, but the core of sshuttle still runs as a
normal user.
--hostwatch
: (internal use only) run the hostwatch daemon. This
process runs on the server side and collects hostnames for
the `--auto-hosts` option. Using this option by itself
makes it a lot easier to debug and test the `--auto-hosts`
feature.
# EXAMPLES
Test locally by proxying all local connections, without using ssh:
$ sshuttle -v 0/0
Starting sshuttle proxy.
Listening on ('0.0.0.0', 12300).
[local sudo] Password:
firewall manager ready.
c : connecting to server...
s: available routes:
s: 192.168.42.0/24
c : connected.
firewall manager: starting transproxy.
c : Accept: 192.168.42.106:50035 -> 192.168.42.121:139.
c : Accept: 192.168.42.121:47523 -> 77.141.99.22:443.
...etc...
^C
firewall manager: undoing changes.
KeyboardInterrupt
c : Keyboard interrupt: exiting.
c : SW#8:192.168.42.121:47523: deleting
c : SW#6:192.168.42.106:50035: deleting
Test connection to a remote server, with automatic hostname
and subnet guessing:
$ sshuttle -vNHr example.org
Starting sshuttle proxy.
Listening on ('0.0.0.0', 12300).
firewall manager ready.
c : connecting to server...
s: available routes:
s: 77.141.99.0/24
c : connected.
c : seed_hosts: []
firewall manager: starting transproxy.
hostwatch: Found: testbox1: 1.2.3.4
hostwatch: Found: mytest2: 5.6.7.8
hostwatch: Found: domaincontroller: 99.1.2.3
c : Accept: 192.168.42.121:60554 -> 77.141.99.22:22.
^C
firewall manager: undoing changes.
c : Keyboard interrupt: exiting.
c : SW#6:192.168.42.121:60554: deleting
# DISCUSSION
When it starts, sshuttle creates an ssh session to the
server specified by the `-r` option. If `-r` is omitted,
it will start both its client and server locally, which is
sometimes useful for testing.
After connecting to the remote server, sshuttle uploads its
(python) source code to the remote end and executes it
there. Thus, you don't need to install sshuttle on the
remote server, and there are never sshuttle version
conflicts between client and server.
Unlike most VPNs, sshuttle forwards sessions, not packets.
That is, it uses kernel transparent proxying (`iptables
REDIRECT` rules on Linux, or `ipfw fwd` rules on BSD) to
capture outgoing TCP sessions, then creates entirely
separate TCP sessions out to the original destination at
the other end of the tunnel.
Packet-level forwarding (eg. using the tun/tap devices on
Linux) seems elegant at first, but it results in
several problems, notably the 'tcp over tcp' problem. The
tcp protocol depends fundamentally on packets being dropped
in order to implement its congestion control algorithm; if
you pass tcp packets through a tcp-based tunnel (such as
ssh), the inner tcp packets will never be dropped, and so
the inner tcp stream's congestion control will be
completely broken, and performance will be terrible. Thus,
packet-based VPNs (such as IPsec and openvpn) cannot use
tcp-based encrypted streams like ssh or ssl, and have to
implement their own encryption from scratch, which is very
complex and error prone.
sshuttle's simplicity comes from the fact that it can
safely use the existing ssh encrypted tunnel without
incurring a performance penalty. It does this by letting
the client-side kernel manage the incoming tcp stream, and
the server-side kernel manage the outgoing tcp stream;
there is no need for congestion control to be shared
between the two separate streams, so a tcp-based tunnel is
fine.
# SEE ALSO
`ssh`(1), `python`(1)

ssnet.py (232 lines changed)

@ -1,5 +1,12 @@
import struct, socket, errno, select import struct, socket, errno, select
from helpers import * if not globals().get('skip_imports'):
from helpers import *
# these don't exist in the socket module in python 2.3!
SHUT_RD = 0
SHUT_WR = 1
SHUT_RDWR = 2
HDR_LEN = 8 HDR_LEN = 8
@ -8,22 +15,44 @@ CMD_EXIT = 0x4200
CMD_PING = 0x4201 CMD_PING = 0x4201
CMD_PONG = 0x4202 CMD_PONG = 0x4202
CMD_CONNECT = 0x4203 CMD_CONNECT = 0x4203
CMD_CLOSE = 0x4204 CMD_STOP_SENDING = 0x4204
CMD_EOF = 0x4205 CMD_EOF = 0x4205
CMD_DATA = 0x4206 CMD_DATA = 0x4206
CMD_ROUTES = 0x4207
CMD_HOST_REQ = 0x4208
CMD_HOST_LIST = 0x4209
cmd_to_name = { cmd_to_name = {
CMD_EXIT: 'EXIT', CMD_EXIT: 'EXIT',
CMD_PING: 'PING', CMD_PING: 'PING',
CMD_PONG: 'PONG', CMD_PONG: 'PONG',
CMD_CONNECT: 'CONNECT', CMD_CONNECT: 'CONNECT',
CMD_CLOSE: 'CLOSE', CMD_STOP_SENDING: 'STOP_SENDING',
CMD_EOF: 'EOF', CMD_EOF: 'EOF',
CMD_DATA: 'DATA', CMD_DATA: 'DATA',
CMD_ROUTES: 'ROUTES',
CMD_HOST_REQ: 'HOST_REQ',
CMD_HOST_LIST: 'HOST_LIST',
} }
def _add(l, elem):
if not elem in l:
l.append(elem)
def _fds(l):
out = []
for i in l:
try:
out.append(i.fileno())
except AttributeError:
out.append(i)
out.sort()
return out
def _nb_clean(func, *args): def _nb_clean(func, *args):
try: try:
return func(*args) return func(*args)
@ -31,6 +60,7 @@ def _nb_clean(func, *args):
if e.errno not in (errno.EWOULDBLOCK, errno.EAGAIN): if e.errno not in (errno.EWOULDBLOCK, errno.EAGAIN):
raise raise
else: else:
debug3('%s: err was: %s\n' % (func.__name__, e))
return None return None
@ -45,8 +75,12 @@ def _try_peername(sock):
return 'unknown' return 'unknown'
_swcount = 0
class SockWrapper: class SockWrapper:
def __init__(self, rsock, wsock, connect_to=None, peername=None): def __init__(self, rsock, wsock, connect_to=None, peername=None):
global _swcount
_swcount += 1
debug3('creating new SockWrapper (%d now exist\n)' % _swcount)
self.exc = None self.exc = None
self.rsock = rsock self.rsock = rsock
self.wsock = wsock self.wsock = wsock
@ -57,30 +91,49 @@ class SockWrapper:
self.try_connect() self.try_connect()
def __del__(self): def __del__(self):
debug1('%r: deleting\n' % self) global _swcount
_swcount -= 1
debug1('%r: deleting (%d remain)\n' % (self, _swcount))
if self.exc: if self.exc:
debug1('%r: error was: %r\n' % (self, self.exc)) debug1('%r: error was: %r\n' % (self, self.exc))
def __repr__(self): def __repr__(self):
return 'SW:%s' % (self.peername,) if self.rsock == self.wsock:
fds = '#%d' % self.rsock.fileno()
else:
fds = '#%d,%d' % (self.rsock.fileno(), self.wsock.fileno())
return 'SW%s:%s' % (fds, self.peername)
def seterr(self, e): def seterr(self, e):
if not self.exc: if not self.exc:
self.exc = e self.exc = e
self.nowrite()
self.noread()
def try_connect(self): def try_connect(self):
if self.connect_to and self.shut_write:
self.noread()
self.connect_to = None
if not self.connect_to: if not self.connect_to:
return # already connected return # already connected
self.rsock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42)
self.rsock.setblocking(False) self.rsock.setblocking(False)
debug3('%r: trying connect to %r\n' % (self, self.connect_to))
try: try:
self.rsock.connect(self.connect_to) self.rsock.connect(self.connect_to)
# connected successfully (Linux)
self.connect_to = None self.connect_to = None
except socket.error, e: except socket.error, e:
debug3('%r: connect result: %r\n' % (self, e))
if e.args[0] in [errno.EINPROGRESS, errno.EALREADY]: if e.args[0] in [errno.EINPROGRESS, errno.EALREADY]:
pass # not connected yet pass # not connected yet
elif e.args[0] in [errno.ECONNREFUSED, errno.ETIMEDOUT]: elif e.args[0] == errno.EISCONN:
# connected successfully (BSD)
self.connect_to = None
elif e.args[0] in [errno.ECONNREFUSED, errno.ETIMEDOUT,
errno.EHOSTUNREACH, errno.ENETUNREACH,
errno.EACCES, errno.EPERM]:
# a "normal" kind of error # a "normal" kind of error
self.connect_to = None
self.seterr(e) self.seterr(e)
else: else:
raise # error we've never heard of?! barf completely. raise # error we've never heard of?! barf completely.
@ -89,17 +142,20 @@ class SockWrapper:
if not self.shut_read: if not self.shut_read:
debug2('%r: done reading\n' % self) debug2('%r: done reading\n' % self)
self.shut_read = True self.shut_read = True
#self.rsock.shutdown(socket.SHUT_RD) # doesn't do anything anyway #self.rsock.shutdown(SHUT_RD) # doesn't do anything anyway
def nowrite(self): def nowrite(self):
if not self.shut_write: if not self.shut_write:
debug2('%r: done writing\n' % self) debug2('%r: done writing\n' % self)
self.shut_write = True self.shut_write = True
try: try:
self.wsock.shutdown(socket.SHUT_WR) self.wsock.shutdown(SHUT_WR)
except socket.error, e: except socket.error, e:
self.seterr(e) self.seterr(e)
def too_full(self):
return False # fullness is determined by the socket's select() state
def uwrite(self, buf): def uwrite(self, buf):
if self.connect_to: if self.connect_to:
return 0 # still connecting return 0 # still connecting
@ -109,8 +165,6 @@ class SockWrapper:
except OSError, e: except OSError, e:
# unexpected error... stream is dead # unexpected error... stream is dead
self.seterr(e) self.seterr(e)
self.nowrite()
self.noread()
return 0 return 0
def write(self, buf): def write(self, buf):
@ -143,7 +197,7 @@ class SockWrapper:
wrote = outwrap.write(self.buf[0]) wrote = outwrap.write(self.buf[0])
self.buf[0] = self.buf[0][wrote:] self.buf[0] = self.buf[0][wrote:]
while self.buf and not self.buf[0]: while self.buf and not self.buf[0]:
self.buf[0:1] = [] self.buf.pop(0)
if not self.buf and self.shut_read: if not self.buf and self.shut_read:
outwrap.nowrite() outwrap.nowrite()
@ -151,12 +205,13 @@ class SockWrapper:
class Handler: class Handler:
def __init__(self, socks = None, callback = None): def __init__(self, socks = None, callback = None):
self.ok = True self.ok = True
self.socks = set(socks or []) self.socks = socks or []
if callback: if callback:
self.callback = callback self.callback = callback
def pre_select(self, r, w, x): def pre_select(self, r, w, x):
r |= self.socks for i in self.socks:
_add(r, i)
def callback(self): def callback(self):
log('--no callback defined-- %r\n' % self) log('--no callback defined-- %r\n' % self)
@ -165,7 +220,7 @@ class Handler:
v = s.recv(4096) v = s.recv(4096)
if not v: if not v:
log('--closed-- %r\n' % self) log('--closed-- %r\n' % self)
self.socks = set() self.socks = []
self.ok = False self.ok = False
@ -177,19 +232,26 @@ class Proxy(Handler):
self.wrap2 = wrap2 self.wrap2 = wrap2
def pre_select(self, r, w, x): def pre_select(self, r, w, x):
if self.wrap1.shut_read: self.wrap2.nowrite()
if self.wrap1.shut_write: self.wrap2.noread()
if self.wrap2.shut_read: self.wrap1.nowrite()
if self.wrap2.shut_write: self.wrap1.noread()
if self.wrap1.connect_to: if self.wrap1.connect_to:
w.add(self.wrap1.rsock) _add(w, self.wrap1.rsock)
elif self.wrap1.buf: elif self.wrap1.buf:
w.add(self.wrap2.wsock) if not self.wrap2.too_full():
_add(w, self.wrap2.wsock)
elif not self.wrap1.shut_read: elif not self.wrap1.shut_read:
r.add(self.wrap1.rsock) _add(r, self.wrap1.rsock)
if self.wrap2.connect_to: if self.wrap2.connect_to:
w.add(self.wrap2.rsock) _add(w, self.wrap2.rsock)
elif self.wrap2.buf: elif self.wrap2.buf:
w.add(self.wrap1.wsock) if not self.wrap1.too_full():
_add(w, self.wrap1.wsock)
elif not self.wrap2.shut_read: elif not self.wrap2.shut_read:
r.add(self.wrap2.rsock) _add(r, self.wrap2.rsock)
def callback(self): def callback(self):
self.wrap1.try_connect() self.wrap1.try_connect()
@ -198,9 +260,17 @@ class Proxy(Handler):
self.wrap2.fill() self.wrap2.fill()
self.wrap1.copy_to(self.wrap2) self.wrap1.copy_to(self.wrap2)
self.wrap2.copy_to(self.wrap1) self.wrap2.copy_to(self.wrap1)
if self.wrap1.buf and self.wrap2.shut_write:
self.wrap1.buf = []
self.wrap1.noread()
if self.wrap2.buf and self.wrap1.shut_write:
self.wrap2.buf = []
self.wrap2.noread()
if (self.wrap1.shut_read and self.wrap2.shut_read and if (self.wrap1.shut_read and self.wrap2.shut_read and
not self.wrap1.buf and not self.wrap2.buf): not self.wrap1.buf and not self.wrap2.buf):
self.ok = False self.ok = False
self.wrap1.nowrite()
self.wrap2.nowrite()
class Mux(Handler): class Mux(Handler):
@ -208,12 +278,15 @@ class Mux(Handler):
Handler.__init__(self, [rsock, wsock]) Handler.__init__(self, [rsock, wsock])
self.rsock = rsock self.rsock = rsock
self.wsock = wsock self.wsock = wsock
self.new_channel = None self.new_channel = self.got_routes = None
self.got_host_req = self.got_host_list = None
self.channels = {} self.channels = {}
self.chani = 0 self.chani = 0
self.want = 0 self.want = 0
self.inbuf = '' self.inbuf = ''
self.outbuf = [] self.outbuf = []
self.fullness = 0
self.too_full = False
self.send(0, CMD_PING, 'chicken') self.send(0, CMD_PING, 'chicken')
def next_channel(self): def next_channel(self):
@ -224,38 +297,77 @@ class Mux(Handler):
self.chani = 1 self.chani = 1
if not self.channels.get(self.chani): if not self.channels.get(self.chani):
return self.chani return self.chani
def amount_queued(self):
total = 0
for b in self.outbuf:
total += len(b)
return total
def check_fullness(self):
if self.fullness > 32768:
if not self.too_full:
self.send(0, CMD_PING, 'rttest')
self.too_full = True
#ob = []
#for b in self.outbuf:
# (s1,s2,c) = struct.unpack('!ccH', b[:4])
# ob.append(c)
#log('outbuf: %d %r\n' % (self.amount_queued(), ob))
def send(self, channel, cmd, data): def send(self, channel, cmd, data):
data = str(data) data = str(data)
assert(len(data) <= 65535) assert(len(data) <= 65535)
p = struct.pack('!ccHHH', 'S', 'S', channel, cmd, len(data)) + data p = struct.pack('!ccHHH', 'S', 'S', channel, cmd, len(data)) + data
self.outbuf.append(p) self.outbuf.append(p)
debug2(' > channel=%d cmd=%s len=%d\n' debug2(' > channel=%d cmd=%s len=%d (fullness=%d)\n'
% (channel, cmd_to_name[cmd], len(data))) % (channel, cmd_to_name.get(cmd,hex(cmd)),
#log('Mux: send queue is %d/%d\n' len(data), self.fullness))
# % (len(self.outbuf), sum(len(b) for b in self.outbuf))) self.fullness += len(data)
def got_packet(self, channel, cmd, data): def got_packet(self, channel, cmd, data):
debug2('< channel=%d cmd=%s len=%d\n' debug2('< channel=%d cmd=%s len=%d\n'
% (channel, cmd_to_name[cmd], len(data))) % (channel, cmd_to_name.get(cmd,hex(cmd)), len(data)))
if cmd == CMD_PING: if cmd == CMD_PING:
self.send(0, CMD_PONG, data) self.send(0, CMD_PONG, data)
elif cmd == CMD_PONG: elif cmd == CMD_PONG:
debug2('received PING response\n') debug2('received PING response\n')
self.too_full = False
self.fullness = 0
elif cmd == CMD_EXIT: elif cmd == CMD_EXIT:
self.ok = False self.ok = False
elif cmd == CMD_CONNECT: elif cmd == CMD_CONNECT:
assert(not self.channels.get(channel)) assert(not self.channels.get(channel))
if self.new_channel: if self.new_channel:
self.new_channel(channel, data) self.new_channel(channel, data)
elif cmd == CMD_ROUTES:
if self.got_routes:
self.got_routes(data)
else:
raise Exception('got CMD_ROUTES without got_routes?')
elif cmd == CMD_HOST_REQ:
if self.got_host_req:
self.got_host_req(data)
else:
raise Exception('got CMD_HOST_REQ without got_host_req?')
elif cmd == CMD_HOST_LIST:
if self.got_host_list:
self.got_host_list(data)
else:
raise Exception('got CMD_HOST_LIST without got_host_list?')
else: else:
callback = self.channels[channel] callback = self.channels.get(channel)
callback(cmd, data) if not callback:
log('warning: closed channel %d got cmd=%s len=%d\n'
% (channel, cmd_to_name.get(cmd,hex(cmd)), len(data)))
else:
callback(cmd, data)
def flush(self): def flush(self):
self.wsock.setblocking(False) self.wsock.setblocking(False)
if self.outbuf and self.outbuf[0]: if self.outbuf and self.outbuf[0]:
wrote = _nb_clean(os.write, self.wsock.fileno(), self.outbuf[0]) wrote = _nb_clean(os.write, self.wsock.fileno(), self.outbuf[0])
debug2('mux wrote: %r/%d\n' % (wrote, len(self.outbuf[0])))
if wrote: if wrote:
self.outbuf[0] = self.outbuf[0][wrote:] self.outbuf[0] = self.outbuf[0][wrote:]
while self.outbuf and not self.outbuf[0]: while self.outbuf and not self.outbuf[0]:
@ -263,7 +375,10 @@ class Mux(Handler):
def fill(self): def fill(self):
self.rsock.setblocking(False) self.rsock.setblocking(False)
b = _nb_clean(os.read, self.rsock.fileno(), 32768) try:
b = _nb_clean(os.read, self.rsock.fileno(), 32768)
except OSError, e:
raise Fatal('other end: %r' % e)
#log('<<< %r\n' % b) #log('<<< %r\n' % b)
if b == '': # EOF if b == '': # EOF
self.ok = False self.ok = False
@ -290,9 +405,9 @@ class Mux(Handler):
break break
def pre_select(self, r, w, x): def pre_select(self, r, w, x):
r.add(self.rsock) _add(r, self.rsock)
if self.outbuf: if self.outbuf:
w.add(self.wsock) _add(w, self.wsock)
def callback(self): def callback(self):
(r,w,x) = select.select([self.rsock], [self.wsock], [], 0) (r,w,x) = select.select([self.rsock], [self.wsock], [], 0)
@ -308,6 +423,7 @@ class MuxWrapper(SockWrapper):
self.mux = mux self.mux = mux
self.channel = channel self.channel = channel
self.mux.channels[channel] = self.got_packet self.mux.channels[channel] = self.got_packet
self.socks = []
debug2('new channel: %d\n' % channel) debug2('new channel: %d\n' % channel)
def __del__(self): def __del__(self):
@ -320,15 +436,29 @@ class MuxWrapper(SockWrapper):
def noread(self): def noread(self):
if not self.shut_read: if not self.shut_read:
self.shut_read = True self.shut_read = True
self.mux.send(self.channel, CMD_STOP_SENDING, '')
self.maybe_close()
def nowrite(self): def nowrite(self):
if not self.shut_write: if not self.shut_write:
self.shut_write = True self.shut_write = True
self.mux.send(self.channel, CMD_EOF, '') self.mux.send(self.channel, CMD_EOF, '')
self.maybe_close()
def maybe_close(self):
if self.shut_read and self.shut_write:
# remove the mux's reference to us. The python garbage collector
# will then be able to reap our object.
self.mux.channels[self.channel] = None
def too_full(self):
return self.mux.too_full
def uwrite(self, buf): def uwrite(self, buf):
if len(buf) > 65535: if self.mux.too_full:
buf = buf[:32768] return 0 # too much already enqueued
if len(buf) > 2048:
buf = buf[:2048]
self.mux.send(self.channel, CMD_DATA, buf) self.mux.send(self.channel, CMD_DATA, buf)
return len(buf) return len(buf)
@ -339,11 +469,10 @@ class MuxWrapper(SockWrapper):
return None # no data available right now return None # no data available right now
def got_packet(self, cmd, data): def got_packet(self, cmd, data):
if cmd == CMD_CLOSE: if cmd == CMD_EOF:
self.noread() self.noread()
elif cmd == CMD_STOP_SENDING:
self.nowrite() self.nowrite()
elif cmd == CMD_EOF:
self.noread()
elif cmd == CMD_DATA: elif cmd == CMD_DATA:
self.buf.append(data) self.buf.append(data)
else: else:
@ -354,6 +483,35 @@ class MuxWrapper(SockWrapper):
def connect_dst(ip, port): def connect_dst(ip, port):
debug2('Connecting to %s:%d\n' % (ip, port)) debug2('Connecting to %s:%d\n' % (ip, port))
outsock = socket.socket() outsock = socket.socket()
outsock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42)
return SockWrapper(outsock, outsock, return SockWrapper(outsock, outsock,
connect_to = (ip,port), connect_to = (ip,port),
peername = '%s:%d' % (ip,port)) peername = '%s:%d' % (ip,port))
def runonce(handlers, mux):
r = []
w = []
x = []
to_remove = filter(lambda s: not s.ok, handlers)
for h in to_remove:
handlers.remove(h)
for s in handlers:
s.pre_select(r,w,x)
debug2('Waiting: %d r=%r w=%r x=%r (fullness=%d/%d)\n'
% (len(handlers), _fds(r), _fds(w), _fds(x),
mux.fullness, mux.too_full))
(r,w,x) = select.select(r,w,x)
debug2(' Ready: %d r=%r w=%r x=%r\n'
% (len(handlers), _fds(r), _fds(w), _fds(x)))
ready = r+w+x
did = {}
for h in handlers:
for s in h.socks:
if s in ready:
h.callback()
did[s] = 1
for s in ready:
if not s in did:
raise Fatal('socket %r was not used by any handler' % s)
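
For reference, every mux packet starts with the 8-byte header packed by
Mux.send() above ('SS' magic, then channel, command and payload length as
big-endian 16-bit fields), followed by the payload. A small sketch, assuming
ssnet.py is importable; the channel number and payload are example values:

    import struct
    from ssnet import HDR_LEN, CMD_DATA

    channel, data = 3, 'hello'
    p = struct.pack('!ccHHH', 'S', 'S', channel, CMD_DATA, len(data)) + data
    assert len(p) == HDR_LEN + len(data)            # HDR_LEN == 8

    (s1, s2, chan, cmd, datalen) = struct.unpack('!ccHHH', p[:HDR_LEN])
    assert (s1, s2) == ('S', 'S')                   # magic bytes
    assert (chan, cmd, datalen) == (3, CMD_DATA, len(data))
    assert p[HDR_LEN:HDR_LEN + datalen] == 'hello'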

ssyslog.py (new file, 16 lines)

@ -0,0 +1,16 @@
import sys, os
from compat import ssubprocess
_p = None
def start_syslog():
global _p
_p = ssubprocess.Popen(['logger',
'-p', 'daemon.notice',
'-t', 'sshuttle'], stdin=ssubprocess.PIPE)
def stderr_to_syslog():
sys.stdout.flush()
sys.stderr.flush()
os.dup2(_p.stdin.fileno(), 2)
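
ssyslog.py is tiny but worth spelling out: start_syslog() spawns a logger(1)
child tagged 'sshuttle' at priority daemon.notice, and stderr_to_syslog()
then points fd 2 at that child's stdin. A minimal usage sketch (the real
call sites are elsewhere in sshuttle and not shown in this diff):

    import sys, ssyslog

    ssyslog.start_syslog()       # spawn: logger -p daemon.notice -t sshuttle
    sys.stderr.write('still goes to the tty\n')
    ssyslog.stderr_to_syslog()   # from here on, fd 2 feeds the logger child
    sys.stderr.write('this one ends up in syslog\n')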