Make config settings
  * every setting can be set on the command line, in an env var or in the config file (see the sketch below this list)
  * make a config mode to edit the config
  * make it so you can define aliases, eg memstore: home: etc
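
A rough sketch of how that precedence could work - a command line flag beats an environment variable which beats the config file. The RCLONE_ prefix for env vars and the map standing in for a parsed config file are assumptions, not decisions:

    // Sketch of the precedence: flag if set, then env var, then config file.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // configFile stands in for settings parsed out of the config file.
    var configFile = map[string]string{"bwlimit": "0"}

    // lookup resolves one setting: flag if set, then env var, then config file.
    func lookup(name, flagValue string, flagSet bool) string {
        if flagSet {
            return flagValue
        }
        if v := os.Getenv("RCLONE_" + strings.ToUpper(name)); v != "" {
            return v
        }
        return configFile[name]
    }

    func main() {
        // Pretend -bwlimit wasn't given on the command line.
        fmt.Println("bwlimit =", lookup("bwlimit", "", false))
    }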

Names
  * rclone - domain ok, apt ok

Todo
  * Make a test suite which can run on all the given types of fs
  * Copy should use the sync code as it is more efficient at directory listing
  * Factor fses into own packages
  * FIXME: ls without an argument for buckets/containers?
  * FIXME: More -dry-run checks for object transfer
  * Might be quicker to check md5sums first - certainly for swift <-> swift, and maybe for small files
  * Ignoring the pseudo directories, eg
      if object.PseudoDirectory {
          fmt.Printf("%9s %19s %s\n", "Directory", "-", fs.Remote())
      }
  * Make Account wrapper (see the sketch after this list)
    * limit bandwidth across the pool of all individual connections
    * do timeouts by setting a limit, checking whether io has happened
      and resetting the limit if it has
    * make Account drive the progress meter
  * Make logging controllable with flags (mostly done)
  * -timeout: Make all timeouts be settable with command line parameters
  * Check the locking in swift module!
  * Windows paths? Do we need to translate / and \?
  * Make an fs.Errorf which counts errors and logs them at a different level (see the sketch after this list)
  * Add max object size to fs metadata - 5GB for swift, infinite for local, ? for s3
    * tie into -max-size flag
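
Something like this might do for the Account wrapper - wrap the io.Reader so every Read accounts the bytes (for the progress meter) and notes when io last happened (for the timeout check). The names are assumptions; pooled bandwidth limiting would layer on top and isn't shown:

    // Sketch of an Account wrapper: every Read accounts the bytes for the
    // progress meter and notes when io last happened so a watchdog can see
    // whether the transfer has stalled.
    package fs

    import (
        "io"
        "sync"
        "time"
    )

    type Account struct {
        in     io.Reader
        mu     sync.Mutex
        bytes  int64
        lastIO time.Time
    }

    func NewAccount(in io.Reader) *Account {
        return &Account{in: in, lastIO: time.Now()}
    }

    // Read passes through to the wrapped reader, counting the bytes and
    // resetting the idle timer.
    func (a *Account) Read(p []byte) (n int, err error) {
        n, err = a.in.Read(p)
        a.mu.Lock()
        a.bytes += int64(n)
        a.lastIO = time.Now()
        a.mu.Unlock()
        return n, err
    }

    // Stalled reports whether no io has happened for longer than limit -
    // the "set a limit, see whether io has happened" timeout check.
    func (a *Account) Stalled(limit time.Duration) bool {
        a.mu.Lock()
        defer a.mu.Unlock()
        return time.Since(a.lastIO) > limit
    }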
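
And a possible shape for fs.Errorf - log at a distinct level and keep a count so the end-of-run summary (or exit status) can report errors. The API here is an assumption:

    // Sketch of fs.Errorf: logs like log.Printf but also counts the errors.
    package fs

    import (
        "log"
        "sync/atomic"
    )

    var errorCount int64

    // Errorf bumps the error counter and logs the message at error level.
    func Errorf(format string, args ...interface{}) {
        atomic.AddInt64(&errorCount, 1)
        log.Printf("ERROR: "+format, args...)
    }

    // ErrorCount returns how many errors have been logged so far.
    func ErrorCount() int64 {
        return atomic.LoadInt64(&errorCount)
    }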

Drive
  * Do we need the secrets or just the code?  If just the code then we
    can make a web service which does the request on the client's
    behalf so we don't need to expose the client secrets
  * Apparently we don't need -drive-client-id or -drive-client-secret once we have a token
  * Make a cgi which we send the user to
    * It has the client secrets
    * It gets google to authenticate
    * It receives the token back
    * It displays the token for the user to paste into the program (see the sketch after this section)
    * Should be https really
  * Sometimes get: Failed to copy: Upload failed: googleapi: Error 403: Rate Limit Exceeded
    * quota is 100.0 requests/second/user
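
A rough sketch of the paste-the-code flow, assuming golang.org/x/oauth2 - the client id/secret here are placeholders; the plan above is for the CGI/web service to hold the real secrets and do the Exchange step on the client's behalf:

    // Sketch: send the user to Google, let them paste the code back in,
    // then exchange the code for a token.
    package main

    import (
        "context"
        "fmt"

        "golang.org/x/oauth2"
        "golang.org/x/oauth2/google"
    )

    func main() {
        conf := &oauth2.Config{
            ClientID:     "CLIENT_ID",     // placeholder
            ClientSecret: "CLIENT_SECRET", // placeholder
            RedirectURL:  "urn:ietf:wg:oauth:2.0:oob",
            Scopes:       []string{"https://www.googleapis.com/auth/drive"},
            Endpoint:     google.Endpoint,
        }

        // Send the user to Google to authenticate; Google hands them a code.
        fmt.Println("Go to the following link in your browser:")
        fmt.Println(conf.AuthCodeURL("state"))

        // The user pastes the code back in and we swap it for a token.
        var code string
        fmt.Scanln(&code)
        token, err := conf.Exchange(context.Background(), code)
        if err != nil {
            panic(err)
        }
        fmt.Println("Token:", token.AccessToken)
    }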

Ideas
  * could do encryption - put IV into metadata?
  * optimise copying one container to another by using remote (server-side)
    copy when the source and destination are on the same remote - use an
    optional Copier interface (see the sketch after this list)
  * Allow subpaths container:/sub/path
  * look at auth from env in s3 module - add to swift?
  * support
      * sftp
      * scp
      * Google cloud storage: https://developers.google.com/storage/
      * Google drive: https://developers.google.com/drive/
      * rsync over ssh
      * dropbox: https://github.com/nickoneill/go-dropbox (no MD5s)
  * grive seems to have its secrets in the source code which would make things easier!
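
The optional Copier interface could look something like this - a remote which can copy an object without downloading it implements Copier, and the copy code checks for it with a type assertion. The names are assumptions, and Object/Fs stand in for the real interfaces:

    // Sketch of an optional Copier interface for server-side copy.
    package fs

    type Object interface {
        Remote() string
    }

    type Fs interface {
        Put(src Object) (Object, error) // fallback: stream the data through
    }

    // Copier is implemented by remotes which can do a server-side copy.
    type Copier interface {
        Copy(src Object, remote string) (Object, error)
    }

    // CopyObject uses server-side copy when the destination supports it,
    // falling back to a normal upload otherwise.
    func CopyObject(dst Fs, src Object) (Object, error) {
        if copier, ok := dst.(Copier); ok {
            return copier.Copy(src, src.Remote())
        }
        return dst.Put(src)
    }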

     

Need to make directory objects, otherwise we can't upload an empty directory
  * Or could upload empty directories only?
  * Can't purge a local filesystem because it leaves the directories behind

s3
  * Can maybe set last modified?
    * https://forums.aws.amazon.com/message.jspa?messageID=214062
    * Otherwise can set metadata (see the sketch after this list)
  * Returns etag and last modified in bucket list
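
A sketch of the metadata option, assuming the github.com/aws/aws-sdk-go package - S3 won't let the client set Last-Modified, but we can stash the source file's mtime as user metadata on upload (all the names here are placeholders):

    // Sketch: store the mtime as x-amz-meta-mtime since Last-Modified
    // can't be set directly.
    package main

    import (
        "os"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
    )

    func upload(bucket, key, path string) error {
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        info, err := f.Stat()
        if err != nil {
            return err
        }
        svc := s3.New(session.Must(session.NewSession()))
        _, err = svc.PutObject(&s3.PutObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
            Body:   f, // *os.File satisfies io.ReadSeeker
            // Stored as the x-amz-meta-mtime header, returned on HEAD/GET.
            Metadata: map[string]*string{
                "mtime": aws.String(info.ModTime().Format(time.RFC3339Nano)),
            },
        })
        return err
    }

    func main() {
        if err := upload("my-bucket", "path/to/object", "/tmp/file"); err != nil {
            panic(err)
        }
    }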


Bugs

local & drive need to delete directories, eg this run:

2013/01/18 16:31:32 Waiting for deletions to finish
2013/01/18 16:31:32 z3: FIXME Skipping directory
2013/01/18 16:31:32 z3/x: Deleted
2013/01/18 16:31:32 Deleting path
2013/01/18 16:31:32 Rmdir failed: remove z3: directory not empty

------------------------------------------------------------

Non-verbose mode - not sure the number transferred got counted up? CHECK