diff --git a/MAINTAINERS.md b/MAINTAINERS.md
new file mode 100644
index 000000000..2ce3e289c
--- /dev/null
+++ b/MAINTAINERS.md
@@ -0,0 +1,58 @@
+# Maintainers guide for rclone #
+
+Current active maintainers of rclone are
+
+ * Nick Craig-Wood
+ * Stefan Breunig
+
+**This is a work-in-progress draft**
+
+This is a guide for how to be an rclone maintainer.
+
+## Triaging Tickets ##
+
+***FIXME*** this section needs some work!
+
+When a ticket comes in it should be triaged. This means it should be classified as a bug, an enhancement, or a request for support.
+
+Quite a lot of tickets need a bit of back and forth to determine whether they are valid.
+
+If it turns out to be a bug or an enhancement it should be tagged as such, with the appropriate other tags. Don't forget the "quickie" tag to give new contributors something easy to do to get going.
+
+When a ticket is tagged it should be added to a milestone, either the next release, the one after, Soon or Unplanned. Bugs can be added to the "Known Bugs" milestone if they aren't planned to be fixed or need to wait for something (eg the next Go release).
+
+***FIXME*** I don't think I've quite got the milestone thing sorted yet. I was wondering about classifying them into priority, or what?
+
+Tickets [with no milestone](https://github.com/ncw/rclone/issues?utf8=✓&q=is%3Aissue%20is%3Aopen%20no%3Amile) are good candidates for ones that have slipped between the gaps and need following up.
+
+## Closing Tickets ##
+
+Close tickets as soon as you can - make sure they are tagged with a release. Post a link to a beta in the ticket with the fix in, asking for feedback.
+
+## Pull requests ##
+
+Try to process pull requests promptly!
+
+Merging pull requests on GitHub itself works quite well nowadays, so you can squash and merge or rebase and merge pull requests. rclone doesn't use merge commits.
+
+Sometimes pull requests need to be left open for a while - this is especially true of contributions of new backends, which take a long time to get right.
+
+## Merges ##
+
+If you are merging a branch locally then do `git merge --ff-only branch-name` to avoid a merge commit. You'll need to rebase the branch if it doesn't merge cleanly.
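As a sketch of that flow (run in a throwaway repository so it touches nothing real; the `branch-name` branch and commit messages are hypothetical, just for illustration):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
git checkout -q -b branch-name
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "feature work"
git checkout -q "$main"
# --ff-only refuses to create a merge commit; if branch-name were not a
# descendant of the current tip, this would fail and the branch would
# need rebasing first.
git merge --ff-only branch-name
git rev-list --merges HEAD              # prints nothing: history stays linear
```

If the `--ff-only` merge fails, rebase the branch onto the current tip and try again.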
+
+## Release cycle ##
+
+Rclone aims for a 6-8 week release cycle. Sometimes release cycles take longer if there is something big to merge that didn't stabilize properly or for personal reasons.
+
+High impact regressions should be fixed before the next release.
+
+Near the start of the release cycle the dependencies should be updated with `make update` to give time for bugs to surface.
+
+Towards the end of the release cycle try not to merge anything too big, to let things settle down.
+
+Follow the instructions in RELEASE.md for making the release. Note that the testing part is the most time consuming, often needing several rounds of test and fix depending on exactly how many new features rclone has gained.
+
+## TODO ##
+
+I should probably make a mailing list for maintainers, or at least an rclone-dev list, and a dev@rclone.org address to register with cloud providers.
diff --git a/MANUAL.html b/MANUAL.html
index f07d0007b..fd1b9026e 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@
Rclone
@@ -30,6 +30,7 @@
Yandex Disk
SFTP
FTP
+HTTP
The local filesystem
Features
@@ -135,12 +136,12 @@ sudo zypper install snapd
OpenWrt
Enable the snap-openwrt feed.
-First you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config
entry for how to find the config file and choose its location.)
+First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config
entry for how to find the config file and choose its location.)
The easiest way to make the config is to run rclone with the config option:
rclone config
See the following for detailed instructions for
Usage
@@ -387,7 +389,7 @@ if src is directory
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
rclone copyto source:path dest:path
rclone cryptcheck
-Cryptcheck checks the integritity of a crypted remote.
+Cryptcheck checks the integrity of a crypted remote.
Synopsis
rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
@@ -446,11 +448,13 @@ if src is directory
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
Synopsis
-rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's cloud storage systems as a file system with FUSE.
+rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
+Or on Windows like this where X: is an unused drive letter
+rclone mount remote:path/to/files X:
When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.
The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually with
# Linux
@@ -460,7 +464,7 @@ umount /path/to/local/mount
Limitations
This can only write files sequentially, and it can only seek when reading. This means that many applications won't work with their files on an rclone mount.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) won't work from the root - you will need to specify a bucket, or a path within the bucket. So swift:
won't work whereas swift:bucket
will as will swift:bucket/path
. None of these support the concept of directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
-Only supported on Linux, FreeBSD and OS X at the moment.
+Only supported on Linux, FreeBSD, OS X and Windows at the moment.
rclone mount vs rclone sync/copy
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.
Filters
@@ -486,15 +490,17 @@ umount /path/to/local/mount
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
+ --umask int Override the permission bits set by the filesystem.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
rclone moveto
Move file or directory from source to dest.
@@ -535,13 +541,13 @@ if src is directory
Obscure password for use in the rclone.conf
rclone obscure password
rclone rmdirs
-Remove any empty directoryies under the path.
+Remove empty directories under the path.
Synopsis
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in it.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
rclone rmdirs remote:path
Copying single files
-rclone normally syncs or copies directories. However if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn't.
+rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn't.
For example, suppose you have a remote with a file in called test.jpg
, then you could copy just that file like this
rclone copy remote:test.jpg /tmp/download
The file test.jpg
will be placed inside /tmp/download
.
@@ -549,7 +555,7 @@ if src is directory
rclone copy --no-traverse --files-from /tmp/files remote: /tmp/download
Where /tmp/files
contains the single line
test.jpg
-It is recommended to use copy
when copying single files not sync
. They have pretty much the same effect but copy
will use a lot less memory.
+It is recommended to use copy
when copying individual files, not sync
. They have pretty much the same effect but copy
will use a lot less memory.
Quoting and the shell
When you are typing commands to your computer you are using something called the command line shell. This interprets various characters in an OS specific way.
Here are some gotchas which may help users unfamiliar with the shell rules
@@ -564,6 +570,12 @@ if src is directory
rclone copy "E:\folder name\folder name\folder name" remote:backup
If you are using the root directory on its own then don't quote it (see #464 for why), eg
rclone copy E:\ remote:backup
+Copying files or directories with :
in the names
+rclone uses :
to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a :
up to the first /
so if you need to act on a file or directory like this then use the full path starting with a /
, or use ./
as a current directory prefix.
+So to sync a directory called sync:me
to a remote called remote:
use
+rclone sync ./sync:me remote:path
+or
+rclone sync /full/path/to/sync:me remote:path
Server Side Copy
Most remotes (but not all - see the overview) support server side copy.
This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.
@@ -571,7 +583,7 @@ if src is directory
rclone copy s3:oldbucket s3:newbucket
Will copy the contents of oldbucket
to newbucket
without downloading and re-uploading.
Remotes which don't support server side copy will download and re-upload in this case.
-Server side copies are used with sync
and copy
and will be identified in the log when using the -v
flag. The may also be used with move
if the remote doesn't support server side move.
+Server side copies are used with sync
and copy
and will be identified in the log when using the -v
flag. The move
command may also use them if the remote doesn't support server side move directly. This is done by issuing a server side copy then a delete, which is much quicker than a download and re-upload.
Server side copies will only be attempted if the remote names are the same.
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
@@ -579,7 +591,7 @@ rclone sync /path/to/files remote:current-backup
Options
Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-Options which use SIZE use kByte by default. However a suffix of b
for bytes, k
for kBytes, M
for MBytes and G
for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
+Options which use SIZE use kByte by default. However, a suffix of b
for bytes, k
for kBytes, M
for MBytes and G
for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
--backup-dir=DIR
When using sync
, copy
or move
any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix
is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
@@ -591,18 +603,20 @@ rclone sync /path/to/files remote:current-backup
--bwlimit=BANDWIDTH_SPEC
This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.
Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0
which means to not limit bandwidth.
-For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
-It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as "HH:MM,BANDWIDTH HH:MM,BANDWITH...".
+For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
+It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...".
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
In this example, the transfer bandwidth will be set to 512kBytes/sec at 8am. At noon, it will raise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.
-Note that the units are Bytes/s not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
+Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
+On Unix systems (Linux, MacOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2
signal to rclone. This allows you to remove the limit on a long running rclone transfer and to restore it to the value specified with --bwlimit
quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:
+kill -SIGUSR2 $(pidof rclone)
--buffer-size=SIZE
Use this sized buffer to speed up file transfers. Each --transfer
will use this much memory for buffering.
-Set to 0 to disable the buffering for the minimum memory use.
+Set to 0 to disable the buffering for the minimum memory usage.
--checkers=N
-The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg s3, swift, dropbox) this can take a significant amount of time so they are run in parallel.
+The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.
The default is to run 8 checkers in parallel.
-c, --checksum
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.
@@ -631,22 +645,22 @@ rclone sync /path/to/files remote:current-backup
--ignore-size
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum
is set then it only checks the checksum.
It will also cause rclone to skip verifying the sizes are the same after transfer.
-This can be useful for transferring files to and from onedrive which occasionally misreports the size of image files (see #399 for more info).
+This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).
-I, --ignore-times
Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.
Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum
).
--log-file=FILE
-Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
+Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
--log-level LEVEL
This sets the log level for rclone. The default log level is INFO
.
DEBUG
is equivalent to -vv
. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
INFO
is equivalent to -v
. It outputs information about each transfer and prints stats once a minute by default.
NOTICE
is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
-ERROR
is equivalent to -q
. It only output error messages.
+ERROR
is equivalent to -q
. It only outputs error messages.
--low-level-retries NUMBER
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v
flag.
-This shouldn't need to be changed from the default in normal operations, however if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries
flag) quicker.
+This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries
flag) quicker.
Disable low level retries with --low-level-retries 1
.
--max-depth=N
This modifies the recursion depth for all the commands except purge.
@@ -668,18 +682,21 @@ rclone sync /path/to/files remote:current-backup
Normally rclone outputs stats and a completion message. If you set this flag it will make as little output as possible.
--retries int
Retry the entire sync if it fails this many times (default 3).
-Some remotes can be unreliable and a few retries helps pick up the files which didn't get transferred because of errors.
+Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.
Disable retries with --retries 1
.
--size-only
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.
-This can be useful transferring files from dropbox which have been modified by the desktop sync client which doesn't set checksums of modification times in the same way as rclone.
+This can be useful when transferring files from Dropbox which have been modified by the desktop sync client, which doesn't set checksums or modification times in the same way as rclone.
--stats=TIME
Commands which transfer data (sync
, copy
, copyto
, move
, moveto
) will print data transfer stats at regular intervals to show their progress.
This sets the interval.
The default is 1m
. Use 0 to disable.
-If you set the stats interval then all command can show stats. This can be useful when running other commands, check
or mount
for example.
+If you set the stats interval then all commands can show stats. This can be useful when running other commands, check
or mount
for example.
+Stats are logged at INFO
level by default which means they won't show at default log level NOTICE
. Use --stats-log-level NOTICE
or -v
to make them show. See the Logging section for more info on log levels.
+--stats-log-level string
+Log level to show --stats
output at. This can be DEBUG
, INFO
, NOTICE
, or ERROR
. The default is INFO
. This means at the default level of logging which is NOTICE
the stats won't show - if you want them to then use --stats-log-level NOTICE
. See the Logging section for more info on log levels.
--stats-unit=bits|bytes
-By default data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes/second.
This option allows the data rate to be printed in bits/second.
Data transfer volume will still be reported in bytes.
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
@@ -689,11 +706,22 @@ rclone sync /path/to/files remote:current-backup
See --backup-dir
for more info.
--syslog
On capable OSes (not Windows or Plan9) send all log output to syslog.
-This can be useful for running rclone in script or rclone mount
.
+This can be useful for running rclone in a script or rclone mount
.
--syslog-facility string
If using --syslog
this sets the syslog facility (eg KERN
, USER
). See man syslog
for a list of possible facilities. The default facility is DAEMON
.
+--tpslimit float
+Limit HTTP transactions per second to this. The default is 0, which means unlimited transactions per second.
+For example, to limit rclone to 10 HTTP transactions per second use --tpslimit 10
, or to 1 transaction every 2 seconds use --tpslimit 0.5
.
+Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).
+This can be very useful for rclone mount
to control the behaviour of applications using it.
+See also --tpslimit-burst
.
+--tpslimit-burst int
+Max burst of transactions for --tpslimit
. (default 1)
+Normally --tpslimit
will do exactly the number of transactions per second specified. However, if you supply --tpslimit-burst
then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.
+For example, if you provide --tpslimit-burst 10
then if rclone has been idle for more than 10/--tpslimit seconds
then it can do 10 transactions very quickly before they are limited again.
+This may be used to increase performance of --tpslimit
without changing the long term average number of transactions per second.
--track-renames
-By default rclone doesn't not keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
+By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync
, copy
, and move
operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename will be considered.
If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console.
@@ -706,7 +734,7 @@ rclone sync /path/to/files remote:current-backup
Specifying --delete-after
(the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors
.
--fast-list
When doing anything which involves a directory listing (eg sync
, copy
, ls
- in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.
-However some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg s3, b2, gcs, swift, hubic).
+However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
If you use the --fast-list
flag then rclone will use this method for listing directories. This will have the following consequences for the listing:
- It will use fewer transactions (important if you pay for them)
@@ -763,7 +791,7 @@ c) Change Password
u) Unencrypt configuration
q) Quit to main menu
c/u/q>
-Your configuration is now encrypted, and every time you start rclone you will now be asked for the password. In the same menu you can change the password or completely remove encryption from your configuration.
+Your configuration is now encrypted, and every time you start rclone you will now be asked for the password. In the same menu, you can change the password or completely remove encryption from your configuration.
There is no way to recover the configuration if you lose your password.
rclone uses nacl secretbox which in turn uses XSalsa20 and Poly1305 to encrypt and authenticate your configuration with secret-key cryptography. The password is SHA-256 hashed, which produces the key for secretbox. The hashed password is not stored.
While this provides very good security, we do not recommend storing your encrypted rclone configuration in public if it contains sensitive information, maybe except if you use a very strong password.
@@ -797,7 +825,7 @@ export RCLONE_CONFIG_PASS
--no-traverse
The --no-traverse
flag controls whether the destination file system is traversed when using the copy
or move
commands. --no-traverse
is not compatible with sync
and will be ignored if you supply it with sync
.
If you are only copying a small number of files and/or have a large number of files on the destination then --no-traverse
will stop rclone listing the destination and save time.
-However if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse
.
+However, if you are copying a large number of files, especially if you are doing a copy where lots of the files haven't changed and won't need copying then you shouldn't use --no-traverse
.
It can also be used to reduce the memory usage of rclone when copying - rclone --no-traverse copy src dst
won't load either the source or destination listings into memory so will use the minimum amount of memory.
Filtering
For the filtering options
@@ -819,8 +847,8 @@ export RCLONE_CONFIG_PASS
See the filtering section.
Logging
rclone has 4 levels of logging, Error
, Notice
, Info
and Debug
.
-By default rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls
).
-By default rclone will produce Error
and Notice
level messages.
+By default, rclone logs to standard error. This means you can redirect standard error and still see the normal output of rclone commands (eg rclone ls
).
+By default, rclone will produce Error
and Notice
level messages.
If you use the -q
flag, rclone will only produce Error
messages.
If you use the -v
flag, rclone will produce Error
, Notice
and Info
messages.
If you use the -vv
flag, rclone will produce Error
, Notice
, Info
and Debug
messages.
@@ -829,21 +857,21 @@ export RCLONE_CONFIG_PASS
If you use the --syslog
flag then rclone will log to syslog and the --syslog-facility
flag controls which facility it uses.
Rclone prefixes all log messages with their level in capitals, eg INFO which makes it easy to grep the log file for different kinds of information.
Exit Code
-If any errors occurred during the command, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.
-During the startup phase rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.
-When rclone is running it will accumulate errors as it goes along, and only exit with an non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q
) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
+If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.
+During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.
+When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q
) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
Environment Variables
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
Options
Every option in rclone can have its default set by environment variable.
-To find the name of the environment variable, first take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
-For example to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
+To find the name of the environment variable, first, take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
+For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
Or to always use the trash in drive --drive-use-trash
, set RCLONE_DRIVE_USE_TRASH=true
.
The same parser is used for the options and the environment variables so they take exactly the same form.
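The renaming rule above can be sketched mechanically in shell (the option name here is just an illustration):

```shell
# Sketch of the option-name -> environment-variable rule:
# strip the leading "--", turn "-" into "_", uppercase, prepend RCLONE_.
opt="--drive-use-trash"
var="RCLONE_$(printf '%s' "${opt#--}" | tr 'a-z-' 'A-Z_')"
echo "$var"   # RCLONE_DRIVE_USE_TRASH
```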
Config file
You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through rclone config
by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for --config
in rclone help
).
To find the name of the environment variable you need to set, take RCLONE_
+ name of remote + _
+ name of config file option and make it all uppercase.
-For example to configure an S3 remote named mys3:
without a config file (using unix ways of setting environment variables):
+For example, to configure an S3 remote named mys3:
without a config file (using unix ways of setting environment variables):
$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
$ export RCLONE_CONFIG_MYS3_SECRET_ACCESS_KEY=XXX
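The variable names in the example above follow the naming rule mechanically; a small sketch (the remote and option names are just those from the example):

```shell
# Hypothetical helper: build the env var name for remote "mys3", option "type".
remote="mys3"; option="type"
var="RCLONE_CONFIG_$(printf '%s_%s' "$remote" "$option" | tr 'a-z' 'A-Z')"
echo "$var"   # RCLONE_CONFIG_MYS3_TYPE
```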
@@ -1232,6 +1260,14 @@ user2/stuff
- |
+HTTP |
+- |
+No |
+Yes |
+No |
+R |
+
+
The local filesystem |
All |
Yes |
@@ -1391,6 +1427,15 @@ The hashes are used when transferring data as an integrity check and can be spec
No |
+HTTP |
+No |
+No |
+No |
+No |
+No |
+No |
+
+
The local filesystem |
Yes |
No |
@@ -1541,18 +1586,14 @@ y/e/d> y
By default rclone will delete files permanently when requested. If sending them to the trash is required instead then use the --drive-use-trash
flag.
Specific options
Here are the command line options specific to this cloud storage system.
+--drive-auth-owner-only
+Only consider files owned by the authenticated user.
--drive-chunk-size=SIZE
Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
Making this larger will improve performance, but note that each chunk is buffered in memory, one per transfer.
Reducing this will reduce memory usage but decrease performance.
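The memory note above can be made concrete with a rough back-of-envelope calculation; rclone's default of 4 simultaneous transfers is assumed here:

```shell
# With the default 8 MB chunks and 4 transfers, up to roughly
# 8 * 4 = 32 MB of upload data may be buffered in memory at once.
chunk_mb=8
transfers=4
echo "$((chunk_mb * transfers)) MB buffered (approx)"
```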
---drive-full-list
-No longer does anything - kept for backwards compatibility.
---drive-upload-cutoff=SIZE
-File size cutoff for switching to chunked upload. Default is 8 MB.
---drive-use-trash
-Send files to the trash instead of deleting permanently. Defaults to off, namely deleting files permanently.
---drive-auth-owner-only
-Only consider files owned by the authenticated user. Requires that --drive-full-list=true (default).
Google documents can only be exported from Google drive. When rclone downloads a Google doc it chooses a format to download depending upon this setting.
By default the formats are docx,xlsx,pptx,svg
which are a sensible default for an editable document.
@@ -1676,10 +1717,31 @@ y/e/d> y
+--drive-list-chunk int
+Size of listing chunk 100-1000. 0 to disable. (default 1000)
+--drive-shared-with-me
+Only show files that are shared with me
--drive-skip-gdocs
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+--drive-trashed-only
+Only show files that are in the trash. This will show trashed files in their original directory structure.
+--drive-upload-cutoff=SIZE
+File size cutoff for switching to chunked upload. Default is 8 MB.
+--drive-use-trash
+Send files to the trash instead of deleting permanently. Defaults to off, namely deleting files permanently.
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.
+Duplicated files
+Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive, unlike all the other remotes, can have duplicated files.
+Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
+Use rclone dedupe
to fix duplicated files.
+Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.
+Rclone appears to be re-copying files it shouldn't
+There are two possible reasons for rclone to recopy files which haven't changed to Google Drive.
+The first is the duplicated file issue above - run rclone dedupe
and check your logs for duplicate object or directory messages.
+The second is that sometimes Google reports different sizes for the Google Docs exports which will cause rclone to re-download Google Docs for no apparent reason. --ignore-size
is a not very satisfactory work-around for this if it is causing you a lot of problems.
+Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"
+This is the same problem as above. Google reports the google doc is one size, but rclone downloads a different size. Work-around with the --ignore-size
flag or wait for rclone to retry the download which it will.
Making your own client_id
When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit, set by Google, on the number of queries per second that each client_id can do. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
However you might find you get better performance making your own client_id if you are a heavy user. Or you may not depending on exactly how Google have been raising rclone's rate limit.
@@ -2013,37 +2075,47 @@ region = other-v2-signature
Minio
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
-To use it, install Minio following the instructions from the web site.
+To use it, install Minio following the instructions here.
When it configures itself Minio will print something like this
-AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
+Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
+AccessKey: USWUXHGYZQYFYFFIT3RE
+SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+Region: us-east-1
+SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
-Minio Object Storage:
- http://127.0.0.1:9000
- http://10.0.0.3:9000
+Browser Access:
+ http://192.168.1.106:9000 http://172.23.0.1:9000
-Minio Browser:
- http://127.0.0.1:9000
- http://10.0.0.3:9000
+Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
+ $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+
+Object API (Amazon S3 compatible):
+ Go: https://docs.minio.io/docs/golang-client-quickstart-guide
+ Java: https://docs.minio.io/docs/java-client-quickstart-guide
+ Python: https://docs.minio.io/docs/python-client-quickstart-guide
+ JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
+ .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
+
+Drive Capacity: 26 GiB Free, 165 GiB Total
These details need to go into rclone config
like this. Note that it is important to put the region in as stated above.
env_auth> 1
-access_key_id> WLGDGYAQYIGI833EV05A
-secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+access_key_id> USWUXHGYZQYFYFFIT3RE
+secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
-endpoint> http://10.0.0.3:9000
+endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
Which makes the config file look like this
[minio]
env_auth = false
-access_key_id = WLGDGYAQYIGI833EV05A
-secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+access_key_id = USWUXHGYZQYFYFFIT3RE
+secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
-endpoint = http://10.0.0.3:9000
+endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
-Minio doesn't support all the features of S3 yet. In particular it doesn't support MD5 checksums (ETags) or metadata. This means rclone can't check MD5SUMs or store the modified date. However you can work around this with the --size-only
flag of rclone.
So once set up, for example to copy files into a bucket
-rclone --size-only copy /path/to/files minio:bucket
+rclone copy /path/to/files minio:bucket
Swift
Swift refers to Openstack Object Storage. Commercial implementations of that being:
@@ -2336,6 +2408,52 @@ Choose a number from below, or type in your own value
5 / Project team owners get OWNER access, and all Users get WRITER access.
\ "publicReadWrite"
bucket_acl> 2
+Location for the newly created buckets.
+Choose a number from below, or type in your own value
+ 1 / Empty for default location (US).
+ \ ""
+ 2 / Multi-regional location for Asia.
+ \ "asia"
+ 3 / Multi-regional location for Europe.
+ \ "eu"
+ 4 / Multi-regional location for United States.
+ \ "us"
+ 5 / Taiwan.
+ \ "asia-east1"
+ 6 / Tokyo.
+ \ "asia-northeast1"
+ 7 / Singapore.
+ \ "asia-southeast1"
+ 8 / Sydney.
+ \ "australia-southeast1"
+ 9 / Belgium.
+ \ "europe-west1"
+10 / London.
+ \ "europe-west2"
+11 / Iowa.
+ \ "us-central1"
+12 / South Carolina.
+ \ "us-east1"
+13 / Northern Virginia.
+ \ "us-east4"
+14 / Oregon.
+ \ "us-west1"
+location> 12
+The storage class to use when storing objects in Google Cloud Storage.
+Choose a number from below, or type in your own value
+ 1 / Default
+ \ ""
+ 2 / Multi-regional storage class
+ \ "MULTI_REGIONAL"
+ 3 / Regional storage class
+ \ "REGIONAL"
+ 4 / Nearline storage class
+ \ "NEARLINE"
+ 5 / Coldline storage class
+ \ "COLDLINE"
+ 6 / Durable reduced availability storage class
+ \ "DURABLE_REDUCED_AVAILABILITY"
+storage_class> 5
Remote config
Use auto config?
* Say Y if not sure
@@ -2796,7 +2914,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
Specific options
Here are the command line options specific to this cloud storage system.
--b2-chunk-size value=SIZE
-When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of --transfers
chunks in progress at once. 100,000,000 Bytes is the minimim size (default 96M).
+When uploading large files chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of --transfers
chunks in progress at once. 5,000,000 Bytes is the minimum size (default 96M).
--b2-upload-cutoff=SIZE
Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB). Files above this size will be uploaded in chunks of --b2-chunk-size
.
This value should be set no larger than 4.657GiB (== 5GB) as this is the largest file size that can be uploaded.
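The mixed units in the default cutoff (190.735 MiB == 200 MB) are just the same byte count divided by 1024² versus 1000²; a quick check:

```shell
# 200 MB (decimal) expressed in MiB (binary):
# 200,000,000 / 1,048,576 = 190.735...
cutoff_bytes=200000000
echo "$((cutoff_bytes / 1048576)) MiB (approx)"
```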
@@ -2910,14 +3028,12 @@ y/e/d> y
Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
Here is an example of making a SFTP configuration. First run
rclone config
-This will guide you through an interactive setup process. You will need your account number (a short hex number) and key (a long hex number) which you can get from the SFTP control panel.
+This will guide you through an interactive setup process.
No remotes found - make a new one
n) New remote
-r) Rename remote
-c) Copy remote
s) Set configuration password
q) Quit config
-n/r/c/s/q> n
+n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
@@ -2949,6 +3065,8 @@ Choose a number from below, or type in your own value
\ "sftp"
14 / Yandex Disk
\ "yandex"
+15 / http Connection
+ \ "http"
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
@@ -2956,21 +3074,24 @@ Choose a number from below, or type in your own value
\ "example.com"
host> example.com
SSH username, leave blank for current username, ncw
-user>
+user> sftpuser
SSH port, leave blank to use default (22)
port>
-SSH password, leave blank to use ssh-agent
+SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
Remote config
--------------------
[remote]
host = example.com
-user =
+user = sftpuser
port =
pass =
+key_file =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -2985,6 +3106,21 @@ y/e/d> y
rclone ls remote:path/to/directory
Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
rclone sync /home/local/directory remote:directory
+SSH Authentication
+The SFTP remote supports 3 authentication methods
+
+- Password
+- Key file
+- ssh-agent
+
+Key files should be unencrypted PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa
.
+If you don't specify pass
or key_file
then it will attempt to contact an ssh-agent.
+ssh-agent on macOS
+Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg
+eval `ssh-agent -s` && ssh-add -A
+And then at the end of the session
+eval `ssh-agent -k`
+These commands can be used in scripts of course.
Modified time
Modified times are stored on the server to 1 second precision.
Modified times are used in syncing and are fully supported.
@@ -2994,6 +3130,189 @@ y/e/d> y
SFTP isn't supported under plan9 until this issue is fixed.
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
+FTP
+FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.
+Here is an example of making an FTP configuration. First run
+rclone config
+This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use anonymous
as username and your email address as the password.
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+10 / Local Disk
+ \ "local"
+11 / Microsoft OneDrive
+ \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+13 / SSH/SFTP Connection
+ \ "sftp"
+14 / Yandex Disk
+ \ "yandex"
+Storage> ftp
+FTP host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+ \ "ftp.example.com"
+host> ftp.example.com
+FTP username, leave blank for current username, ncw
+user>
+FTP port, leave blank to use default (21)
+port>
+FTP password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+host = ftp.example.com
+user =
+port =
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+This remote is called remote
and can now be used like this
+See all directories in the home directory
+rclone lsd remote:
+Make a new directory
+rclone mkdir remote:path/to/directory
+List the contents of a directory
+rclone ls remote:path/to/directory
+Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
+rclone sync /home/local/directory remote:directory
+Modified time
+FTP does not support modified times. Any times you see on the server will be the time of upload.
+Checksums
+FTP does not support any checksums.
+Limitations
+Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
+Note that --timeout
isn't supported (but --contimeout
is).
+FTP could support server side move but doesn't yet.
+HTTP
+The HTTP remote is a read only remote for reading files off a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)
+Paths are specified as remote:
or remote:path/to/dir
.
+Here is an example of how to make a remote called remote
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+10 / Local Disk
+ \ "local"
+11 / Microsoft OneDrive
+ \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+13 / SSH/SFTP Connection
+ \ "sftp"
+14 / Yandex Disk
+ \ "yandex"
+15 / http Connection
+ \ "http"
+Storage> http
+URL of http host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "https://example.com"
+url> https://beta.rclone.org
+Remote config
+--------------------
+[remote]
+url = https://beta.rclone.org
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote http
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+This remote is called remote
and can now be used like this
+See all the top level directories
+rclone lsd remote:
+List the contents of a directory
+rclone ls remote:directory
+Sync the remote directory
to /home/local/directory
, deleting any excess files.
+rclone sync remote:directory /home/local/directory
+Read only
+This remote is read only - you can't upload files to an HTTP server.
+Modified time
+Most HTTP servers store time accurate to 1 second.
+Checksum
+No checksums are stored.
+Usage without a config file
+Note that since only two environment variables need to be set, it is easy to use without a config file like this.
+RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
+Or if you prefer
+export RCLONE_CONFIG_ZZ_TYPE=http
+export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+rclone lsd zz:
Crypt
The crypt
remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
@@ -3234,102 +3553,12 @@ $ rclone -q ls secret:
Key derivation
Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user-supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
-FTP
-FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.
-Here is an example of making an FTP configuration. First run
-rclone config
-This will guide you through an interactive setup process. An FTP remote only needs a host together with and a username and a password. With anonymous FTP server, you will need to use anonymous
as username and your email address as the password.
-No remotes found - make a new one
-n) New remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-n/r/c/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
-10 / Local Disk
- \ "local"
-11 / Microsoft OneDrive
- \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-13 / SSH/SFTP Connection
- \ "sftp"
-14 / Yandex Disk
- \ "yandex"
-Storage> ftp
-FTP host to connect to
-Choose a number from below, or type in your own value
- 1 / Connect to ftp.example.com
- \ "ftp.example.com"
-host> ftp.example.com
-FTP username, leave blank for current username, ncw
-user>
-FTP port, leave blank to use default (21)
-port>
-FTP password
-y) Yes type in my own password
-g) Generate random password
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Remote config
---------------------
-[remote]
-host = ftp.example.com
-user =
-port =
-pass = *** ENCRYPTED ***
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-This remote is called remote
and can now be used like this
-See all directories in the home directory
-rclone lsd remote:
-Make a new directory
-rclone mkdir remote:path/to/directory
-List the contents of a directory
-rclone ls remote:path/to/directory
-Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
-rclone sync /home/local/directory remote:directory
-Modified time
-FTP does not support modified times. Any times you see on the server will be time of upload.
-Checksums
-FTP does not support any checksums.
-Limitations
-Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
-Note that --timeout
isn't supported (but --contimeout
is).
-FTP could support server side move but doesn't yet.
Local Filesystem
Local paths are specified as normal filesystem paths, eg /path/to/wherever
, so
rclone sync /home/source /tmp/destination
Will sync /home/source
to /tmp/destination
These can be configured into the config file for consistency's sake, but it is probably easier not to.
-Modified time
+Modified time
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1 ns on Linux, 10 ns on Windows and 1 second on OS X.
Filenames
Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
@@ -3401,6 +3630,123 @@ nounc = true
NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will not appear as an valid flag.
Changelog
+- v1.37 - 2017-07-22
+
+- New backends
+- FTP - thanks to Antonio Messina
+- HTTP - thanks to Vasiliy Tolstov
+- New commands
+- rclone ncdu - for exploring a remote with a text based user interface.
+- rclone lsjson - for listing with a machine readable output
+- rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
+- New Features
+- Implement --fast-list flag
+
+- This allows remotes to list recursively if they can
+- This uses less transactions (important if you pay for them)
+- This may or may not be quicker
+- This will use more memory as it has to hold the listing in memory
+- --old-sync-method deprecated - the remaining uses are covered by --fast-list
+- This involved a major re-write of all the listing code
+
+- Add --tpslimit and --tpslimit-burst to limit transactions per second
+
+- this is useful in conjunction with
rclone mount
to limit external apps
+
+- Add --stats-log-level so can see --stats without -v
+- Print password prompts to stderr - Hraban Luyat
+- Warn about duplicate files when syncing
+- Oauth improvements
+
+- allow auth_url and token_url to be set in the config file
+- Print redirection URI if using own credentials.
+
+- Don't Mkdir at the start of sync to save transactions
+- Compile
+- Update build to go1.8.3
+- Require go1.6 for building rclone
+- Compile 386 builds with "GO386=387" for maximum compatibility
+- Bug Fixes
+- Fix menu selection when no remotes
+- Config saving reworked to not kill the file if disk gets full
+- Don't delete remote if name does not change while renaming
+- moveto, copyto: report transfers and checks as per move and copy
+- Local
+- Add --local-no-unicode-normalization flag - Bob Potter
+- Mount
+- Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help
+- Compare checksums on upload/download via FUSE
+- Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino
+- On read only open of file, make open pending until first read
+- Make --read-only reject modify operations
+- Implement ModTime via FUSE for remotes that support it
+- Allow modTime to be changed even before all writers are closed
+- Fix panic on renames
+- Fix hang on errored upload
+- Crypt
+- Report the name:root as specified by the user
+- Add an "obfuscate" option for filename encryption - Stephen Harris
+- Amazon Drive
+- Fix initialization order for token renewer
+- Remove revoked credentials, allow oauth proxy config and update docs
+- B2
+- Reduce minimum chunk size to 5MB
+- Drive
+- Add team drive support
+- Reduce bandwidth by adding fields for partial responses - Martin Kristensen
+- Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai
+- Add --drive-trashed-only to read only the files in the trash
+- Remove obsolete --drive-full-list
+- Add missing seek to start on retries of chunked uploads
+- Fix stats accounting for upload
+- Convert / in names to a unicode equivalent (／)
+- Poll for Google Drive changes when mounted
+- OneDrive
+- Fix the uploading of files with spaces
+- Fix initialization order for token renewer
+- Display speeds accurately when uploading - Yoni Jah
+- Swap to using http://localhost:53682/ as redirect URL - Michael Ledin
+- Retry on token expired error, reset upload body on retry - Yoni Jah
+- Google Cloud Storage
+- Add ability to specify location and storage class via config and command line - thanks gdm85
+- Create container if necessary on server side copy
+- Increase directory listing chunk to 1000 to increase performance
+- Obtain a refresh token for GCS - Steven Lu
+- Yandex
+- Fix the name reported in log messages (was empty)
+- Correct error return for listing empty directory
+- Dropbox
+- Rewritten to use the v2 API
+
+- Now supports ModTime
+- Can only set by uploading the file again
+- If you uploaded with an old rclone, rclone may upload everything again
+- Use
--size-only
or --checksum
to avoid this
+- Now supports the Dropbox content hashing scheme
+- Now supports low level retries
+
+- S3
+- Work around eventual consistency in bucket creation
+- Create container if necessary on server side copy
+- Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed
+- Swift, Hubic
+- Fix zero length directory markers showing in the subdirectory listing
+
+- this caused lots of duplicate transfers
+
+- Fix paged directory listings
+
+- this caused duplicate directory errors
+
+- Create container if necessary on server side copy
+- Increase directory listing chunk to 1000 to increase performance
+- Make sensible error if the user forgets the container
+- SFTP
+- Add support for using ssh key files
+- Fix under Windows
+- Fix ssh agent on Windows
+- Adapt to latest version of library - Igor Kharin
+
- v1.36 - 2017-03-18
Forum
diff --git a/MANUAL.md b/MANUAL.md
index 6e168147f..83eac85df 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Jun 15, 2017
+% Jul 22, 2017
Rclone
======
@@ -21,6 +21,7 @@ Rclone is a command line program to sync files and directories to and from
* Yandex Disk
* SFTP
* FTP
+ * HTTP
* The local filesystem
Features
@@ -190,7 +191,7 @@ Enable the snap-openwrt feed.
Configure
---------
-First you'll need to configure rclone. As the object storage systems
+First, you'll need to configure rclone. As the object storage systems
have quite complicated authentication these are kept in a config file.
(See the `--config` entry for how to find the config file and choose
its location.)
@@ -202,7 +203,7 @@ option:
See the following for detailed instructions for
- * [Google drive](https://rclone.org/drive/)
+ * [Google Drive](https://rclone.org/drive/)
* [Amazon S3](https://rclone.org/s3/)
* [Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/)
* [Dropbox](https://rclone.org/dropbox/)
@@ -215,6 +216,7 @@ See the following for detailed instructions for
* [Yandex Disk](https://rclone.org/yandex/)
* [SFTP](https://rclone.org/sftp/)
* [FTP](https://rclone.org/ftp/)
+ * [HTTP](https://rclone.org/http/)
* [Crypt](https://rclone.org/crypt/) - to encrypt other remotes
Usage
@@ -780,7 +782,7 @@ rclone copyto source:path dest:path
## rclone cryptcheck
-Cryptcheck checks the integritity of a crypted remote.
+Cryptcheck checks the integrity of a crypted remote.
### Synopsis
@@ -959,8 +961,9 @@ Mount the remote as a mountpoint. **EXPERIMENTAL**
-rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
-cloud storage systems as a file system with FUSE.
+rclone mount allows Linux, FreeBSD, macOS and Windows to
+mount any of Rclone's cloud storage systems as a file system with
+FUSE.
This is **EXPERIMENTAL** - use with care.
@@ -970,6 +973,10 @@ Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
+Or on Windows like this where X: is an unused drive letter
+
+ rclone mount remote:path/to/files X:
+
When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal,
the mount is automatically stopped.
@@ -995,7 +1002,7 @@ None of these support the concept of directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
-Only supported on Linux, FreeBSD and OS X at the moment.
+Only supported on Linux, FreeBSD, OS X and Windows at the moment.
### rclone mount vs rclone sync/copy ##
@@ -1047,15 +1054,17 @@ rclone mount remote:path /path/to/mountpoint [flags]
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
+ --umask int Override the permission bits set by the filesystem.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
@@ -1153,7 +1162,7 @@ rclone obscure password
## rclone rmdirs
-Remove any empty directoryies under the path.
+Remove empty directories under the path.
### Synopsis
@@ -1175,7 +1184,7 @@ rclone rmdirs remote:path
Copying single files
--------------------
-rclone normally syncs or copies directories. However if the source
+rclone normally syncs or copies directories. However, if the source
remote points to a file, rclone will just copy that file. The
destination remote must point to a directory - rclone will give the
error `Failed to create file system for "remote:file": is a file not a
@@ -1196,7 +1205,7 @@ Where `/tmp/files` contains the single line
test.jpg
-It is recommended to use `copy` when copying single files not `sync`.
+It is recommended to use `copy` when copying individual files, not `sync`.
They have pretty much the same effect but `copy` will use a lot less
memory.
@@ -1235,6 +1244,23 @@ If you are using the root directory on its own then don't quote it
rclone copy E:\ remote:backup
+Copying files or directories with `:` in the names
+--------------------------------------------------
+
+rclone uses `:` to mark a remote name. This is, however, a valid
+filename component in non-Windows OSes. The remote name parser will
+only search for a `:` up to the first `/` so if you need to act on a
+file or directory like this then use the full path starting with a
+`/`, or use `./` as a current directory prefix.
+
+So to sync a directory called `sync:me` to a remote called `remote:` use
+
+ rclone sync ./sync:me remote:path
+
+or
+
+ rclone sync /full/path/to/sync:me remote:path
+
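The rule above can be sketched as a small shell check. This is illustrative only, not rclone's actual parser:

```shell
# Illustrative sketch of the rule above, not rclone's actual parser:
# an argument is treated as a remote only if it contains a ':'
# before the first '/'.
is_remote() {
    head=${1%%/*}        # keep only the part before the first '/'
    case "$head" in
        *:*) return 0 ;; # looks like "remote:path"
        *)   return 1 ;;
    esac
}

is_remote "remote:path"           && echo "remote"      # remote
is_remote "./sync:me"             || echo "local path"  # local path
is_remote "/full/path/to/sync:me" || echo "local path"  # local path
```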
Server Side Copy
----------------
@@ -1256,8 +1282,10 @@ Remotes which don't support server side copy **will** download and
re-upload in this case.
Server side copies are used with `sync` and `copy` and will be
-identified in the log when using the `-v` flag. The may also be used
-with `move` if the remote doesn't support server side move.
+identified in the log when using the `-v` flag. The `move` command
+may also use them if remote doesn't support server side move directly.
+This is done by issuing a server side copy then a delete which is much
+quicker than a download and re-upload.
Server side copies will only be attempted if the remote names are the
same.
@@ -1277,7 +1305,7 @@ possibly signed sequence of decimal numbers, each with optional
fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid
time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-Options which use SIZE use kByte by default. However a suffix of `b`
+Options which use SIZE use kByte by default. However, a suffix of `b`
for bytes, `k` for kBytes, `M` for MBytes and `G` for GBytes may be
used. These are the binary units, eg 1, 2\*\*10, 2\*\*20, 2\*\*30
respectively.
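A rough shell sketch of this parsing rule (illustrative only, not rclone's actual code):

```shell
# Rough sketch of the SIZE parsing rule (not rclone's actual code):
# bare numbers are kBytes; b/k/M/G select binary units.
size_to_bytes() {
    case "$1" in
        *b) echo $(( ${1%b} )) ;;
        *k) echo $(( ${1%k} * 1024 )) ;;
        *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
        *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
        *)  echo $(( $1 * 1024 )) ;;  # no suffix means kBytes
    esac
}

size_to_bytes 10M    # 10485760
size_to_bytes 100    # 102400 (100 kBytes)
```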
@@ -1317,11 +1345,11 @@ Single limits last for the duration of the session. To use a single limit,
specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The
default is `0` which means to not limit bandwidth.
-For example to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
+For example, to limit bandwidth usage to 10 MBytes/s use `--bwlimit 10M`
It is also possible to specify a "timetable" of limits, which will cause
certain limits to be applied at certain times. To specify a timetable, format your
-entries as "HH:MM,BANDWIDTH HH:MM,BANDWITH...".
+entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...".
An example of a typical timetable to avoid link saturation during daytime
working hours could be:
@@ -1337,24 +1365,32 @@ unlimited.
Bandwidth limits only apply to the data transfer. They don't apply to the
bandwidth of the directory listings etc.
-Note that the units are Bytes/s not Bits/s. Typically connections are
-measured in Bits/s - to convert divide by 8. For example let's say
+Note that the units are Bytes/s, not Bits/s. Typically connections are
+measured in Bits/s - to convert divide by 8. For example, let's say
you have a 10 Mbit/s connection and you wish rclone to use half of it
- 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a `--bwlimit
0.625M` parameter for rclone.
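The conversion can be checked with a one-liner (awk is used here for the floating point division):

```shell
# Convert a link speed in Mbit/s to the MByte/s value --bwlimit
# expects: divide by 8. Half of a 10 Mbit/s link is 5 Mbit/s.
awk 'BEGIN { printf "--bwlimit %.3fM\n", 5 / 8 }'   # --bwlimit 0.625M
```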
+On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled
+by sending a `SIGUSR2` signal to rclone. This allows you to remove the
+limit from a long running transfer and restore it to the value
+specified with `--bwlimit` when needed. Assuming there is only one
+rclone instance running, you can toggle the limiter like this:
+
+ kill -SIGUSR2 $(pidof rclone)
+
### --buffer-size=SIZE ###
Use this sized buffer to speed up file transfers. Each `--transfer`
will use this much memory for buffering.
-Set to 0 to disable the buffering for the minimum memory use.
+Set to 0 to disable the buffering for the minimum memory usage.
### --checkers=N ###
The number of checkers to run in parallel. Checkers do the equality
-checking of files during a sync. For some storage systems (eg s3,
-swift, dropbox) this can take a significant amount of time so they are
+checking of files during a sync. For some storage systems (eg S3,
+Swift, Dropbox) this can take a significant amount of time so they are
run in parallel.
The default is to run 8 checkers in parallel.
@@ -1441,7 +1477,7 @@ checks the checksum.
It will also cause rclone to skip verifying the sizes are the same
after transfer.
-This can be useful for transferring files to and from onedrive which
+This can be useful for transferring files to and from OneDrive which
occasionally misreports the size of image files (see
[#399](https://github.com/ncw/rclone/issues/399) for more info).
@@ -1458,8 +1494,8 @@ using `--checksum`).
Log all of rclone's output to FILE. This is not active by default.
This can be useful for tracking down problems with syncs in
-combination with the `-v` flag. See the Logging section for more
-info.
+combination with the `-v` flag. See the [Logging section](#logging)
+for more info.
### --log-level LEVEL ###
@@ -1475,7 +1511,7 @@ and prints stats once a minute by default.
outputs very little when things are working normally. It outputs
warnings and significant events.
-`ERROR` is equivalent to `-q`. It only output error messages.
+`ERROR` is equivalent to `-q`. It only outputs error messages.
### --low-level-retries NUMBER ###
@@ -1486,8 +1522,8 @@ HTTP request. This might be uploading a chunk of a big file for
example. You will see low level retries in the log with the `-v`
flag.
-This shouldn't need to be changed from the default in normal
-operations, however if you get a lot of low level retries you may wish
+This shouldn't need to be changed from the default in normal operations.
+However, if you get a lot of low level retries you may wish
to reduce the value so rclone moves on to a high level retry (see the
`--retries` flag) quicker.
@@ -1551,7 +1587,7 @@ this flag it will make as little output as possible.
Retry the entire sync if it fails this many times (default 3).
-Some remotes can be unreliable and a few retries helps pick up the
+Some remotes can be unreliable and a few retries help pick up the
files which didn't get transferred because of errors.
Disable retries with `--retries 1`.
@@ -1562,7 +1598,7 @@ Normally rclone will look at modification time and size of files to
see if they are equal. If you set this flag then rclone will check
only the size.
-This can be useful transferring files from dropbox which have been
+This can be useful transferring files from Dropbox which have been
modified by the desktop sync client which doesn't set checksums of
modification times in the same way as rclone.
@@ -1576,13 +1612,26 @@ This sets the interval.
The default is `1m`. Use 0 to disable.
-If you set the stats interval then all command can show stats. This
+If you set the stats interval then all commands can show stats. This
can be useful when running other commands, `check` or `mount` for
example.
+Stats are logged at `INFO` level by default which means they won't
+show at default log level `NOTICE`. Use `--stats-log-level NOTICE` or
+`-v` to make them show. See the [Logging section](#logging) for more
+info on log levels.
+
+### --stats-log-level string ###
+
+Log level to show `--stats` output at. This can be `DEBUG`, `INFO`,
+`NOTICE`, or `ERROR`. The default is `INFO`. This means that at the
+default log level of `NOTICE` the stats won't show - if you want them
+to then use `--stats-log-level NOTICE`. See the [Logging
+section](#logging) for more info on log levels.
+
### --stats-unit=bits|bytes ###
-By default data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes/second.
This option allows the data rate to be printed in bits/second.
@@ -1605,7 +1654,7 @@ See `--backup-dir` for more info.
On capable OSes (not Windows or Plan9) send all log output to syslog.
-This can be useful for running rclone in script or `rclone mount`.
+This can be useful for running rclone in a script or `rclone mount`.
### --syslog-facility string ###
@@ -1613,9 +1662,43 @@ If using `--syslog` this sets the syslog facility (eg `KERN`, `USER`).
See `man syslog` for a list of possible facilities. The default
facility is `DAEMON`.
+### --tpslimit float ###
+
+Limit HTTP transactions per second to this. The default is 0, which
+means unlimited transactions per second.
+
+For example, to limit rclone to 10 HTTP transactions per second use
+`--tpslimit 10`, or to 1 transaction every 2 seconds use `--tpslimit
+0.5`.
+
+Use this when the number of transactions per second from rclone is
+causing a problem with the cloud storage provider (eg getting you
+banned or rate limited).
+
+This can be very useful for `rclone mount` to control the behaviour of
+applications using it.
+
+See also `--tpslimit-burst`.
+
+### --tpslimit-burst int ###
+
+Max burst of transactions for `--tpslimit`. (default 1)
+
+Normally `--tpslimit` will do exactly the number of transactions per
+second specified. However, if you supply `--tpslimit-burst` then
+rclone can save up some transactions from when it was idle giving a
+burst of up to the parameter supplied.
+
+For example, if you provide `--tpslimit-burst 10` then if rclone has
+been idle for more than 10/`--tpslimit` seconds it can do 10
+transactions very quickly before they are limited again.
+
+This may be used to increase performance of `--tpslimit` without
+changing the long term average number of transactions per second.
+
### --track-renames ###
-By default rclone doesn't not keep track of renamed files, so if you
+By default, rclone doesn't keep track of renamed files, so if you
rename a file locally then sync it to a remote, rclone will delete the
old file on the remote and upload a new copy.
@@ -1669,9 +1752,9 @@ directory and processes it before using more directory lists to
process any subdirectories. This can be parallelised and works very
quickly using the least amount of memory.
-However some remotes have a way of listing all files beneath a
+However, some remotes have a way of listing all files beneath a
directory in one (or a small number) of transactions. These tend to
-be the bucket based remotes (eg s3, b2, gcs, swift, hubic).
+be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
If you use the `--fast-list` flag then rclone will use this method for
listing directories. This will have the following consequences for
@@ -1785,7 +1868,7 @@ c/u/q>
```
Your configuration is now encrypted, and every time you start rclone
-you will now be asked for the password. In the same menu you can
+you will now be asked for the password. In the same menu, you can
change the password or completely remove encryption from your
configuration.
@@ -1890,7 +1973,7 @@ If you are only copying a small number of files and/or have a large
number of files on the destination then `--no-traverse` will stop
rclone listing the destination and save time.
-However if you are copying a large number of files, especially if you
+However, if you are copying a large number of files, especially if you
are doing a copy where lots of the files haven't changed and won't
need copying then you shouldn't use `--no-traverse`.
@@ -1925,11 +2008,11 @@ Logging
rclone has 4 levels of logging, `Error`, `Notice`, `Info` and `Debug`.
-By default rclone logs to standard error. This means you can redirect
+By default, rclone logs to standard error. This means you can redirect
standard error and still see the normal output of rclone commands (eg
`rclone ls`).
-By default rclone will produce `Error` and `Notice` level messages.
+By default, rclone will produce `Error` and `Notice` level messages.
If you use the `-q` flag, rclone will only produce `Error` messages.
@@ -1954,16 +2037,16 @@ information.
Exit Code
---------
-If any errors occurred during the command, rclone will exit with a
+If any errors occur during the command execution, rclone will exit with a
non-zero exit code. This allows scripts to detect when rclone
operations have failed.
-During the startup phase rclone will exit immediately if an error is
+During the startup phase, rclone will exit immediately if an error is
detected in the configuration. There will always be a log message
immediately before exiting.
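In a script this behaviour can be used like so. This is a sketch, not rclone's documented interface: `run_backup` is a hypothetical wrapper, and `true` stands in for the real rclone command:

```shell
# Sketch of reacting to rclone's exit code in a backup script.
# `true` stands in for the real command, which would be e.g.
#   rclone copy /path/to/files remote:backup
run_backup() {
    "$@"
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "backup failed with exit code $status" >&2
    fi
    return "$status"
}

run_backup true && echo "backup succeeded"
```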
When rclone is running it will accumulate errors as it goes along, and
-only exit with an non-zero exit code if (after retries) there were
+only exit with a non-zero exit code if (after retries) there were
still failed transfers. For every error counted there will be a high
priority log message (visible with `-q`) showing the message and
which file caused the problem. A high priority message is also shown
@@ -1982,11 +2065,11 @@ can be used to set defaults for options or config file entries.
Every option in rclone can have its default set by environment
variable.
-To find the name of the environment variable, first take the long
+To find the name of the environment variable, first, take the long
option name, strip the leading `--`, change `-` to `_`, make
upper case and prepend `RCLONE_`.
-For example to always set `--stats 5s`, set the environment variable
+For example, to always set `--stats 5s`, set the environment variable
`RCLONE_STATS=5s`. If you set stats on the command line this will
override the environment variable setting.
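The naming rule can be expressed as a tiny shell helper (illustrative only, not part of rclone):

```shell
# Illustrative helper showing the naming rule, not part of rclone:
# strip "--", change "-" to "_", uppercase, prepend "RCLONE_".
flag_to_env() {
    printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

flag_to_env --stats               # RCLONE_STATS
flag_to_env --low-level-retries   # RCLONE_LOW_LEVEL_RETRIES
```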
@@ -2009,7 +2092,7 @@ To find the name of the environment variable, you need to set, take
`RCLONE_` + name of remote + `_` + name of config file option and make
it all uppercase.
-For example to configure an S3 remote named `mys3:` without a config
+For example, to configure an S3 remote named `mys3:` without a config
file (using unix ways of setting environment variables):
```
@@ -2553,6 +2636,7 @@ Here is an overview of the major features of each cloud storage system.
| Yandex Disk | MD5 | Yes | No | No | R/W |
| SFTP | - | Yes | Depends | No | - |
| FTP | - | No | Yes | No | - |
+| HTTP | - | No | Yes | No | R |
| The local filesystem | All | Yes | Depends | No | - |
### Hash ###
@@ -2646,6 +2730,7 @@ operations more efficient.
| Yandex Disk | Yes | No | No | No | No [#575](https://github.com/ncw/rclone/issues/575) | Yes |
| SFTP | No | No | Yes | Yes | No | No |
| FTP | No | No | Yes | Yes | No | No |
+| HTTP | No | No | No | No | No | No |
| The local filesystem | Yes | No | Yes | Yes | No | No |
@@ -2871,6 +2956,10 @@ sending them to the trash is required instead then use the
Here are the command line options specific to this cloud storage
system.
#### --drive-chunk-size=SIZE ####
Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
@@ -2880,23 +2969,9 @@ is buffered in memory one per transfer.
Reducing this will reduce memory usage but decrease performance.
-#### --drive-full-list ####
-
-No longer does anything - kept for backwards compatibility.
-
-#### --drive-upload-cutoff=SIZE ####
-
-File size cutoff for switching to chunked upload. Default is 8 MB.
-
-#### --drive-use-trash ####
-
-Send files to the trash instead of deleting permanently. Defaults to
-off, namely deleting files permanently.
-
#### --drive-auth-owner-only ####
-Only consider files owned by the authenticated user. Requires
-that --drive-full-list=true (default).
+Only consider files owned by the authenticated user.
#### --drive-formats ####
@@ -2945,10 +3020,32 @@ Here are the possible extensions with their corresponding mime types.
| xlsx | application/vnd.openxmlformats-officedocument.spreadsheetml.sheet | Microsoft Office Spreadsheet |
| zip | application/zip | A ZIP file of HTML, Images CSS |
+#### --drive-list-chunk int ####
+
+Size of listing chunk 100-1000. 0 to disable. (default 1000)
+
+#### --drive-shared-with-me ####
+
+Only show files that are shared with me
+
#### --drive-skip-gdocs ####
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
+#### --drive-trashed-only ####
+
+Only show files that are in the trash. This will show trashed files
+in their original directory structure.
+
+#### --drive-upload-cutoff=SIZE ####
+
+File size cutoff for switching to chunked upload. Default is 8 MB.
+
+#### --drive-use-trash ####
+
+Send files to the trash instead of deleting permanently. Defaults to
+off, namely deleting files permanently.
+
### Limitations ###
Drive has quite a lot of rate limiting. This causes rclone to be
@@ -2956,6 +3053,40 @@ limited to transferring about 2 files per second only. Individual
files may be transferred much faster at 100s of MBytes/s but lots of
small files can take a long time.
+### Duplicated files ###
+
+Sometimes, for no reason I've been able to track down, drive will
+duplicate a file that rclone uploads. Drive, unlike all the other
+remotes, can have duplicated files.
+
+Duplicated files cause problems with syncing and you will see
+messages in the log about duplicates.
+
+Use `rclone dedupe` to fix duplicated files.
+
+Note that this isn't just a problem with rclone, even Google Photos on
+Android duplicates files on drive sometimes.
+
+### Rclone appears to be re-copying files it shouldn't ###
+
+There are two possible reasons for rclone to re-copy files to Google
+Drive which haven't changed.
+
+The first is the duplicated file issue above - run `rclone dedupe` and
+check your logs for duplicate object or directory messages.
+
+The second is that sometimes Google reports different sizes for the
+Google Docs exports which will cause rclone to re-download Google Docs
+for no apparent reason. `--ignore-size` is a not very satisfactory
+work-around for this if it is causing you a lot of problems.
+
+### Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y" ###
+
+This is the same problem as above. Google reports the Google Doc as
+one size, but rclone downloads a different size. Work around this
+with the `--ignore-size` flag, or wait for rclone to retry the
+download, which it will do automatically.
+
### Making your own client_id ###
When you use rclone with Google drive in its default configuration you
@@ -3400,20 +3531,31 @@ use the secret key as `xxxxxx/xxxx` it will work fine.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
-To use it, install Minio following the instructions from the web site.
+To use it, install Minio following the instructions [here](https://docs.minio.io/docs/minio-quickstart-guide).
When it configures itself Minio will print something like this
```
-AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
+Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
+AccessKey: USWUXHGYZQYFYFFIT3RE
+SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+Region: us-east-1
+SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
-Minio Object Storage:
- http://127.0.0.1:9000
- http://10.0.0.3:9000
+Browser Access:
+ http://192.168.1.106:9000 http://172.23.0.1:9000
-Minio Browser:
- http://127.0.0.1:9000
- http://10.0.0.3:9000
+Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
+ $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+
+Object API (Amazon S3 compatible):
+ Go: https://docs.minio.io/docs/golang-client-quickstart-guide
+ Java: https://docs.minio.io/docs/java-client-quickstart-guide
+ Python: https://docs.minio.io/docs/python-client-quickstart-guide
+ JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
+ .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
+
+Drive Capacity: 26 GiB Free, 165 GiB Total
```
These details need to go into `rclone config` like this. Note that it
@@ -3421,10 +3563,10 @@ is important to put the region in as stated above.
```
env_auth> 1
-access_key_id> WLGDGYAQYIGI833EV05A
-secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+access_key_id> USWUXHGYZQYFYFFIT3RE
+secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
-endpoint> http://10.0.0.3:9000
+endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
```
@@ -3434,22 +3576,19 @@ Which makes the config file look like this
```
[minio]
env_auth = false
-access_key_id = WLGDGYAQYIGI833EV05A
-secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+access_key_id = USWUXHGYZQYFYFFIT3RE
+secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
-endpoint = http://10.0.0.3:9000
+endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
```
-Minio doesn't support all the features of S3 yet. In particular it
-doesn't support MD5 checksums (ETags) or metadata. This means rclone
-can't check MD5SUMs or store the modified date. However you can work
-around this with the `--size-only` flag of rclone.
-
So once set up, for example to copy files into a bucket
- rclone --size-only copy /path/to/files minio:bucket
+```
+rclone copy /path/to/files minio:bucket
+```
Swift
----------------------------------------
@@ -3870,6 +4009,52 @@ Choose a number from below, or type in your own value
5 / Project team owners get OWNER access, and all Users get WRITER access.
\ "publicReadWrite"
bucket_acl> 2
+Location for the newly created buckets.
+Choose a number from below, or type in your own value
+ 1 / Empty for default location (US).
+ \ ""
+ 2 / Multi-regional location for Asia.
+ \ "asia"
+ 3 / Multi-regional location for Europe.
+ \ "eu"
+ 4 / Multi-regional location for United States.
+ \ "us"
+ 5 / Taiwan.
+ \ "asia-east1"
+ 6 / Tokyo.
+ \ "asia-northeast1"
+ 7 / Singapore.
+ \ "asia-southeast1"
+ 8 / Sydney.
+ \ "australia-southeast1"
+ 9 / Belgium.
+ \ "europe-west1"
+10 / London.
+ \ "europe-west2"
+11 / Iowa.
+ \ "us-central1"
+12 / South Carolina.
+ \ "us-east1"
+13 / Northern Virginia.
+ \ "us-east4"
+14 / Oregon.
+ \ "us-west1"
+location> 12
+The storage class to use when storing objects in Google Cloud Storage.
+Choose a number from below, or type in your own value
+ 1 / Default
+ \ ""
+ 2 / Multi-regional storage class
+ \ "MULTI_REGIONAL"
+ 3 / Regional storage class
+ \ "REGIONAL"
+ 4 / Nearline storage class
+ \ "NEARLINE"
+ 5 / Coldline storage class
+ \ "COLDLINE"
+ 6 / Durable reduced availability storage class
+ \ "DURABLE_REDUCED_AVAILABILITY"
+storage_class> 5
Remote config
Use auto config?
* Say Y if not sure
@@ -4704,7 +4889,7 @@ system.
When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory and there might a maximum of
-`--transfers` chunks in progress at once. 100,000,000 Bytes is the
+`--transfers` chunks in progress at once. 5,000,000 Bytes is the
minimum size (default 96M).
#### --b2-upload-cutoff=SIZE ####
@@ -4895,17 +5080,14 @@ Here is an example of making a SFTP configuration. First run
rclone config
-This will guide you through an interactive setup process. You will
-need your account number (a short hex number) and key (a long hex
-number) which you can get from the SFTP control panel.
+This will guide you through an interactive setup process.
+
```
No remotes found - make a new one
n) New remote
-r) Rename remote
-c) Copy remote
s) Set configuration password
q) Quit config
-n/r/c/s/q> n
+n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
@@ -4937,6 +5119,8 @@ Choose a number from below, or type in your own value
\ "sftp"
14 / Yandex Disk
\ "yandex"
+15 / http Connection
+ \ "http"
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
@@ -4944,21 +5128,178 @@ Choose a number from below, or type in your own value
\ "example.com"
host> example.com
SSH username, leave blank for current username, ncw
-user>
+user> sftpuser
SSH port, leave blank to use default (22)
port>
-SSH password, leave blank to use ssh-agent
+SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
+Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+key_file>
Remote config
--------------------
[remote]
host = example.com
-user =
+user = sftpuser
port =
pass =
+key_file =
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+This remote is called `remote` and can now be used like this
+
+See all directories in the home directory
+
+ rclone lsd remote:
+
+Make a new directory
+
+ rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+ rclone ls remote:path/to/directory
+
+Sync `/home/local/directory` to the remote directory, deleting any
+excess files in the directory.
+
+ rclone sync /home/local/directory remote:directory
+
+### SSH Authentication ###
+
+The SFTP remote supports 3 authentication methods
+
+ * Password
+ * Key file
+ * ssh-agent
+
+Key files should be unencrypted PEM-encoded private key files. For
+instance `/home/$USER/.ssh/id_rsa`.
+
+If you don't specify `pass` or `key_file` then it will attempt to
+contact an ssh-agent.
+
+### ssh-agent on macOS ###
+
+Note that there seem to be various problems with using an ssh-agent on
+macOS due to recent changes in the OS. The most effective work-around
+seems to be to start an ssh-agent in each session, eg
+
+ eval `ssh-agent -s` && ssh-add -A
+
+And then at the end of the session
+
+ eval `ssh-agent -k`
+
+These commands can be used in scripts of course.
+
+### Modified time ###
+
+Modified times are stored on the server to 1 second precision.
+
+Modified times are used in syncing and are fully supported.
+
+### Limitations ###
+
+SFTP does not support any checksums.
+
+The only ssh agent supported under Windows is PuTTY's pageant.
+
+SFTP isn't supported under plan9 until [this
+issue](https://github.com/pkg/sftp/issues/156) is fixed.
+
+Note that since SFTP isn't HTTP based the following flags don't work
+with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
+
+Note that `--timeout` isn't supported (but `--contimeout` is).
+
+FTP
+------------------------------
+
+FTP is the File Transfer Protocol. FTP support is provided using the
+[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
+package.
+
+Here is an example of making an FTP configuration. First run
+
+ rclone config
+
+This will guide you through an interactive setup process. An FTP
+remote only needs a host, a username and a password. With an
+anonymous FTP server, you will need to use `anonymous` as the
+username and your email address as the password.
+
+```
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+10 / Local Disk
+ \ "local"
+11 / Microsoft OneDrive
+ \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+13 / SSH/SFTP Connection
+ \ "sftp"
+14 / Yandex Disk
+ \ "yandex"
+Storage> ftp
+FTP host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+ \ "ftp.example.com"
+host> ftp.example.com
+FTP username, leave blank for current username, ncw
+user>
+FTP port, leave blank to use default (21)
+port>
+FTP password
+y) Yes type in my own password
+g) Generate random password
+y/g> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+host = ftp.example.com
+user =
+port =
+pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
@@ -4987,24 +5328,154 @@ excess files in the directory.
### Modified time ###
-Modified times are stored on the server to 1 second precision.
+FTP does not support modified times. Any times you see on the server
+will be the time of upload.
-Modified times are used in syncing and are fully supported.
+### Checksums ###
+
+FTP does not support any checksums.
### Limitations ###
-SFTP does not support any checksums.
-
-The only ssh agent supported under Windows is Putty's pagent.
-
-SFTP isn't supported under plan9 until [this
-issue](https://github.com/pkg/sftp/issues/156) is fixed.
-
-Note that since SFTP isn't HTTP based the following flags don't work
+Note that since FTP isn't HTTP based the following flags don't work
with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
Note that `--timeout` isn't supported (but `--contimeout` is).
+FTP could support server side move but doesn't yet.
+
+HTTP
+-------------------------------------------------
+
+The HTTP remote is a read only remote for reading files from a
+webserver. The webserver should provide file listings which rclone
+will read and turn into a remote. This has been tested with common
+webservers such as Apache/Nginx/Caddy and will likely work with file
+listings from most web servers. (If it doesn't then please file an
+issue, or send a pull request!)
+
+Paths are specified as `remote:` or `remote:path/to/dir`.
+
+Here is an example of how to make a remote called `remote`. First
+run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+10 / Local Disk
+ \ "local"
+11 / Microsoft OneDrive
+ \ "onedrive"
+12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+13 / SSH/SFTP Connection
+ \ "sftp"
+14 / Yandex Disk
+ \ "yandex"
+15 / http Connection
+ \ "http"
+Storage> http
+URL of http host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "https://example.com"
+url> https://beta.rclone.org
+Remote config
+--------------------
+[remote]
+url = https://beta.rclone.org
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Current remotes:
+
+Name Type
+==== ====
+remote http
+
+e) Edit existing remote
+n) New remote
+d) Delete remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+e/n/d/r/c/s/q> q
+```
+
+This remote is called `remote` and can now be used like this
+
+See all the top level directories
+
+ rclone lsd remote:
+
+List the contents of a directory
+
+ rclone ls remote:directory
+
+Sync the remote `directory` to `/home/local/directory`, deleting any excess files.
+
+ rclone sync remote:directory /home/local/directory
+
+### Read only ###
+
+This remote is read only - you can't upload files to an HTTP server.
+
+### Modified time ###
+
+Most HTTP servers store time accurate to 1 second.
+
+### Checksum ###
+
+No checksums are stored.
+
+### Usage without a config file ###
+
+Note that since only two environment variables need to be set, it is
+easy to use rclone without a config file like this:
+
+```
+RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
+```
+
+Or if you prefer
+
+```
+export RCLONE_CONFIG_ZZ_TYPE=http
+export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+rclone lsd zz:
+```
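
The `ZZ` in the variable names is the remote name upper-cased; any
config key can be supplied the same way. A minimal sketch of the naming
rule (the `config_var` helper is illustrative, not part of rclone):

```shell
# Illustrative helper: print the environment variable rclone reads for
# a given remote name and config key.  Both parts are upper-cased and
# joined onto the RCLONE_CONFIG_ prefix.
config_var() {
  printf 'RCLONE_CONFIG_%s_%s\n' \
    "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')" \
    "$(printf '%s' "$2" | tr '[:lower:]' '[:upper:]')"
}

config_var zz url    # prints RCLONE_CONFIG_ZZ_URL
```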
+
Crypt
----------------------------------------
@@ -5410,130 +5881,6 @@ then rclone uses an internal one.
encrypted data. For full protection against this you should always use
a salt.
-FTP
-------------------------------
-
-FTP is the File Transfer Protocol. FTP support is provided using the
-[github.com/jlaffaye/ftp](https://godoc.org/github.com/jlaffaye/ftp)
-package.
-
-Here is an example of making an FTP configuration. First run
-
- rclone config
-
-This will guide you through an interactive setup process. An FTP remote only
-needs a host together with and a username and a password. With anonymous FTP
-server, you will need to use `anonymous` as username and your email address as
-the password.
-
-```
-No remotes found - make a new one
-n) New remote
-r) Rename remote
-c) Copy remote
-s) Set configuration password
-q) Quit config
-n/r/c/s/q> n
-name> remote
-Type of storage to configure.
-Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
-10 / Local Disk
- \ "local"
-11 / Microsoft OneDrive
- \ "onedrive"
-12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
-13 / SSH/SFTP Connection
- \ "sftp"
-14 / Yandex Disk
- \ "yandex"
-Storage> ftp
-FTP host to connect to
-Choose a number from below, or type in your own value
- 1 / Connect to ftp.example.com
- \ "ftp.example.com"
-host> ftp.example.com
-FTP username, leave blank for current username, ncw
-user>
-FTP port, leave blank to use default (21)
-port>
-FTP password
-y) Yes type in my own password
-g) Generate random password
-y/g> y
-Enter the password:
-password:
-Confirm the password:
-password:
-Remote config
---------------------
-[remote]
-host = ftp.example.com
-user =
-port =
-pass = *** ENCRYPTED ***
---------------------
-y) Yes this is OK
-e) Edit this remote
-d) Delete this remote
-y/e/d> y
-```
-
-This remote is called `remote` and can now be used like this
-
-See all directories in the home directory
-
- rclone lsd remote:
-
-Make a new directory
-
- rclone mkdir remote:path/to/directory
-
-List the contents of a directory
-
- rclone ls remote:path/to/directory
-
-Sync `/home/local/directory` to the remote directory, deleting any
-excess files in the directory.
-
- rclone sync /home/local/directory remote:directory
-
-### Modified time ###
-
-FTP does not support modified times. Any times you see on the server
-will be time of upload.
-
-### Checksums ###
-
-FTP does not support any checksums.
-
-### Limitations ###
-
-Note that since FTP isn't HTTP based the following flags don't work
-with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
-
-Note that `--timeout` isn't supported (but `--contimeout` is).
-
-FTP could support server side move but doesn't yet.
-
Local Filesystem
-------------------------------------------
@@ -5706,6 +6053,109 @@ flag.
Changelog
---------
+ * v1.37 - 2017-07-22
+ * New backends
+ * FTP - thanks to Antonio Messina
+ * HTTP - thanks to Vasiliy Tolstov
+ * New commands
+ * rclone ncdu - for exploring a remote with a text based user interface.
+ * rclone lsjson - for listing with a machine readable output
+ * rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
+ * New Features
+ * Implement --fast-list flag
+ * This allows remotes to list recursively if they can
+ * This uses fewer transactions (important if you pay for them)
+ * This may or may not be quicker
+ * This will use more memory as it has to hold the listing in memory
+ * --old-sync-method deprecated - the remaining uses are covered by --fast-list
+ * This involved a major re-write of all the listing code
+ * Add --tpslimit and --tpslimit-burst to limit transactions per second
+ * this is useful in conjunction with `rclone mount` to limit external apps
+ * Add --stats-log-level so you can see --stats without -v
+ * Print password prompts to stderr - Hraban Luyat
+ * Warn about duplicate files when syncing
+ * Oauth improvements
+ * allow auth_url and token_url to be set in the config file
+ * Print redirection URI if using own credentials.
+ * Don't Mkdir at the start of sync to save transactions
+ * Compile
+ * Update build to go1.8.3
+ * Require go1.6 for building rclone
+ * Compile 386 builds with "GO386=387" for maximum compatibility
+ * Bug Fixes
+ * Fix menu selection when no remotes
+ * Config saving reworked to not kill the file if disk gets full
+ * Don't delete remote if name does not change while renaming
+ * moveto, copyto: report transfers and checks as per move and copy
+ * Local
+ * Add --local-no-unicode-normalization flag - Bob Potter
+ * Mount
+ * Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help
+ * Compare checksums on upload/download via FUSE
+ * Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino
+ * On read only open of file, make open pending until first read
+ * Make --read-only reject modify operations
+ * Implement ModTime via FUSE for remotes that support it
+ * Allow modTime to be changed even before all writers are closed
+ * Fix panic on renames
+ * Fix hang on errored upload
+ * Crypt
+ * Report the name:root as specified by the user
+ * Add an "obfuscate" option for filename encryption - Stephen Harris
+ * Amazon Drive
+ * Fix initialization order for token renewer
+ * Remove revoked credentials, allow oauth proxy config and update docs
+ * B2
+ * Reduce minimum chunk size to 5MB
+ * Drive
+ * Add team drive support
+ * Reduce bandwidth by adding fields for partial responses - Martin Kristensen
+ * Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai
+ * Add --drive-trashed-only to read only the files in the trash
+ * Remove obsolete --drive-full-list
+ * Add missing seek to start on retries of chunked uploads
+ * Fix stats accounting for upload
+ * Convert / in names to a unicode equivalent (／)
+ * Poll for Google Drive changes when mounted
+ * OneDrive
+ * Fix the uploading of files with spaces
+ * Fix initialization order for token renewer
+ * Display speeds accurately when uploading - Yoni Jah
+ * Swap to using http://localhost:53682/ as redirect URL - Michael Ledin
+ * Retry on token expired error, reset upload body on retry - Yoni Jah
+ * Google Cloud Storage
+ * Add ability to specify location and storage class via config and command line - thanks gdm85
+ * Create container if necessary on server side copy
+ * Increase directory listing chunk to 1000 to increase performance
+ * Obtain a refresh token for GCS - Steven Lu
+ * Yandex
+ * Fix the name reported in log messages (was empty)
+ * Correct error return for listing empty directory
+ * Dropbox
+ * Rewritten to use the v2 API
+ * Now supports ModTime
+ * Can only be set by uploading the file again
+ * If you uploaded with an old rclone, rclone may upload everything again
+ * Use `--size-only` or `--checksum` to avoid this
+ * Now supports the Dropbox content hashing scheme
+ * Now supports low level retries
+ * S3
+ * Work around eventual consistency in bucket creation
+ * Create container if necessary on server side copy
+ * Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed
+ * Swift, Hubic
+ * Fix zero length directory markers showing in the subdirectory listing
+ * this caused lots of duplicate transfers
+ * Fix paged directory listings
+ * this caused duplicate directory errors
+ * Create container if necessary on server side copy
+ * Increase directory listing chunk to 1000 to increase performance
+ * Make sensible error if the user forgets the container
+ * SFTP
+ * Add support for using ssh key files
+ * Fix under Windows
+ * Fix ssh agent on Windows
+ * Adapt to latest version of library - Igor Kharin
* v1.36 - 2017-03-18
* New Features
* SFTP remote (Jack Schmidt)
@@ -6583,6 +7033,11 @@ Contributors
* Ruwbin
* Fabian Möller
* Edward Q. Bridges
+ * Vasiliy Tolstov
+ * Harshavardhana
+ * sainaen
+ * gdm85
+ * Yaroslav Halchenko
# Contact the rclone project #
diff --git a/MANUAL.txt b/MANUAL.txt
index 924738f04..5c45cfd02 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Jun 15, 2017
+Jul 22, 2017
@@ -24,6 +24,7 @@ from
- Yandex Disk
- SFTP
- FTP
+- HTTP
- The local filesystem
Features
@@ -200,7 +201,7 @@ Enable the snap-openwrt feed.
Configure
-First you'll need to configure rclone. As the object storage systems
+First, you'll need to configure rclone. As the object storage systems
have quite complicated authentication these are kept in a config file.
(See the --config entry for how to find the config file and choose its
location.)
@@ -212,7 +213,7 @@ option:
See the following for detailed instructions for
-- Google drive
+- Google Drive
- Amazon S3
- Swift / Rackspace Cloudfiles / Memset Memstore
- Dropbox
@@ -225,6 +226,7 @@ See the following for detailed instructions for
- Yandex Disk
- SFTP
- FTP
+- HTTP
- Crypt - to encrypt other remotes
@@ -720,7 +722,7 @@ time or MD5SUM. It doesn't delete files from the destination.
rclone cryptcheck
-Cryptcheck checks the integritity of a crypted remote.
+Cryptcheck checks the integrity of a crypted remote.
Synopsis
@@ -861,8 +863,8 @@ Mount the remote as a mountpoint. EXPERIMENTAL
Synopsis
-rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
-cloud storage systems as a file system with FUSE.
+rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
+Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
@@ -873,6 +875,10 @@ Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
+Or on Windows like this where X: is an unused drive letter
+
+ rclone mount remote:path/to/files X:
+
When the program ends, either via Ctrl+C or receiving a SIGINT or
SIGTERM signal, the mount is automatically stopped.
@@ -898,7 +904,7 @@ as will swift:bucket/path. None of these support the concept of
directories, so empty directories will have a tendency to disappear once
they fall out of the directory cache.
-Only supported on Linux, FreeBSD and OS X at the moment.
+Only supported on Linux, FreeBSD, OS X and Windows at the moment.
rclone mount vs rclone sync/copy
@@ -945,15 +951,17 @@ Options
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
+ --umask int Override the permission bits set by the filesystem.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
@@ -1040,7 +1048,7 @@ Obscure password for use in the rclone.conf
rclone rmdirs
-Remove any empty directoryies under the path.
+Remove empty directories under the path.
Synopsis
@@ -1056,7 +1064,7 @@ empty directories in.
Copying single files
-rclone normally syncs or copies directories. However if the source
+rclone normally syncs or copies directories. However, if the source
remote points to a file, rclone will just copy that file. The
destination remote must point to a directory - rclone will give the
error
@@ -1078,8 +1086,9 @@ Where /tmp/files contains the single line
test.jpg
-It is recommended to use copy when copying single files not sync. They
-have pretty much the same effect but copy will use a lot less memory.
+It is recommended to use copy when copying individual files, not sync.
+They have pretty much the same effect but copy will use a lot less
+memory.
Quoting and the shell
@@ -1117,6 +1126,23 @@ If you are using the root directory on its own then don't quote it (see
rclone copy E:\ remote:backup
+Copying files or directories with : in the names
+
+rclone uses : to mark a remote name. This is, however, a valid filename
+component in non-Windows OSes. The remote name parser will only search
+for a : up to the first / so if you need to act on a file or directory
+like this then use the full path starting with a /, or use ./ as a
+current directory prefix.
+
+So to sync a directory called sync:me to a remote called remote: use
+
+ rclone sync ./sync:me remote:path
+
+or
+
+ rclone sync /full/path/to/sync:me remote:path
+
+
Server Side Copy
Most remotes (but not all - see the overview) support server side copy.
@@ -1136,8 +1162,10 @@ Remotes which don't support server side copy WILL download and re-upload
in this case.
Server side copies are used with sync and copy and will be identified in
-the log when using the -v flag. The may also be used with move if the
-remote doesn't support server side move.
+the log when using the -v flag. The move command may also use them if
+the remote doesn't support server side move directly. This is done by
+issuing a server side copy then a delete, which is much quicker than a
+download and re-upload.
Server side copies will only be attempted if the remote names are the
same.
@@ -1157,7 +1185,7 @@ possibly signed sequence of decimal numbers, each with optional fraction
and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units
are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-Options which use SIZE use kByte by default. However a suffix of b for
+Options which use SIZE use kByte by default. However, a suffix of b for
bytes, k for kBytes, M for MBytes and G for GBytes may be used. These
are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
@@ -1195,11 +1223,11 @@ Single limits last for the duration of the session. To use a single
limit, specify the desired bandwidth in kBytes/s, or use a suffix
b|k|M|G. The default is 0 which means to not limit bandwidth.
-For example to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
+For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
It is also possible to specify a "timetable" of limits, which will cause
certain limits to be applied at certain times. To specify a timetable,
-format your entries as "HH:MM,BANDWIDTH HH:MM,BANDWITH...".
+format your entries as "HH:MM,BANDWIDTH HH:MM,BANDWIDTH...".
An example of a typical timetable to avoid link saturation during
daytime working hours could be:
@@ -1215,24 +1243,33 @@ Anything between 11pm and 8am will remain unlimited.
Bandwidth limits only apply to the data transfer. They don't apply to
the bandwidth of the directory listings etc.
-Note that the units are Bytes/s not Bits/s. Typically connections are
-measured in Bits/s - to convert divide by 8. For example let's say you
+Note that the units are Bytes/s, not Bits/s. Typically connections are
+measured in Bits/s - to convert divide by 8. For example, let's say you
have a 10 Mbit/s connection and you wish rclone to use half of it - 5
Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
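The arithmetic above can be checked quickly (awk is used here just as a
calculator):

```shell
# --bwlimit takes Byte-based units, so divide the Mbit/s figure by 8.
# Half of a 10 Mbit/s link is 5 Mbit/s:
awk 'BEGIN { printf "%.3f\n", 5 / 8 }'
# prints 0.625 - so pass --bwlimit 0.625M to rclone
```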
+On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled
+by sending a SIGUSR2 signal to rclone. This allows you to remove the
+limit from a long running rclone transfer and to restore it to the
+value specified with --bwlimit quickly when needed. Assuming there
+is only one rclone instance running, you can toggle the limiter like
+this:
+
+ kill -SIGUSR2 $(pidof rclone)
+
--buffer-size=SIZE
Use this sized buffer to speed up file transfers. Each --transfer will
use this much memory for buffering.
-Set to 0 to disable the buffering for the minimum memory use.
+Set to 0 to disable the buffering for the minimum memory usage.
--checkers=N
The number of checkers to run in parallel. Checkers do the equality
-checking of files during a sync. For some storage systems (eg s3, swift,
-dropbox) this can take a significant amount of time so they are run in
+checking of files during a sync. For some storage systems (eg S3, Swift,
+Dropbox) this can take a significant amount of time so they are run in
parallel.
The default is to run 8 checkers in parallel.
@@ -1320,7 +1357,7 @@ checksum.
It will also cause rclone to skip verifying the sizes are the same after
transfer.
-This can be useful for transferring files to and from onedrive which
+This can be useful for transferring files to and from OneDrive which
occasionally misreports the size of image files (see #399 for more
info).
@@ -1353,7 +1390,7 @@ NOTICE is the default log level if no logging flags are supplied. It
outputs very little when things are working normally. It outputs
warnings and significant events.
-ERROR is equivalent to -q. It only output error messages.
+ERROR is equivalent to -q. It only outputs error messages.
--low-level-retries NUMBER
@@ -1363,10 +1400,10 @@ A low level retry is used to retry a failing operation - typically one
HTTP request. This might be uploading a chunk of a big file for example.
You will see low level retries in the log with the -v flag.
-This shouldn't need to be changed from the default in normal operations,
-however if you get a lot of low level retries you may wish to reduce the
-value so rclone moves on to a high level retry (see the --retries flag)
-quicker.
+This shouldn't need to be changed from the default in normal operations.
+However, if you get a lot of low level retries you may wish to reduce
+the value so rclone moves on to a high level retry (see the --retries
+flag) quicker.
Disable low level retries with --low-level-retries 1.
@@ -1427,7 +1464,7 @@ flag it will make as little output as possible.
Retry the entire sync if it fails this many times (default 3).
-Some remotes can be unreliable and a few retries helps pick up the files
+Some remotes can be unreliable and a few retries help pick up the files
which didn't get transferred because of errors.
Disable retries with --retries 1.
@@ -1438,7 +1475,7 @@ Normally rclone will look at modification time and size of files to see
if they are equal. If you set this flag then rclone will check only the
size.
-This can be useful transferring files from dropbox which have been
+This can be useful transferring files from Dropbox which have been
modified by the desktop sync client which doesn't set checksums of
modification times in the same way as rclone.
@@ -1451,12 +1488,24 @@ This sets the interval.
The default is 1m. Use 0 to disable.
-If you set the stats interval then all command can show stats. This can
+If you set the stats interval then all commands can show stats. This can
be useful when running other commands, check or mount for example.
+Stats are logged at INFO level by default which means they won't show at
+default log level NOTICE. Use --stats-log-level NOTICE or -v to make
+them show. See the Logging section for more info on log levels.
+
+--stats-log-level string
+
+Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or
+ERROR. The default is INFO. This means at the default level of logging
+which is NOTICE the stats won't show - if you want them to then use
+--stats-log-level NOTICE. See the Logging section for more info on log
+levels.
+
--stats-unit=bits|bytes
-By default data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes/second.
This option allows the data rate to be printed in bits/second.
@@ -1479,7 +1528,7 @@ See --backup-dir for more info.
On capable OSes (not Windows or Plan9) send all log output to syslog.
-This can be useful for running rclone in script or rclone mount.
+This can be useful for running rclone in a script or rclone mount.
--syslog-facility string
@@ -1487,11 +1536,44 @@ If using --syslog this sets the syslog facility (eg KERN, USER). See
man syslog for a list of possible facilities. The default facility is
DAEMON.
+--tpslimit float
+
+Limit HTTP transactions per second to this. Default is 0 which is used
+to mean unlimited transactions per second.
+
+For example, to limit rclone to 10 HTTP transactions per second use
+--tpslimit 10, or to 1 transaction every 2 seconds use --tpslimit 0.5.
+
+Use this when the number of transactions per second from rclone is
+causing a problem with the cloud storage provider (eg getting you banned
+or rate limited).
+
+This can be very useful for rclone mount to control the behaviour of
+applications using it.
+
+See also --tpslimit-burst.
+
+--tpslimit-burst int
+
+Max burst of transactions for --tpslimit. (default 1)
+
+Normally --tpslimit will do exactly the number of transactions per
+second specified. However, if you supply --tpslimit-burst then rclone
+can save up some transactions from when it was idle, giving a burst of
+up to the parameter supplied.
+
+For example, if you provide --tpslimit-burst 10 then if rclone has
+been idle for more than 10*(1/--tpslimit) seconds it can do 10
+transactions very quickly before they are limited again.
+
+This may be used to increase performance of --tpslimit without changing
+the long term average number of transactions per second.
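As a sanity check on the numbers: the steady state gap between
transactions is 1/--tpslimit seconds, and a full burst is earned after
--tpslimit-burst of those gaps (plain arithmetic, no rclone involved):

```shell
# With --tpslimit 0.5 --tpslimit-burst 10: one transaction every
# 2 seconds, and 20 seconds of idle time earns the full burst of 10.
awk 'BEGIN {
  tps = 0.5; burst = 10
  gap = 1 / tps            # seconds between transactions
  idle = burst * gap       # idle seconds needed for a full burst
  printf "%g %g\n", gap, idle
}'
# prints "2 20"
```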
+
--track-renames
-By default rclone doesn't not keep track of renamed files, so if you
-rename a file locally then sync it to a remote, rclone will delete the
-old file on the remote and upload a new copy.
+By default, rclone doesn't keep track of renamed files, so if you rename
+a file locally then sync it to a remote, rclone will delete the old file
+on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or server
side move, and the source and destination have a compatible hash, then
@@ -1542,9 +1624,9 @@ and processes it before using more directory lists to process any
subdirectories. This can be parallelised and works very quickly using
the least amount of memory.
-However some remotes have a way of listing all files beneath a directory
-in one (or a small number) of transactions. These tend to be the bucket
-based remotes (eg s3, b2, gcs, swift, hubic).
+However, some remotes have a way of listing all files beneath a
+directory in one (or a small number) of transactions. These tend to be
+the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
If you use the --fast-list flag then rclone will use this method for
listing directories. This will have the following consequences for the
@@ -1656,7 +1738,7 @@ Go into s, Set configuration password:
c/u/q>
Your configuration is now encrypted, and every time you start rclone you
-will now be asked for the password. In the same menu you can change the
+will now be asked for the password. In the same menu, you can change the
password or completely remove encryption from your configuration.
There is no way to recover the configuration if you lose your password.
@@ -1752,7 +1834,7 @@ If you are only copying a small number of files and/or have a large
number of files on the destination then --no-traverse will stop rclone
listing the destination and save time.
-However if you are copying a large number of files, especially if you
+However, if you are copying a large number of files, especially if you
are doing a copy where lots of the files haven't changed and won't need
copying then you shouldn't use --no-traverse.
@@ -1787,11 +1869,11 @@ Logging
rclone has 4 levels of logging, Error, Notice, Info and Debug.
-By default rclone logs to standard error. This means you can redirect
+By default, rclone logs to standard error. This means you can redirect
standard error and still see the normal output of rclone commands (eg
rclone ls).
-By default rclone will produce Error and Notice level messages.
+By default, rclone will produce Error and Notice level messages.
If you use the -q flag, rclone will only produce Error messages.
@@ -1816,16 +1898,16 @@ information.
Exit Code
-If any errors occurred during the command, rclone will exit with a
-non-zero exit code. This allows scripts to detect when rclone operations
-have failed.
+If any errors occur during the command execution, rclone will exit with
+a non-zero exit code. This allows scripts to detect when rclone
+operations have failed.
-During the startup phase rclone will exit immediately if an error is
+During the startup phase, rclone will exit immediately if an error is
detected in the configuration. There will always be a log message
immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and
-only exit with an non-zero exit code if (after retries) there were still
+only exit with a non-zero exit code if (after retries) there were still
failed transfers. For every error counted there will be a high priority
log message (visible with -q) showing the message and which file caused
the problem. A high priority message is also shown when starting a retry
@@ -1843,11 +1925,11 @@ Options
Every option in rclone can have its default set by environment variable.
-To find the name of the environment variable, first take the long option
-name, strip the leading --, change - to _, make upper case and prepend
-RCLONE_.
+To find the name of the environment variable, first, take the long
+option name, strip the leading --, change - to _, make upper case and
+prepend RCLONE_.
-For example to always set --stats 5s, set the environment variable
+For example, to always set --stats 5s, set the environment variable
RCLONE_STATS=5s. If you set stats on the command line this will override
the environment variable setting.
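The rewrite rule above can be sketched as a small helper (illustrative
only, not something rclone ships):

```shell
# Illustrative: derive the environment variable for a long option name
# following the rule above - strip the leading --, change - to _,
# upper-case, prepend RCLONE_.
opt_var() {
  printf 'RCLONE_%s\n' "$(printf '%s' "${1#--}" | tr 'a-z-' 'A-Z_')"
}

opt_var --stats                # prints RCLONE_STATS
opt_var --low-level-retries    # prints RCLONE_LOW_LEVEL_RETRIES
```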
@@ -1870,7 +1952,7 @@ To find the name of the environment variable, you need to set, take
RCLONE_ + name of remote + _ + name of config file option and make it
all uppercase.
-For example to configure an S3 remote named mys3: without a config file
+For example, to configure an S3 remote named mys3: without a config file
(using unix ways of setting environment variables):
$ export RCLONE_CONFIG_MYS3_TYPE=s3
@@ -2417,6 +2499,7 @@ Here is an overview of the major features of each cloud storage system.
Yandex Disk MD5 Yes No No R/W
SFTP - Yes Depends No -
FTP - No Yes No -
+ HTTP - No Yes No R
The local filesystem All Yes Depends No -
Hash
@@ -2509,6 +2592,7 @@ more efficient.
Yandex Disk Yes No No No No #575 Yes
SFTP No No Yes Yes No No
FTP No No Yes Yes No No
+ HTTP No No No No No No
The local filesystem Yes No Yes Yes No No
Purge
@@ -2727,6 +2811,10 @@ Specific options
Here are the command line options specific to this cloud storage system.
+--drive-auth-owner-only
+
+Only consider files owned by the authenticated user.
+
--drive-chunk-size=SIZE
Upload chunk size. Must be a power of 2 >= 256k. Default value is 8 MB.
@@ -2736,23 +2824,9 @@ buffered in memory one per transfer.
Reducing this will reduce memory usage but decrease performance.
---drive-full-list
-
-No longer does anything - kept for backwards compatibility.
-
---drive-upload-cutoff=SIZE
-
-File size cutoff for switching to chunked upload. Default is 8 MB.
-
---drive-use-trash
-
-Send files to the trash instead of deleting permanently. Defaults to
-off, namely deleting files permanently.
-
--drive-auth-owner-only
-Only consider files owned by the authenticated user. Requires that
---drive-full-list=true (default).
+Only consider files owned by the authenticated user.
--drive-formats
@@ -2869,11 +2943,33 @@ Here are the possible extensions with their corresponding mime types.
CSS
-------------------------------------
+--drive-list-chunk int
+
+Size of listing chunk 100-1000. 0 to disable. (default 1000)
+
+--drive-shared-with-me
+
+Only show files that are shared with me
+
--drive-skip-gdocs
Skip google documents in all listings. If given, gdocs practically
become invisible to rclone.
+--drive-trashed-only
+
+Only show files that are in the trash. This will show trashed files in
+their original directory structure.
+
+--drive-upload-cutoff=SIZE
+
+File size cutoff for switching to chunked upload. Default is 8 MB.
+
+--drive-use-trash
+
+Send files to the trash instead of deleting permanently. Defaults to
+off, namely deleting files permanently.
+
Limitations
Drive has quite a lot of rate limiting. This causes rclone to be limited
@@ -2881,6 +2977,40 @@ to transferring about 2 files per second only. Individual files may be
transferred much faster at 100s of MBytes/s but lots of small files can
take a long time.
+Duplicated files
+
+Sometimes, for no reason I've been able to track down, drive will
+duplicate a file that rclone uploads. Drive, unlike all the other
+remotes, can have duplicated files.
+
+Duplicated files cause problems with the syncing and you will see
+messages in the log about duplicates.
+
+Use rclone dedupe to fix duplicated files.
+
+Note that this isn't just a problem with rclone, even Google Photos on
+Android duplicates files on drive sometimes.
+
+Rclone appears to be re-copying files it shouldn't
+
+There are two possible reasons for rclone to recopy files which haven't
+changed to Google Drive.
+
+The first is the duplicated file issue above - run rclone dedupe and
+check your logs for duplicate object or directory messages.
+
+The second is that sometimes Google reports different sizes for the
+Google Docs exports which will cause rclone to re-download Google Docs
+for no apparent reason. --ignore-size is a not very satisfactory
+work-around for this if it is causing you a lot of problems.
+
+Google docs downloads sometimes fail with "Failed to copy: read X bytes expecting Y"
+
+This is the same problem as above. Google reports the Google doc is
+one size, but rclone downloads a different size. Work around this with
+the --ignore-size flag or wait for rclone to retry the download, which
+it will.
+
Making your own client_id
When you use rclone with Google drive in its default configuration you
@@ -3317,28 +3447,39 @@ and devops.
It is very easy to install and provides an S3 compatible server which
can be used by rclone.
-To use it, install Minio following the instructions from the web site.
+To use it, install Minio following the instructions here.
When it configures itself Minio will print something like this
- AccessKey: WLGDGYAQYIGI833EV05A SecretKey: BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF Region: us-east-1
+ Endpoint: http://192.168.1.106:9000 http://172.23.0.1:9000
+ AccessKey: USWUXHGYZQYFYFFIT3RE
+ SecretKey: MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+ Region: us-east-1
+ SQS ARNs: arn:minio:sqs:us-east-1:1:redis arn:minio:sqs:us-east-1:2:redis
- Minio Object Storage:
- http://127.0.0.1:9000
- http://10.0.0.3:9000
+ Browser Access:
+ http://192.168.1.106:9000 http://172.23.0.1:9000
- Minio Browser:
- http://127.0.0.1:9000
- http://10.0.0.3:9000
+ Command-line Access: https://docs.minio.io/docs/minio-client-quickstart-guide
+ $ mc config host add myminio http://192.168.1.106:9000 USWUXHGYZQYFYFFIT3RE MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+
+ Object API (Amazon S3 compatible):
+ Go: https://docs.minio.io/docs/golang-client-quickstart-guide
+ Java: https://docs.minio.io/docs/java-client-quickstart-guide
+ Python: https://docs.minio.io/docs/python-client-quickstart-guide
+ JavaScript: https://docs.minio.io/docs/javascript-client-quickstart-guide
+ .NET: https://docs.minio.io/docs/dotnet-client-quickstart-guide
+
+ Drive Capacity: 26 GiB Free, 165 GiB Total
These details need to go into rclone config like this. Note that it is
important to put the region in as stated above.
env_auth> 1
- access_key_id> WLGDGYAQYIGI833EV05A
- secret_access_key> BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+ access_key_id> USWUXHGYZQYFYFFIT3RE
+ secret_access_key> MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region> us-east-1
- endpoint> http://10.0.0.3:9000
+ endpoint> http://192.168.1.106:9000
location_constraint>
server_side_encryption>
@@ -3346,21 +3487,16 @@ Which makes the config file look like this
[minio]
env_auth = false
- access_key_id = WLGDGYAQYIGI833EV05A
- secret_access_key = BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+ access_key_id = USWUXHGYZQYFYFFIT3RE
+ secret_access_key = MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region = us-east-1
- endpoint = http://10.0.0.3:9000
+ endpoint = http://192.168.1.106:9000
location_constraint =
server_side_encryption =
-Minio doesn't support all the features of S3 yet. In particular it
-doesn't support MD5 checksums (ETags) or metadata. This means rclone
-can't check MD5SUMs or store the modified date. However you can work
-around this with the --size-only flag of rclone.
-
So once set up, for example to copy files into a bucket
- rclone --size-only copy /path/to/files minio:bucket
+ rclone copy /path/to/files minio:bucket
Swift
@@ -3769,6 +3905,52 @@ This will guide you through an interactive setup process:
5 / Project team owners get OWNER access, and all Users get WRITER access.
\ "publicReadWrite"
bucket_acl> 2
+ Location for the newly created buckets.
+ Choose a number from below, or type in your own value
+ 1 / Empty for default location (US).
+ \ ""
+ 2 / Multi-regional location for Asia.
+ \ "asia"
+ 3 / Multi-regional location for Europe.
+ \ "eu"
+ 4 / Multi-regional location for United States.
+ \ "us"
+ 5 / Taiwan.
+ \ "asia-east1"
+ 6 / Tokyo.
+ \ "asia-northeast1"
+ 7 / Singapore.
+ \ "asia-southeast1"
+ 8 / Sydney.
+ \ "australia-southeast1"
+ 9 / Belgium.
+ \ "europe-west1"
+ 10 / London.
+ \ "europe-west2"
+ 11 / Iowa.
+ \ "us-central1"
+ 12 / South Carolina.
+ \ "us-east1"
+ 13 / Northern Virginia.
+ \ "us-east4"
+ 14 / Oregon.
+ \ "us-west1"
+ location> 12
+ The storage class to use when storing objects in Google Cloud Storage.
+ Choose a number from below, or type in your own value
+ 1 / Default
+ \ ""
+ 2 / Multi-regional storage class
+ \ "MULTI_REGIONAL"
+ 3 / Regional storage class
+ \ "REGIONAL"
+ 4 / Nearline storage class
+ \ "NEARLINE"
+ 5 / Coldline storage class
+ \ "COLDLINE"
+ 6 / Durable reduced availability storage class
+ \ "DURABLE_REDUCED_AVAILABILITY"
+ storage_class> 5
Remote config
Use auto config?
* Say Y if not sure
@@ -4565,7 +4747,7 @@ Here are the command line options specific to this cloud storage system.
When uploading large files chunk the file into this size. Note that
these chunks are buffered in memory and there might be a maximum of
---transfers chunks in progress at once. 100,000,000 Bytes is the minimim
+--transfers chunks in progress at once. 5,000,000 Bytes is the minimum
size (default 96M).
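
As a rough sketch of the memory implication described above (an
estimate, not measured behaviour), the buffer usage is about the chunk
size multiplied by --transfers:

```python
# Rough upper bound on memory used to buffer B2 chunked uploads:
# each of the --transfers uploads in flight may hold one chunk in memory.
def chunk_buffer_bytes(chunk_size, transfers=4):
    return chunk_size * transfers

# The default 96M chunk size with the default 4 transfers:
print(chunk_buffer_bytes(96 * 1024 ** 2))  # 402653184 bytes, i.e. 384 MiB
```
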
--b2-upload-cutoff=SIZE
@@ -4747,17 +4929,13 @@ Here is an example of making a SFTP configuration. First run
rclone config
-This will guide you through an interactive setup process. You will need
-your account number (a short hex number) and key (a long hex number)
-which you can get from the SFTP control panel.
+This will guide you through an interactive setup process.
No remotes found - make a new one
n) New remote
- r) Rename remote
- c) Copy remote
s) Set configuration password
q) Quit config
- n/r/c/s/q> n
+ n/s/q> n
name> remote
Type of storage to configure.
Choose a number from below, or type in your own value
@@ -4789,6 +4967,8 @@ which you can get from the SFTP control panel.
\ "sftp"
14 / Yandex Disk
\ "yandex"
+ 15 / http Connection
+ \ "http"
Storage> sftp
SSH host to connect to
Choose a number from below, or type in your own value
@@ -4796,21 +4976,174 @@ which you can get from the SFTP control panel.
\ "example.com"
host> example.com
SSH username, leave blank for current username, ncw
- user>
+ user> sftpuser
SSH port, leave blank to use default (22)
port>
- SSH password, leave blank to use ssh-agent
+ SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank
y/g/n> n
+ Path to unencrypted PEM-encoded private key file, leave blank to use ssh-agent.
+ key_file>
Remote config
--------------------
[remote]
host = example.com
- user =
+ user = sftpuser
port =
pass =
+ key_file =
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+This remote is called remote and can now be used like this
+
+See all directories in the home directory
+
+ rclone lsd remote:
+
+Make a new directory
+
+ rclone mkdir remote:path/to/directory
+
+List the contents of a directory
+
+ rclone ls remote:path/to/directory
+
+Sync /home/local/directory to the remote directory, deleting any excess
+files in the directory.
+
+ rclone sync /home/local/directory remote:directory
+
+SSH Authentication
+
+The SFTP remote supports 3 authentication methods
+
+- Password
+- Key file
+- ssh-agent
+
+Key files should be unencrypted PEM-encoded private key files. For
+instance /home/$USER/.ssh/id_rsa.
+
+If you don't specify pass or key_file then it will attempt to contact an
+ssh-agent.
+
+ssh-agent on macOS
+
+Note that there seem to be various problems with using an ssh-agent on
+macOS due to recent changes in the OS. The most effective work-around
+seems to be to start an ssh-agent in each session, eg
+
+ eval `ssh-agent -s` && ssh-add -A
+
+And then at the end of the session
+
+ eval `ssh-agent -k`
+
+These commands can be used in scripts of course.
+
+Modified time
+
+Modified times are stored on the server to 1 second precision.
+
+Modified times are used in syncing and are fully supported.
+
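Since only whole seconds survive on the server, comparisons during sync
effectively tolerate sub-second differences. A sketch of that idea
(this mirrors the modify-window concept, not rclone's actual code):

```python
def same_modtime(local_ts, remote_ts, window=1.0):
    # Treat two timestamps as equal if they differ by less than the
    # server's time precision (1 second for SFTP).
    return abs(local_ts - remote_ts) < window

print(same_modtime(1500000000.73, 1500000000.0))  # True: within precision
print(same_modtime(1500000002.0, 1500000000.0))   # False: genuinely newer
```
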
+Limitations
+
+SFTP does not support any checksums.
+
+The only ssh agent supported under Windows is PuTTY's Pageant.
+
+SFTP isn't supported under plan9 until this issue is fixed.
+
+Note that since SFTP isn't HTTP based the following flags don't work
+with it: --dump-headers, --dump-bodies, --dump-auth
+
+Note that --timeout isn't supported (but --contimeout is).
+
+
+FTP
+
+FTP is the File Transfer Protocol. FTP support is provided using the
+github.com/jlaffaye/ftp package.
+
+Here is an example of making an FTP configuration. First run
+
+ rclone config
+
+This will guide you through an interactive setup process. An FTP remote
+only needs a host together with a username and a password. With an
+anonymous FTP server, you will need to use anonymous as the username and
+your email address as the password.
+
+ No remotes found - make a new one
+ n) New remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ n/r/c/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+ 10 / Local Disk
+ \ "local"
+ 11 / Microsoft OneDrive
+ \ "onedrive"
+ 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 13 / SSH/SFTP Connection
+ \ "sftp"
+ 14 / Yandex Disk
+ \ "yandex"
+ Storage> ftp
+ FTP host to connect to
+ Choose a number from below, or type in your own value
+ 1 / Connect to ftp.example.com
+ \ "ftp.example.com"
+ host> ftp.example.com
+ FTP username, leave blank for current username, ncw
+ user>
+ FTP port, leave blank to use default (21)
+ port>
+ FTP password
+ y) Yes type in my own password
+ g) Generate random password
+ y/g> y
+ Enter the password:
+ password:
+ Confirm the password:
+ password:
+ Remote config
+ --------------------
+ [remote]
+ host = ftp.example.com
+ user =
+ port =
+ pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK
e) Edit this remote
@@ -4838,23 +5171,148 @@ files in the directory.
Modified time
-Modified times are stored on the server to 1 second precision.
+FTP does not support modified times. Any times you see on the server
+will be the time of upload.
-Modified times are used in syncing and are fully supported.
+Checksums
+
+FTP does not support any checksums.
Limitations
-SFTP does not support any checksums.
-
-The only ssh agent supported under Windows is Putty's pagent.
-
-SFTP isn't supported under plan9 until this issue is fixed.
-
-Note that since SFTP isn't HTTP based the following flags don't work
-with it: --dump-headers, --dump-bodies, --dump-auth
+Note that since FTP isn't HTTP based the following flags don't work with
+it: --dump-headers, --dump-bodies, --dump-auth
Note that --timeout isn't supported (but --contimeout is).
+FTP could support server side move but doesn't yet.
+
+
+HTTP
+
+The HTTP remote is a read only remote for reading files off a webserver.
+The webserver should provide file listings which rclone will read and
+turn into a remote. This has been tested with common webservers such as
+Apache/Nginx/Caddy and will likely work with file listings from most web
+servers. (If it doesn't then please file an issue, or send a pull
+request!)
+
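Under the hood this amounts to scraping the links out of the server's
HTML index pages; a minimal sketch of the idea (illustrative only, not
rclone's actual parser):

```python
from html.parser import HTMLParser

class LinkLister(HTMLParser):
    """Collect href targets from a directory index page (sketch only)."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            # Skip the parent-directory and root links.
            if href and href not in ("../", "/"):
                self.links.append(href)

index = ('<html><body><a href="../">Parent</a>'
         '<a href="file1.txt">file1.txt</a>'
         '<a href="subdir/">subdir/</a></body></html>')
parser = LinkLister()
parser.feed(index)
print(parser.links)  # ['file1.txt', 'subdir/']
```
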
+Paths are specified as remote: or remote:path/to/dir.
+
+Here is an example of how to make a remote called remote. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Dropbox
+ \ "dropbox"
+ 5 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 6 / FTP Connection
+ \ "ftp"
+ 7 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 8 / Google Drive
+ \ "drive"
+ 9 / Hubic
+ \ "hubic"
+ 10 / Local Disk
+ \ "local"
+ 11 / Microsoft OneDrive
+ \ "onedrive"
+ 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 13 / SSH/SFTP Connection
+ \ "sftp"
+ 14 / Yandex Disk
+ \ "yandex"
+ 15 / http Connection
+ \ "http"
+ Storage> http
+ URL of http host to connect to
+ Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "https://example.com"
+ url> https://beta.rclone.org
+ Remote config
+ --------------------
+ [remote]
+ url = https://beta.rclone.org
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+ Current remotes:
+
+ Name Type
+ ==== ====
+ remote http
+
+ e) Edit existing remote
+ n) New remote
+ d) Delete remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ e/n/d/r/c/s/q> q
+
+This remote is called remote and can now be used like this
+
+See all the top level directories
+
+ rclone lsd remote:
+
+List the contents of a directory
+
+ rclone ls remote:directory
+
+Sync the remote directory to /home/local/directory, deleting any excess
+files.
+
+ rclone sync remote:directory /home/local/directory
+
+Read only
+
+This remote is read only - you can't upload files to an HTTP server.
+
+Modified time
+
+Most HTTP servers store time accurate to 1 second.
+
+Checksum
+
+No checksums are stored.
+
+Usage without a config file
+
+Note that since only two environment variables need to be set, it is easy
+to use without a config file like this.
+
+ RCLONE_CONFIG_ZZ_TYPE=http RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org rclone lsd zz:
+
+Or if you prefer
+
+ export RCLONE_CONFIG_ZZ_TYPE=http
+ export RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+ rclone lsd zz:
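
The naming convention is RCLONE_CONFIG_<NAME>_<KEY>, with the remote
name upper-cased. The mapping can be sketched like this (a hypothetical
helper showing the convention, not rclone's own code):

```python
import os

def remote_config_from_env(name, environ=None):
    """Collect RCLONE_CONFIG_<NAME>_<KEY> variables into a config dict.

    Illustrative helper showing the naming convention only -- this is
    not rclone's actual implementation.
    """
    environ = os.environ if environ is None else environ
    prefix = "RCLONE_CONFIG_%s_" % name.upper()
    return {key[len(prefix):].lower(): value
            for key, value in environ.items()
            if key.startswith(prefix)}

env = {
    "RCLONE_CONFIG_ZZ_TYPE": "http",
    "RCLONE_CONFIG_ZZ_URL": "https://beta.rclone.org",
}
print(remote_config_from_env("zz", env))
# → {'type': 'http', 'url': 'https://beta.rclone.org'}
```
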
+
Crypt
@@ -5251,127 +5709,6 @@ encrypted data. For full protection agains this you should always use a
salt.
-FTP
-
-FTP is the File Transfer Protocol. FTP support is provided using the
-github.com/jlaffaye/ftp package.
-
-Here is an example of making an FTP configuration. First run
-
- rclone config
-
-This will guide you through an interactive setup process. An FTP remote
-only needs a host together with and a username and a password. With
-anonymous FTP server, you will need to use anonymous as username and
-your email address as the password.
-
- No remotes found - make a new one
- n) New remote
- r) Rename remote
- c) Copy remote
- s) Set configuration password
- q) Quit config
- n/r/c/s/q> n
- name> remote
- Type of storage to configure.
- Choose a number from below, or type in your own value
- 1 / Amazon Drive
- \ "amazon cloud drive"
- 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
- \ "s3"
- 3 / Backblaze B2
- \ "b2"
- 4 / Dropbox
- \ "dropbox"
- 5 / Encrypt/Decrypt a remote
- \ "crypt"
- 6 / FTP Connection
- \ "ftp"
- 7 / Google Cloud Storage (this is not Google Drive)
- \ "google cloud storage"
- 8 / Google Drive
- \ "drive"
- 9 / Hubic
- \ "hubic"
- 10 / Local Disk
- \ "local"
- 11 / Microsoft OneDrive
- \ "onedrive"
- 12 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
- \ "swift"
- 13 / SSH/SFTP Connection
- \ "sftp"
- 14 / Yandex Disk
- \ "yandex"
- Storage> ftp
- FTP host to connect to
- Choose a number from below, or type in your own value
- 1 / Connect to ftp.example.com
- \ "ftp.example.com"
- host> ftp.example.com
- FTP username, leave blank for current username, ncw
- user>
- FTP port, leave blank to use default (21)
- port>
- FTP password
- y) Yes type in my own password
- g) Generate random password
- y/g> y
- Enter the password:
- password:
- Confirm the password:
- password:
- Remote config
- --------------------
- [remote]
- host = ftp.example.com
- user =
- port =
- pass = *** ENCRYPTED ***
- --------------------
- y) Yes this is OK
- e) Edit this remote
- d) Delete this remote
- y/e/d> y
-
-This remote is called remote and can now be used like this
-
-See all directories in the home directory
-
- rclone lsd remote:
-
-Make a new directory
-
- rclone mkdir remote:path/to/directory
-
-List the contents of a directory
-
- rclone ls remote:path/to/directory
-
-Sync /home/local/directory to the remote directory, deleting any excess
-files in the directory.
-
- rclone sync /home/local/directory remote:directory
-
-Modified time
-
-FTP does not support modified times. Any times you see on the server
-will be time of upload.
-
-Checksums
-
-FTP does not support any checksums.
-
-Limitations
-
-Note that since FTP isn't HTTP based the following flags don't work with
-it: --dump-headers, --dump-bodies, --dump-auth
-
-Note that --timeout isn't supported (but --contimeout is).
-
-FTP could support server side move but doesn't yet.
-
-
Local Filesystem
Local paths are specified as normal filesystem paths, eg
@@ -5527,6 +5864,127 @@ it isn't supported (eg Windows) it will not appear as an valid flag.
Changelog
+- v1.37 - 2017-07-22
+ - New backends
+ - FTP - thanks to Antonio Messina
+ - HTTP - thanks to Vasiliy Tolstov
+ - New commands
+ - rclone ncdu - for exploring a remote with a text based
+ user interface.
+ - rclone lsjson - for listing with a machine readable output
+ - rclone dbhashsum - to show Dropbox style hashes of files (local
+ or Dropbox)
+ - New Features
+ - Implement --fast-list flag
+ - This allows remotes to list recursively if they can
+ - This uses fewer transactions (important if you pay for them)
+ - This may or may not be quicker
+ - This will use more memory as it has to hold the listing in
+ memory
+ - --old-sync-method deprecated - the remaining uses are
+ covered by --fast-list
+ - This involved a major re-write of all the listing code
+ - Add --tpslimit and --tpslimit-burst to limit transactions per
+ second
+ - this is useful in conjunction with rclone mount to limit
+ external apps
+ - Add --stats-log-level so you can see --stats without -v
+ - Print password prompts to stderr - Hraban Luyat
+ - Warn about duplicate files when syncing
+ - Oauth improvements
+ - allow auth_url and token_url to be set in the config file
+ - Print redirection URI if using own credentials.
+ - Don't Mkdir at the start of sync to save transactions
+ - Compile
+ - Update build to go1.8.3
+ - Require go1.6 for building rclone
+ - Compile 386 builds with "GO386=387" for maximum compatibility
+ - Bug Fixes
+ - Fix menu selection when no remotes
+ - Config saving reworked to not kill the file if disk gets full
+ - Don't delete remote if name does not change while renaming
+ - moveto, copyto: report transfers and checks as per move and copy
+ - Local
+ - Add --local-no-unicode-normalization flag - Bob Potter
+ - Mount
+ - Now supported on Windows using cgofuse and WinFsp - thanks to
+ Bill Zissimopoulos for much help
+ - Compare checksums on upload/download via FUSE
+ - Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM -
+ Jérôme Vizcaino
+ - On read only open of file, make open pending until first read
+ - Make --read-only reject modify operations
+ - Implement ModTime via FUSE for remotes that support it
+ - Allow modTime to be changed even before all writers are closed
+ - Fix panic on renames
+ - Fix hang on errored upload
+ - Crypt
+ - Report the name:root as specified by the user
+ - Add an "obfuscate" option for filename encryption - Stephen
+ Harris
+ - Amazon Drive
+ - Fix initialization order for token renewer
+ - Remove revoked credentials, allow oauth proxy config and update
+ docs
+ - B2
+ - Reduce minimum chunk size to 5MB
+ - Drive
+ - Add team drive support
+ - Reduce bandwidth by adding fields for partial responses - Martin
+ Kristensen
+ - Implement --drive-shared-with-me flag to view shared with me
+ files - Danny Tsai
+ - Add --drive-trashed-only to read only the files in the trash
+ - Remove obsolete --drive-full-list
+ - Add missing seek to start on retries of chunked uploads
+ - Fix stats accounting for upload
+ - Convert / in names to a unicode equivalent (／)
+ - Poll for Google Drive changes when mounted
+ - OneDrive
+ - Fix the uploading of files with spaces
+ - Fix initialization order for token renewer
+ - Display speeds accurately when uploading - Yoni Jah
+ - Swap to using http://localhost:53682/ as redirect URL - Michael
+ Ledin
+ - Retry on token expired error, reset upload body on retry - Yoni
+ Jah
+ - Google Cloud Storage
+ - Add ability to specify location and storage class via config and
+ command line - thanks gdm85
+ - Create container if necessary on server side copy
+ - Increase directory listing chunk to 1000 to increase performance
+ - Obtain a refresh token for GCS - Steven Lu
+ - Yandex
+ - Fix the name reported in log messages (was empty)
+ - Correct error return for listing empty directory
+ - Dropbox
+ - Rewritten to use the v2 API
+ - Now supports ModTime
+ - Can only set by uploading the file again
+ - If you uploaded with an old rclone, rclone may upload
+ everything again
+ - Use --size-only or --checksum to avoid this
+ - Now supports the Dropbox content hashing scheme
+ - Now supports low level retries
+ - S3
+ - Work around eventual consistency in bucket creation
+ - Create container if necessary on server side copy
+ - Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar
+ Ahmed
+ - Swift, Hubic
+ - Fix zero length directory markers showing in the subdirectory
+ listing
+ - this caused lots of duplicate transfers
+ - Fix paged directory listings
+ - this caused duplicate directory errors
+ - Create container if necessary on server side copy
+ - Increase directory listing chunk to 1000 to increase performance
+ - Make sensible error if the user forgets the container
+ - SFTP
+ - Add support for using ssh key files
+ - Fix under Windows
+ - Fix ssh agent on Windows
+ - Adapt to latest version of library - Igor Kharin
- v1.36 - 2017-03-18
- New Features
- SFTP remote (Jack Schmidt)
@@ -6484,6 +6942,11 @@ Contributors
- Ruwbin hubus12345@gmail.com
- Fabian Möller fabianm88@gmail.com
- Edward Q. Bridges github@eqbridges.com
+- Vasiliy Tolstov v.tolstov@selfip.ru
+- Harshavardhana harsha@minio.io
+- sainaen sainaen@gmail.com
+- gdm85 gdm85@users.noreply.github.com
+- Yaroslav Halchenko debian@onerussian.com
diff --git a/Makefile b/Makefile
index cb603d74a..04d420c22 100644
--- a/Makefile
+++ b/Makefile
@@ -129,6 +129,12 @@ travis_beta:
rclone --config bin/travis.rclone.conf -v copy --include '*beta-latest*' build/ memstore:beta-rclone-org
@echo Beta release ready at $(BETA_URL)
+# Fetch the windows builds from appveyor
+fetch_windows:
+ rclone -v copy --include 'rclone-v*-windows-*.zip' memstore:beta-rclone-org build/$(TAG) build/
+ cp -av rclone-v*-windows-386.zip rclone-current-windows-386.zip
+ cp -av rclone-v*-windows-amd64.zip rclone-current-windows-amd64.zip
+
serve: website
cd docs && hugo server -v -w
diff --git a/RELEASE.md b/RELEASE.md
index 8ad6fe2c4..d8eb3a3b1 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -16,10 +16,12 @@ Making a release
* make retag
* # Set the GOPATH for a current stable go compiler
* make cross
- * make upload
- * make upload_website
* git push --tags origin master
* git push --tags origin master:stable # update the stable branch for packager.io
+ * # Wait for the appveyor and travis builds to complete then fetch the windows binaries from appveyor
+ * make fetch_windows
+ * make upload
+ * make upload_website
* make upload_github
* make startdev
* # announce with forum post, twitter post, G+ post
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index b50266ce5..2d315c0ec 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -1,12 +1,115 @@
---
title: "Documentation"
description: "Rclone Changelog"
-date: "2016-11-06"
+date: "2017-07-22"
---
Changelog
---------
+ * v1.37 - 2017-07-22
+ * New backends
+ * FTP - thanks to Antonio Messina
+ * HTTP - thanks to Vasiliy Tolstov
+ * New commands
+ * rclone ncdu - for exploring a remote with a text based user interface.
+ * rclone lsjson - for listing with a machine readable output
+ * rclone dbhashsum - to show Dropbox style hashes of files (local or Dropbox)
+ * New Features
+ * Implement --fast-list flag
+ * This allows remotes to list recursively if they can
+ * This uses fewer transactions (important if you pay for them)
+ * This may or may not be quicker
+ * This will use more memory as it has to hold the listing in memory
+ * --old-sync-method deprecated - the remaining uses are covered by --fast-list
+ * This involved a major re-write of all the listing code
+ * Add --tpslimit and --tpslimit-burst to limit transactions per second
+ * this is useful in conjunction with `rclone mount` to limit external apps
+ * Add --stats-log-level so you can see --stats without -v
+ * Print password prompts to stderr - Hraban Luyat
+ * Warn about duplicate files when syncing
+ * Oauth improvements
+ * allow auth_url and token_url to be set in the config file
+ * Print redirection URI if using own credentials.
+ * Don't Mkdir at the start of sync to save transactions
+ * Compile
+ * Update build to go1.8.3
+ * Require go1.6 for building rclone
+ * Compile 386 builds with "GO386=387" for maximum compatibility
+ * Bug Fixes
+ * Fix menu selection when no remotes
+ * Config saving reworked to not kill the file if disk gets full
+ * Don't delete remote if name does not change while renaming
+ * moveto, copyto: report transfers and checks as per move and copy
+ * Local
+ * Add --local-no-unicode-normalization flag - Bob Potter
+ * Mount
+ * Now supported on Windows using cgofuse and WinFsp - thanks to Bill Zissimopoulos for much help
+ * Compare checksums on upload/download via FUSE
+ * Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM - Jérôme Vizcaino
+ * On read only open of file, make open pending until first read
+ * Make --read-only reject modify operations
+ * Implement ModTime via FUSE for remotes that support it
+ * Allow modTime to be changed even before all writers are closed
+ * Fix panic on renames
+ * Fix hang on errored upload
+ * Crypt
+ * Report the name:root as specified by the user
+ * Add an "obfuscate" option for filename encryption - Stephen Harris
+ * Amazon Drive
+ * Fix initialization order for token renewer
+ * Remove revoked credentials, allow oauth proxy config and update docs
+ * B2
+ * Reduce minimum chunk size to 5MB
+ * Drive
+ * Add team drive support
+ * Reduce bandwidth by adding fields for partial responses - Martin Kristensen
+ * Implement --drive-shared-with-me flag to view shared with me files - Danny Tsai
+ * Add --drive-trashed-only to read only the files in the trash
+ * Remove obsolete --drive-full-list
+ * Add missing seek to start on retries of chunked uploads
+ * Fix stats accounting for upload
+ * Convert / in names to a unicode equivalent (／)
+ * Poll for Google Drive changes when mounted
+ * OneDrive
+ * Fix the uploading of files with spaces
+ * Fix initialization order for token renewer
+ * Display speeds accurately when uploading - Yoni Jah
+ * Swap to using http://localhost:53682/ as redirect URL - Michael Ledin
+ * Retry on token expired error, reset upload body on retry - Yoni Jah
+ * Google Cloud Storage
+ * Add ability to specify location and storage class via config and command line - thanks gdm85
+ * Create container if necessary on server side copy
+ * Increase directory listing chunk to 1000 to increase performance
+ * Obtain a refresh token for GCS - Steven Lu
+ * Yandex
+ * Fix the name reported in log messages (was empty)
+ * Correct error return for listing empty directory
+ * Dropbox
+ * Rewritten to use the v2 API
+ * Now supports ModTime
+ * Can only set by uploading the file again
+ * If you uploaded with an old rclone, rclone may upload everything again
+ * Use `--size-only` or `--checksum` to avoid this
+ * Now supports the Dropbox content hashing scheme
+ * Now supports low level retries
+ * S3
+ * Work around eventual consistency in bucket creation
+ * Create container if necessary on server side copy
+ * Add us-east-2 (Ohio) and eu-west-2 (London) S3 regions - Zahiar Ahmed
+ * Swift, Hubic
+ * Fix zero length directory markers showing in the subdirectory listing
+ * this caused lots of duplicate transfers
+ * Fix paged directory listings
+ * this caused duplicate directory errors
+ * Create container if necessary on server side copy
+ * Increase directory listing chunk to 1000 to increase performance
+ * Make sensible error if the user forgets the container
+ * SFTP
+ * Add support for using ssh key files
+ * Fix under Windows
+ * Fix ssh agent on Windows
+ * Adapt to latest version of library - Igor Kharin
* v1.36 - 2017-03-18
* New Features
* SFTP remote (Jack Schmidt)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 083a75ff2..6f814a6f2 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -1,12 +1,12 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
-Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+Sync files and directories to and from local and remote object stores - v1.37
### Synopsis
@@ -27,6 +27,7 @@ from various cloud storage systems and using file transfer services, such as:
* Yandex Disk
* SFTP
* FTP
+ * HTTP
* The local filesystem
Features
@@ -73,13 +74,13 @@ rclone [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -94,6 +95,8 @@ rclone [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -102,7 +105,7 @@ rclone [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -125,12 +128,15 @@ rclone [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -146,7 +152,7 @@ rclone [flags]
* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
* [rclone copy](/commands/rclone_copy/) - Copy files from source to dest, skipping already copied
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
-* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integritity of a crypted remote.
+* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path.
* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
@@ -172,4 +178,4 @@ rclone [flags]
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
* [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index 58bc15dc9..394b239c7 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -44,13 +44,13 @@ rclone authorize
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -65,6 +65,8 @@ rclone authorize
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -73,7 +75,7 @@ rclone authorize
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -96,12 +98,15 @@ rclone authorize
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -109,6 +114,6 @@ rclone authorize
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index 0c60667a4..e743491db 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -70,13 +70,13 @@ rclone cat remote:path [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -91,6 +91,8 @@ rclone cat remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -99,7 +101,7 @@ rclone cat remote:path [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -122,12 +124,15 @@ rclone cat remote:path [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -135,6 +140,6 @@ rclone cat remote:path [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index 8e637eb58..1a8414cad 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -59,13 +59,13 @@ rclone check source:path dest:path [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -80,6 +80,8 @@ rclone check source:path dest:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -88,7 +90,7 @@ rclone check source:path dest:path [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -111,12 +113,15 @@ rclone check source:path dest:path [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -124,6 +129,6 @@ rclone check source:path dest:path [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md
index 5bac515bf..af771ad07 100644
--- a/docs/content/commands/rclone_cleanup.md
+++ b/docs/content/commands/rclone_cleanup.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -44,13 +44,13 @@ rclone cleanup remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -65,6 +65,8 @@ rclone cleanup remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -73,7 +75,7 @@ rclone cleanup remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -96,12 +98,15 @@ rclone cleanup remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -109,6 +114,6 @@ rclone cleanup remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index 3ca3ac946..399c28d99 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -41,13 +41,13 @@ rclone config
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone config
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone config
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone config
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone config
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index d4df4d441..453be7085 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -80,13 +80,13 @@ rclone copy source:path dest:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -101,6 +101,8 @@ rclone copy source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -109,7 +111,7 @@ rclone copy source:path dest:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -132,12 +134,15 @@ rclone copy source:path dest:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -145,6 +150,6 @@ rclone copy source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index 23ee1be35..2cb9bf1f2 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -67,13 +67,13 @@ rclone copyto source:path dest:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -88,6 +88,8 @@ rclone copyto source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -96,7 +98,7 @@ rclone copyto source:path dest:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -119,12 +121,15 @@ rclone copyto source:path dest:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -132,6 +137,6 @@ rclone copyto source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index d94f9eb08..7072be5b2 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -1,12 +1,12 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
---
## rclone cryptcheck
-Cryptcheck checks the integritity of a crypted remote.
+Cryptcheck checks the integrity of a crypted remote.
### Synopsis
@@ -64,13 +64,13 @@ rclone cryptcheck remote:path cryptedremote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -85,6 +85,8 @@ rclone cryptcheck remote:path cryptedremote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -93,7 +95,7 @@ rclone cryptcheck remote:path cryptedremote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -116,12 +118,15 @@ rclone cryptcheck remote:path cryptedremote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -129,6 +134,6 @@ rclone cryptcheck remote:path cryptedremote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md
index 70d97ba5c..a6dcf1654 100644
--- a/docs/content/commands/rclone_dbhashsum.md
+++ b/docs/content/commands/rclone_dbhashsum.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
@@ -46,13 +46,13 @@ rclone dbhashsum remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -67,6 +67,8 @@ rclone dbhashsum remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -75,7 +77,7 @@ rclone dbhashsum remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -98,12 +100,15 @@ rclone dbhashsum remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -111,6 +116,6 @@ rclone dbhashsum remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
index 8392e3094..f2e0c3893 100644
--- a/docs/content/commands/rclone_dedupe.md
+++ b/docs/content/commands/rclone_dedupe.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
@@ -122,13 +122,13 @@ rclone dedupe [mode] remote:path [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -143,6 +143,8 @@ rclone dedupe [mode] remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -151,7 +153,7 @@ rclone dedupe [mode] remote:path [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -174,12 +176,15 @@ rclone dedupe [mode] remote:path [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -187,6 +192,6 @@ rclone dedupe [mode] remote:path [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index 5c1c32dab..5e7159dde 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -58,13 +58,13 @@ rclone delete remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -79,6 +79,8 @@ rclone delete remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -87,7 +89,7 @@ rclone delete remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -110,12 +112,15 @@ rclone delete remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -123,6 +128,6 @@ rclone delete remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
index 77f08b228..38790631a 100644
--- a/docs/content/commands/rclone_genautocomplete.md
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -56,13 +56,13 @@ rclone genautocomplete [output_file]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -77,6 +77,8 @@ rclone genautocomplete [output_file]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -85,7 +87,7 @@ rclone genautocomplete [output_file]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -108,12 +110,15 @@ rclone genautocomplete [output_file]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -121,6 +126,6 @@ rclone genautocomplete [output_file]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index 0059dbe71..e6b45d528 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -50,13 +50,13 @@ rclone gendocs output_directory [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -71,6 +71,8 @@ rclone gendocs output_directory [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -79,7 +81,7 @@ rclone gendocs output_directory [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -102,12 +104,15 @@ rclone gendocs output_directory [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -115,6 +120,6 @@ rclone gendocs output_directory [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index 11b2e1fd9..15b1a2d0e 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -51,13 +51,13 @@ rclone listremotes [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -72,6 +72,8 @@ rclone listremotes [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -80,7 +82,7 @@ rclone listremotes [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -103,12 +105,15 @@ rclone listremotes [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -116,6 +121,6 @@ rclone listremotes [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 5ed286409..7bcaa89c1 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -41,13 +41,13 @@ rclone ls remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone ls remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone ls remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone ls remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone ls remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 75c4a3752..4957775ee 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -41,13 +41,13 @@ rclone lsd remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone lsd remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone lsd remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone lsd remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone lsd remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index 17440f53e..ec5e73613 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -74,13 +74,13 @@ rclone lsjson remote:path [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -95,6 +95,8 @@ rclone lsjson remote:path [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -103,7 +105,7 @@ rclone lsjson remote:path [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -126,12 +128,15 @@ rclone lsjson remote:path [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -139,6 +144,6 @@ rclone lsjson remote:path [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index 80dbcf793..28f258596 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -41,13 +41,13 @@ rclone lsl remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone lsl remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone lsl remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone lsl remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone lsl remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index 6425609e7..ef4e8329a 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -44,13 +44,13 @@ rclone md5sum remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -65,6 +65,8 @@ rclone md5sum remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -73,7 +75,7 @@ rclone md5sum remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -96,12 +98,15 @@ rclone md5sum remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -109,6 +114,6 @@ rclone md5sum remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index 9aa52a15c..837a3b8e9 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -41,13 +41,13 @@ rclone mkdir remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone mkdir remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone mkdir remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone mkdir remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone mkdir remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index 3833e9674..2b8c96abb 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -12,8 +12,9 @@ Mount the remote as a mountpoint. **EXPERIMENTAL**
-rclone mount allows Linux, FreeBSD and macOS to mount any of Rclone's
-cloud storage systems as a file system with FUSE.
+rclone mount allows Linux, FreeBSD, macOS and Windows to
+mount any of Rclone's cloud storage systems as a file system with
+FUSE.
This is **EXPERIMENTAL** - use with care.
@@ -23,6 +24,10 @@ Start the mount like this
rclone mount remote:path/to/files /path/to/local/mount
+Or on Windows like this, where X: is an unused drive letter
+
+ rclone mount remote:path/to/files X:
+
When the program ends, either via Ctrl+C or receiving a SIGINT or SIGTERM signal,
the mount is automatically stopped.
@@ -48,7 +53,7 @@ None of these support the concept of directories, so empty
directories will have a tendency to disappear once they fall out of
the directory cache.
-Only supported on Linux, FreeBSD and OS X at the moment.
+Only supported on Linux, FreeBSD, OS X and Windows at the moment.
### rclone mount vs rclone sync/copy ##
@@ -100,15 +105,17 @@ rclone mount remote:path /path/to/mountpoint [flags]
--debug-fuse Debug the FUSE internals - needs -v.
--default-permissions Makes kernel enforce access control based on the file mode.
--dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed directly to libfuse/WinFsp. Repeat if required.
--gid uint32 Override the gid field set by the filesystem. (default 502)
--max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
--no-checksum Don't compare checksums on up/download.
--no-modtime Don't read/write the modification time (can speed things up).
--no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
--poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
--read-only Mount read-only.
--uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem. (default 2)
+ --umask int Override the permission bits set by the filesystem.
--write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
@@ -136,13 +143,13 @@ rclone mount remote:path /path/to/mountpoint [flags]
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -157,6 +164,8 @@ rclone mount remote:path /path/to/mountpoint [flags]
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -165,7 +174,7 @@ rclone mount remote:path /path/to/mountpoint [flags]
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -188,12 +197,15 @@ rclone mount remote:path /path/to/mountpoint [flags]
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -201,6 +213,6 @@ rclone mount remote:path /path/to/mountpoint [flags]
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index b985116fc..5c827c8d3 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -58,13 +58,13 @@ rclone move source:path dest:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -79,6 +79,8 @@ rclone move source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -87,7 +89,7 @@ rclone move source:path dest:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -110,12 +112,15 @@ rclone move source:path dest:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -123,6 +128,6 @@ rclone move source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index e4e008920..fc1054f19 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -70,13 +70,13 @@ rclone moveto source:path dest:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -91,6 +91,8 @@ rclone moveto source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -99,7 +101,7 @@ rclone moveto source:path dest:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -122,12 +124,15 @@ rclone moveto source:path dest:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -135,6 +140,6 @@ rclone moveto source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index 54e116934..080bb3120 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -65,13 +65,13 @@ rclone ncdu remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -86,6 +86,8 @@ rclone ncdu remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -94,7 +96,7 @@ rclone ncdu remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -117,12 +119,15 @@ rclone ncdu remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -130,6 +135,6 @@ rclone ncdu remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index a28662331..71cd631ec 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -41,13 +41,13 @@ rclone obscure password
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone obscure password
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone obscure password
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone obscure password
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone obscure password
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index e9ea9d5bf..88a7e9ef8 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -45,13 +45,13 @@ rclone purge remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -66,6 +66,8 @@ rclone purge remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -74,7 +76,7 @@ rclone purge remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -97,12 +99,15 @@ rclone purge remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -110,6 +115,6 @@ rclone purge remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index ee290487a..c1318c5d8 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -43,13 +43,13 @@ rclone rmdir remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -64,6 +64,8 @@ rclone rmdir remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -72,7 +74,7 @@ rclone rmdir remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -95,12 +97,15 @@ rclone rmdir remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -108,6 +113,6 @@ rclone rmdir remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index 3a6a55aec..130eb59d5 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -48,13 +48,13 @@ rclone rmdirs remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -69,6 +69,8 @@ rclone rmdirs remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -77,7 +79,7 @@ rclone rmdirs remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -100,12 +102,15 @@ rclone rmdirs remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -113,6 +118,6 @@ rclone rmdirs remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index 0fb53e286..775781424 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -44,13 +44,13 @@ rclone sha1sum remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -65,6 +65,8 @@ rclone sha1sum remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -73,7 +75,7 @@ rclone sha1sum remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -96,12 +98,15 @@ rclone sha1sum remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -109,6 +114,6 @@ rclone sha1sum remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index f78d22445..a067a5623 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@@ -41,13 +41,13 @@ rclone size remote:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone size remote:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone size remote:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone size remote:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone size remote:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index b989963ed..1ac675e12 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -60,13 +60,13 @@ rclone sync source:path dest:path
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -81,6 +81,8 @@ rclone sync source:path dest:path
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -89,7 +91,7 @@ rclone sync source:path dest:path
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -112,12 +114,15 @@ rclone sync source:path dest:path
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -125,6 +130,6 @@ rclone sync source:path dest:path
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index 2cd985729..38ae5a4ed 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -1,5 +1,5 @@
---
-date: 2017-06-15T20:06:09+01:00
+date: 2017-07-22T18:15:25+01:00
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -41,13 +41,13 @@ rclone version
--delete-before When synchronizing, delete files on destination before transfering
--delete-during When synchronizing, delete files during transfer (default)
--delete-excluded Delete files on dest excluded from sync
- --drive-auth-owner-only Only consider files owned by the authenticated user. Requires drive-full-list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
--drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
--drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-full-list Use a full listing for directory list. More data but usually quicker. (obsolete)
--drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
--drive-shared-with-me Only show files that are shared with me
--drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
--drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
--drive-use-trash Send files to the trash instead of deleting permanently.
--dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
@@ -62,6 +62,8 @@ rclone version
--files-from stringArray Read list of source-file names from file
-f, --filter stringArray Add a file-filtering rule
--filter-from stringArray Read filtering patterns from a file
+      --gcs-location string               Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
--ignore-checksum Skip post copy check of checksums.
--ignore-existing Skip all files that exist on destination
--ignore-size Ignore size when skipping use mod-time or checksum.
@@ -70,7 +72,7 @@ rclone version
--include-from stringArray Read include patterns from file
--local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
--log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
--low-level-retries int Number of low level retries to do. (default 10)
--max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
--max-depth int If set limits the recursion depth to this. (default -1)
@@ -93,12 +95,15 @@ rclone version
--s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
--size-only Skip based on size only, not mod-time or checksum
--stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
--stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
--suffix string Suffix for use with --backup-dir.
--swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
--syslog Use Syslog for logging
--syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
--timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
--track-renames When synchronizing, track file renames and do a server side move if possible
--transfers int Number of file transfers to run in parallel. (default 4)
-u, --update Skip files that are newer on the destination.
@@ -106,6 +111,6 @@ rclone version
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.36-190-gc34f11a9
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.37
-###### Auto generated by spf13/cobra on 15-Jun-2017
+###### Auto generated by spf13/cobra on 22-Jul-2017
diff --git a/docs/content/downloads.md b/docs/content/downloads.md
index f97f4c8d4..1f3bbbeae 100644
--- a/docs/content/downloads.md
+++ b/docs/content/downloads.md
@@ -2,43 +2,43 @@
title: "Rclone downloads"
description: "Download rclone binaries for your OS."
type: page
-date: "2017-03-18"
+date: "2017-07-22"
---
-Rclone Download v1.36
+Rclone Download v1.37
=====================
* Windows
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-windows-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-windows-amd64.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-windows-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-windows-amd64.zip)
* OSX
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-osx-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-osx-amd64.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-osx-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-osx-amd64.zip)
* Linux
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-linux-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-linux-amd64.zip)
- * [ARM - 32 Bit](https://downloads.rclone.org/rclone-v1.36-linux-arm.zip)
- * [ARM - 64 Bit](https://downloads.rclone.org/rclone-v1.36-linux-arm64.zip)
- * [MIPS - Big Endian](https://downloads.rclone.org/rclone-v1.36-linux-mips.zip)
- * [MIPS - Little Endian](https://downloads.rclone.org/rclone-v1.36-linux-mipsle.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-linux-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-linux-amd64.zip)
+ * [ARM - 32 Bit](https://downloads.rclone.org/rclone-v1.37-linux-arm.zip)
+ * [ARM - 64 Bit](https://downloads.rclone.org/rclone-v1.37-linux-arm64.zip)
+ * [MIPS - Big Endian](https://downloads.rclone.org/rclone-v1.37-linux-mips.zip)
+ * [MIPS - Little Endian](https://downloads.rclone.org/rclone-v1.37-linux-mipsle.zip)
* FreeBSD
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-freebsd-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-freebsd-amd64.zip)
- * [ARM - 32 Bit](https://downloads.rclone.org/rclone-v1.36-freebsd-arm.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-freebsd-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-freebsd-amd64.zip)
+ * [ARM - 32 Bit](https://downloads.rclone.org/rclone-v1.37-freebsd-arm.zip)
* NetBSD
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-netbsd-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-netbsd-amd64.zip)
- * [ARM - 32 Bit](https://downloads.rclone.org/rclone-v1.36-netbsd-arm.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-netbsd-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-netbsd-amd64.zip)
+ * [ARM - 32 Bit](https://downloads.rclone.org/rclone-v1.37-netbsd-arm.zip)
* OpenBSD
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-openbsd-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-openbsd-amd64.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-openbsd-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-openbsd-amd64.zip)
* Plan 9
- * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.36-plan9-386.zip)
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-plan9-amd64.zip)
+ * [386 - 32 Bit](https://downloads.rclone.org/rclone-v1.37-plan9-386.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-plan9-amd64.zip)
* Solaris
- * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.36-solaris-amd64.zip)
+ * [AMD64 - 64 Bit](https://downloads.rclone.org/rclone-v1.37-solaris-amd64.zip)
-You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.36).
+You can also find a [mirror of the downloads on github](https://github.com/ncw/rclone/releases/tag/v1.37).
Beta releases
=============
diff --git a/fs/version.go b/fs/version.go
index ed6074c8b..ed1501aa3 100644
--- a/fs/version.go
+++ b/fs/version.go
@@ -1,4 +1,4 @@
package fs
// Version of rclone
-var Version = "v1.36-DEV"
+var Version = "v1.37"
diff --git a/rclone.1 b/rclone.1
index ea99aaf4c..f126ba139 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.17.2
.\"
-.TH "rclone" "1" "Jun 15, 2017" "User Manual" ""
+.TH "rclone" "1" "Jul 22, 2017" "User Manual" ""
.hy
.SH Rclone
.PP
@@ -34,6 +34,8 @@ SFTP
.IP \[bu] 2
FTP
.IP \[bu] 2
+HTTP
+.IP \[bu] 2
The local filesystem
.PP
Features
@@ -291,7 +293,7 @@ sudo\ zypper\ install\ snapd
Enable the snap\-openwrt feed.
.SS Configure
.PP
-First you\[aq]ll need to configure rclone.
+First, you\[aq]ll need to configure rclone.
As the object storage systems have quite complicated authentication
these are kept in a config file.
(See the \f[C]\-\-config\f[] entry for how to find the config file and
@@ -308,7 +310,7 @@ rclone\ config
.PP
See the following for detailed instructions for
.IP \[bu] 2
-Google drive (https://rclone.org/drive/)
+Google Drive (https://rclone.org/drive/)
.IP \[bu] 2
Amazon S3 (https://rclone.org/s3/)
.IP \[bu] 2
@@ -335,6 +337,8 @@ SFTP (https://rclone.org/sftp/)
.IP \[bu] 2
FTP (https://rclone.org/ftp/)
.IP \[bu] 2
+HTTP (https://rclone.org/http/)
+.IP \[bu] 2
Crypt (https://rclone.org/crypt/) \- to encrypt other remotes
.SS Usage
.PP
@@ -948,7 +952,7 @@ rclone\ copyto\ source:path\ dest:path
.fi
.SS rclone cryptcheck
.PP
-Cryptcheck checks the integritity of a crypted remote.
+Cryptcheck checks the integrity of a crypted remote.
.SS Synopsis
.PP
rclone cryptcheck checks a remote against a crypted remote.
@@ -1123,7 +1127,7 @@ Mount the remote as a mountpoint.
\f[B]EXPERIMENTAL\f[]
.SS Synopsis
.PP
-rclone mount allows Linux, FreeBSD and macOS to mount any of
+rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of
Rclone\[aq]s cloud storage systems as a file system with FUSE.
.PP
This is \f[B]EXPERIMENTAL\f[] \- use with care.
@@ -1139,6 +1143,14 @@ rclone\ mount\ remote:path/to/files\ /path/to/local/mount
\f[]
.fi
.PP
+Or on Windows like this, where X: is an unused drive letter
+.IP
+.nf
+\f[C]
+rclone\ mount\ remote:path/to/files\ X:
+\f[]
+.fi
+.PP
When the program ends, either via Ctrl+C or receiving a SIGINT or
SIGTERM signal, the mount is automatically stopped.
.PP
@@ -1169,7 +1181,7 @@ None of these support the concept of directories, so empty directories
will have a tendency to disappear once they fall out of the directory
cache.
.PP
-Only supported on Linux, FreeBSD and OS X at the moment.
+Only supported on Linux, FreeBSD, OS X and Windows at the moment.
.SS rclone mount vs rclone sync/copy
.PP
File systems expect things to be 100% reliable, whereas cloud storage
@@ -1230,15 +1242,17 @@ rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags]
\ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v.
\ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode.
\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
+\ \ \ \ \ \ \-\-fuse\-flag\ stringArray\ \ \ \ \ Flags\ or\ arguments\ to\ be\ passed\ direct\ to\ libfuse/WinFsp.\ Repeat\ if\ required.
\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
\ \ \ \ \ \ \-\-max\-read\-ahead\ int\ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k)
\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \-o,\ \-\-option\ stringArray\ \ \ \ \ \ \ \ Option\ for\ libfuse/WinFsp.\ Repeat\ if\ required.
\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
-\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2)
+\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.
\ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used.
\f[]
.fi
@@ -1343,7 +1357,7 @@ rclone\ obscure\ password
.fi
.SS rclone rmdirs
.PP
-Remove any empty directoryies under the path.
+Remove empty directories under the path.
.SS Synopsis
.PP
This removes any empty directories (or directories that only contain
@@ -1361,7 +1375,7 @@ rclone\ rmdirs\ remote:path
.SS Copying single files
.PP
rclone normally syncs or copies directories.
-However if the source remote points to a file, rclone will just copy
+However, if the source remote points to a file, rclone will just copy
that file.
The destination remote must point to a directory \- rclone will give the
error
@@ -1395,8 +1409,8 @@ test.jpg
\f[]
.fi
.PP
-It is recommended to use \f[C]copy\f[] when copying single files not
-\f[C]sync\f[].
+It is recommended to use \f[C]copy\f[] when copying individual files,
+not \f[C]sync\f[].
They have pretty much the same effect but \f[C]copy\f[] will use a lot
less memory.
.SS Quoting and the shell
@@ -1448,6 +1462,31 @@ If you are using the root directory on its own then don\[aq]t quote it
rclone\ copy\ E:\\\ remote:backup
\f[]
.fi
+.SS Copying files or directories with \f[C]:\f[] in the names
+.PP
+rclone uses \f[C]:\f[] to mark a remote name.
+This is, however, a valid filename component in non\-Windows OSes.
+The remote name parser will only search for a \f[C]:\f[] up to the
+first \f[C]/\f[], so if you need to act on a file or directory like
+this then use the full path starting with a \f[C]/\f[], or use
+\f[C]\&./\f[] as a current directory prefix.
+.PP
+So to sync a directory called \f[C]sync:me\f[] to a remote called
+\f[C]remote:\f[] use
+.IP
+.nf
+\f[C]
+rclone\ sync\ ./sync:me\ remote:path
+\f[]
+.fi
+.PP
+or
+.IP
+.nf
+\f[C]
+rclone\ sync\ /full/path/to/sync:me\ remote:path
+\f[]
+.fi
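The parsing rule described above can be sketched as a small illustrative helper (this is not rclone's actual code, and the function name is hypothetical): a `:` only marks a remote name if it occurs before the first `/`.

```python
def split_remote(path):
    """Split a path into (remote, rest) using the rule described above:
    a ':' only marks a remote name if it occurs before the first '/'."""
    colon = path.find(":")
    slash = path.find("/")
    # No colon, or a '/' appears before the ':': treat the path as local.
    if colon == -1 or (slash != -1 and slash < colon):
        return (None, path)
    return (path[:colon], path[colon + 1:])

# A leading '/' or './' prefix forces the path to be treated as local:
print(split_remote("remote:path"))            # ('remote', 'path')
print(split_remote("./sync:me"))              # (None, './sync:me')
print(split_remote("/full/path/to/sync:me"))  # (None, '/full/path/to/sync:me')
```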
.SS Server Side Copy
.PP
Most remotes (but not all \- see the
@@ -1473,8 +1512,10 @@ and re\-upload in this case.
.PP
Server side copies are used with \f[C]sync\f[] and \f[C]copy\f[] and
will be identified in the log when using the \f[C]\-v\f[] flag.
-The may also be used with \f[C]move\f[] if the remote doesn\[aq]t
-support server side move.
+The \f[C]move\f[] command may also use them if the remote
+doesn\[aq]t support server side move directly.
+This is done by issuing a server side copy then a delete, which is
+much quicker than a download and re\-upload.
.PP
Server side copies will only be attempted if the remote names are the
same.
@@ -1498,7 +1539,7 @@ with optional fraction and a unit suffix, such as "300ms", "\-1.5h" or
Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
.PP
Options which use SIZE use kByte by default.
-However a suffix of \f[C]b\f[] for bytes, \f[C]k\f[] for kBytes,
+However, a suffix of \f[C]b\f[] for bytes, \f[C]k\f[] for kBytes,
\f[C]M\f[] for MBytes and \f[C]G\f[] for GBytes may be used.
These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
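The suffix rules can be sketched with a tiny parser (illustrative only, not rclone's implementation): bare numbers mean kBytes, and the multipliers are binary.

```python
MULTIPLIERS = {"b": 1, "k": 2**10, "M": 2**20, "G": 2**30}

def parse_size(s):
    """Parse a SIZE option value: bare numbers mean kBytes,
    while b/k/M/G suffixes select binary multipliers."""
    if s and s[-1] in MULTIPLIERS:
        return int(float(s[:-1]) * MULTIPLIERS[s[-1]])
    return int(float(s) * MULTIPLIERS["k"])  # default unit is kByte

print(parse_size("10M"))  # 10485760
print(parse_size("100"))  # 102400 (100 kByte)
```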
.SS \-\-backup\-dir=DIR
@@ -1543,13 +1584,13 @@ To use a single limit, specify the desired bandwidth in kBytes/s, or use
a suffix b|k|M|G.
The default is \f[C]0\f[] which means to not limit bandwidth.
.PP
-For example to limit bandwidth usage to 10 MBytes/s use
+For example, to limit bandwidth usage to 10 MBytes/s use
\f[C]\-\-bwlimit\ 10M\f[]
.PP
It is also possible to specify a "timetable" of limits, which will cause
certain limits to be applied at certain times.
To specify a timetable, format your entries as "HH:MM,BANDWIDTH
-HH:MM,BANDWITH...".
+HH:MM,BANDWIDTH...".
.PP
An example of a typical timetable to avoid link saturation during
daytime working hours could be:
@@ -1567,23 +1608,37 @@ Anything between 11pm and 8am will remain unlimited.
Bandwidth limits only apply to the data transfer.
They don\[aq]t apply to the bandwidth of the directory listings etc.
.PP
-Note that the units are Bytes/s not Bits/s.
+Note that the units are Bytes/s, not Bits/s.
Typically connections are measured in Bits/s \- to convert divide by 8.
-For example let\[aq]s say you have a 10 Mbit/s connection and you wish
+For example, let\[aq]s say you have a 10 Mbit/s connection and you wish
rclone to use half of it \- 5 Mbit/s.
This is 5/8 = 0.625MByte/s so you would use a
\f[C]\-\-bwlimit\ 0.625M\f[] parameter for rclone.
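The conversion above is just a divide-by-8; a one-line helper (hypothetical, for illustration) makes the arithmetic explicit:

```python
def bwlimit_for_mbits(mbits):
    """Convert a link speed in Mbit/s to the MByte/s figure
    that \-\-bwlimit expects (8 bits per byte)."""
    return mbits / 8

print(bwlimit_for_mbits(5))  # 0.625, i.e. use --bwlimit 0.625M
```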
+.PP
+On Unix systems (Linux, MacOS, \&...) the bandwidth limiter can be
+toggled by sending a \f[C]SIGUSR2\f[] signal to rclone.
+This allows you to remove the bandwidth limit from a long running
+rclone transfer and to quickly restore it to the value specified with
+\f[C]\-\-bwlimit\f[] when needed.
+Assuming there is only one rclone instance running, you can toggle the
+limiter like this:
+.IP
+.nf
+\f[C]
+kill\ \-SIGUSR2\ $(pidof\ rclone)
+\f[]
+.fi
.SS \-\-buffer\-size=SIZE
.PP
Use this sized buffer to speed up file transfers.
Each \f[C]\-\-transfer\f[] will use this much memory for buffering.
.PP
-Set to 0 to disable the buffering for the minimum memory use.
+Set to 0 to disable the buffering for the minimum memory usage.
.SS \-\-checkers=N
.PP
The number of checkers to run in parallel.
Checkers do the equality checking of files during a sync.
-For some storage systems (eg s3, swift, dropbox) this can take a
+For some storage systems (eg S3, Swift, Dropbox) this can take a
significant amount of time so they are run in parallel.
.PP
The default is to run 8 checkers in parallel.
@@ -1674,7 +1729,7 @@ If \f[C]\-\-checksum\f[] is set then it only checks the checksum.
It will also cause rclone to skip verifying the sizes are the same after
transfer.
.PP
-This can be useful for transferring files to and from onedrive which
+This can be useful for transferring files to and from OneDrive which
occasionally misreports the size of image files (see
#399 (https://github.com/ncw/rclone/issues/399) for more info).
.SS \-I, \-\-ignore\-times
@@ -1691,7 +1746,7 @@ Log all of rclone\[aq]s output to FILE.
This is not active by default.
This can be useful for tracking down problems with syncs in combination
with the \f[C]\-v\f[] flag.
-See the Logging section for more info.
+See the Logging section (#logging) for more info.
.SS \-\-log\-level LEVEL
.PP
This sets the log level for rclone.
@@ -1711,7 +1766,7 @@ It outputs very little when things are working normally.
It outputs warnings and significant events.
.PP
\f[C]ERROR\f[] is equivalent to \f[C]\-q\f[].
-It only output error messages.
+It only outputs error messages.
.SS \-\-low\-level\-retries NUMBER
.PP
This controls the number of low level retries rclone does.
@@ -1722,8 +1777,9 @@ This might be uploading a chunk of a big file for example.
You will see low level retries in the log with the \f[C]\-v\f[] flag.
.PP
This shouldn\[aq]t need to be changed from the default in normal
-operations, however if you get a lot of low level retries you may wish
-to reduce the value so rclone moves on to a high level retry (see the
+operations.
+However, if you get a lot of low level retries you may wish to reduce
+the value so rclone moves on to a high level retry (see the
\f[C]\-\-retries\f[] flag) quicker.
.PP
Disable low level retries with \f[C]\-\-low\-level\-retries\ 1\f[].
@@ -1785,7 +1841,7 @@ If you set this flag it will make as little output as possible.
.PP
Retry the entire sync if it fails this many times (default 3).
.PP
-Some remotes can be unreliable and a few retries helps pick up the files
+Some remotes can be unreliable and a few retries help pick up the files
which didn\[aq]t get transferred because of errors.
.PP
Disable retries with \f[C]\-\-retries\ 1\f[].
@@ -1795,7 +1851,7 @@ Normally rclone will look at modification time and size of files to see
if they are equal.
If you set this flag then rclone will check only the size.
.PP
-This can be useful transferring files from dropbox which have been
+This can be useful transferring files from Dropbox which have been
modified by the desktop sync client which doesn\[aq]t set checksums or
modification times in the same way as rclone.
.SS \-\-stats=TIME
@@ -1809,12 +1865,28 @@ This sets the interval.
The default is \f[C]1m\f[].
Use 0 to disable.
.PP
-If you set the stats interval then all command can show stats.
+If you set the stats interval then all commands can show stats.
This can be useful when running other commands, \f[C]check\f[] or
\f[C]mount\f[] for example.
+.PP
+Stats are logged at \f[C]INFO\f[] level by default which means they
+won\[aq]t show at default log level \f[C]NOTICE\f[].
+Use \f[C]\-\-stats\-log\-level\ NOTICE\f[] or \f[C]\-v\f[] to make them
+show.
+See the Logging section (#logging) for more info on log levels.
+.SS \-\-stats\-log\-level string
+.PP
+Log level to show \f[C]\-\-stats\f[] output at.
+This can be \f[C]DEBUG\f[], \f[C]INFO\f[], \f[C]NOTICE\f[], or
+\f[C]ERROR\f[].
+The default is \f[C]INFO\f[].
+This means that at the default logging level of \f[C]NOTICE\f[] the
+stats won\[aq]t show \- if you want them to then use
+\f[C]\-\-stats\-log\-level\ NOTICE\f[].
+See the Logging section (#logging) for more info on log levels.
.SS \-\-stats\-unit=bits|bytes
.PP
-By default data transfer rates will be printed in bytes/second.
+By default, data transfer rates will be printed in bytes/second.
.PP
This option allows the data rate to be printed in bits/second.
.PP
@@ -1836,7 +1908,7 @@ See \f[C]\-\-backup\-dir\f[] for more info.
.PP
On capable OSes (not Windows or Plan9) send all log output to syslog.
.PP
-This can be useful for running rclone in script or
+This can be useful for running rclone in a script or
\f[C]rclone\ mount\f[].
.SS \-\-syslog\-facility string
.PP
@@ -1844,9 +1916,44 @@ If using \f[C]\-\-syslog\f[] this sets the syslog facility (eg
\f[C]KERN\f[], \f[C]USER\f[]).
See \f[C]man\ syslog\f[] for a list of possible facilities.
The default facility is \f[C]DAEMON\f[].
+.SS \-\-tpslimit float
+.PP
+Limit HTTP transactions per second to this.
+Default is 0 which means unlimited transactions per second.
+.PP
+For example, to limit rclone to 10 HTTP transactions per second use
+\f[C]\-\-tpslimit\ 10\f[], or to 1 transaction every 2 seconds use
+\f[C]\-\-tpslimit\ 0.5\f[].
+.PP
+Use this when the number of transactions per second from rclone is
+causing a problem with the cloud storage provider (eg getting you banned
+or rate limited).
+.PP
+This can be very useful for \f[C]rclone\ mount\f[] to control the
+behaviour of applications using it.
+.PP
+See also \f[C]\-\-tpslimit\-burst\f[].
+.SS \-\-tpslimit\-burst int
+.PP
+Max burst of transactions for \f[C]\-\-tpslimit\f[].
+(default 1)
+.PP
+Normally \f[C]\-\-tpslimit\f[] will do exactly the number of
+transactions per second specified.
+However, if you supply \f[C]\-\-tpslimit\-burst\f[] then rclone can
+save up some transactions from when it was idle, giving a burst of up
+to the parameter supplied.
+.PP
+For example, if you provide \f[C]\-\-tpslimit\-burst\ 10\f[] then if
+rclone has been idle for more than 10*\f[C]\-\-tpslimit\f[] then it can
+do 10 transactions very quickly before they are limited again.
+.PP
+This may be used to increase performance of \f[C]\-\-tpslimit\f[]
+without changing the long term average number of transactions per
+second.
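The burst behaviour described above is that of a token bucket; a minimal sketch (not rclone's actual rate limiter, class and variable names are invented for illustration):

```python
class TokenBucket:
    """Allow `rate` transactions per second, with bursts of up to
    `burst` saved-up transactions (a sketch of --tpslimit/--tpslimit-burst)."""
    def __init__(self, rate, burst=1):
        self.rate = rate
        self.burst = burst
        self.tokens = float(burst)
        self.last = 0.0

    def allow(self, now):
        # Accumulate tokens for the idle time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, burst=10)
# After a long idle period the bucket is full, so 10 transactions
# pass immediately and the 11th is limited.
allowed = [bucket.allow(now=20.0) for _ in range(11)]
print(allowed.count(True))  # 10
```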
.SS \-\-track\-renames
.PP
-By default rclone doesn\[aq]t not keep track of renamed files, so if you
+By default, rclone doesn\[aq]t keep track of renamed files, so if you
rename a file locally then sync it to a remote, rclone will delete the
old file on the remote and upload a new copy.
.PP
@@ -1904,10 +2011,10 @@ using more directory lists to process any subdirectories.
This can be parallelised and works very quickly using the least amount
of memory.
.PP
-However some remotes have a way of listing all files beneath a directory
-in one (or a small number) of transactions.
-These tend to be the bucket based remotes (eg s3, b2, gcs, swift,
-hubic).
+However, some remotes have a way of listing all files beneath a
+directory in one (or a small number) of transactions.
+These tend to be the bucket based remotes (eg S3, B2, GCS, Swift,
+Hubic).
.PP
If you use the \f[C]\-\-fast\-list\f[] flag then rclone will use this
method for listing directories.
@@ -2032,7 +2139,7 @@ c/u/q>
.PP
Your configuration is now encrypted, and every time you start rclone you
will now be asked for the password.
-In the same menu you can change the password or completely remove
+In the same menu, you can change the password or completely remove
encryption from your configuration.
.PP
There is no way to recover the configuration if you lose your password.
@@ -2141,7 +2248,7 @@ If you are only copying a small number of files and/or have a large
number of files on the destination then \f[C]\-\-no\-traverse\f[] will
stop rclone listing the destination and save time.
.PP
-However if you are copying a large number of files, especially if you
+However, if you are copying a large number of files, especially if you
are doing a copy where lots of the files haven\[aq]t changed and
won\[aq]t need copying then you shouldn\[aq]t use
\f[C]\-\-no\-traverse\f[].
@@ -2186,11 +2293,11 @@ See the filtering section (https://rclone.org/filtering/).
rclone has 4 levels of logging, \f[C]Error\f[], \f[C]Notice\f[],
\f[C]Info\f[] and \f[C]Debug\f[].
.PP
-By default rclone logs to standard error.
+By default, rclone logs to standard error.
This means you can redirect standard error and still see the normal
output of rclone commands (eg \f[C]rclone\ ls\f[]).
.PP
-By default rclone will produce \f[C]Error\f[] and \f[C]Notice\f[] level
+By default, rclone will produce \f[C]Error\f[] and \f[C]Notice\f[] level
messages.
.PP
If you use the \f[C]\-q\f[] flag, rclone will only produce
@@ -2217,17 +2324,17 @@ which makes it easy to grep the log file for different kinds of
information.
.SS Exit Code
.PP
-If any errors occurred during the command, rclone will exit with a
-non\-zero exit code.
+If any errors occur during the command execution, rclone will exit with
+a non\-zero exit code.
This allows scripts to detect when rclone operations have failed.
.PP
-During the startup phase rclone will exit immediately if an error is
+During the startup phase, rclone will exit immediately if an error is
detected in the configuration.
There will always be a log message immediately before exiting.
.PP
When rclone is running it will accumulate errors as it goes along, and
-only exit with an non\-zero exit code if (after retries) there were
-still failed transfers.
+only exit with a non\-zero exit code if (after retries) there were still
+failed transfers.
For every error counted there will be a high priority log message
(visible with \f[C]\-q\f[]) showing the message and which file caused
the problem.
@@ -2244,11 +2351,11 @@ These can be used to set defaults for options or config file entries.
.PP
Every option in rclone can have its default set by environment variable.
.PP
-To find the name of the environment variable, first take the long option
-name, strip the leading \f[C]\-\-\f[], change \f[C]\-\f[] to \f[C]_\f[],
-make upper case and prepend \f[C]RCLONE_\f[].
+To find the name of the environment variable, first, take the long
+option name, strip the leading \f[C]\-\-\f[], change \f[C]\-\f[] to
+\f[C]_\f[], make upper case and prepend \f[C]RCLONE_\f[].
.PP
-For example to always set \f[C]\-\-stats\ 5s\f[], set the environment
+For example, to always set \f[C]\-\-stats\ 5s\f[], set the environment
variable \f[C]RCLONE_STATS=5s\f[].
If you set stats on the command line this will override the environment
variable setting.
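The naming rule above can be sketched as a small illustrative helper (the function name is hypothetical, not part of rclone):

```python
def option_to_env(option):
    """Turn a long option name into its RCLONE_ environment variable:
    strip the leading '--', change '-' to '_', uppercase, prepend RCLONE_."""
    return "RCLONE_" + option.lstrip("-").replace("-", "_").upper()

print(option_to_env("--stats"))              # RCLONE_STATS
print(option_to_env("--low-level-retries"))  # RCLONE_LOW_LEVEL_RETRIES
```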
@@ -2273,7 +2380,7 @@ To find the name of the environment variable, you need to set, take
\f[C]RCLONE_\f[] + name of remote + \f[C]_\f[] + name of config file
option and make it all uppercase.
.PP
-For example to configure an S3 remote named \f[C]mys3:\f[] without a
+For example, to configure an S3 remote named \f[C]mys3:\f[] without a
config file (using unix ways of setting environment variables):
.IP
.nf
@@ -3108,6 +3215,19 @@ T}@T{
\-
T}
T{
+HTTP
+T}@T{
+\-
+T}@T{
+No
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+R
+T}
+T{
The local filesystem
T}@T{
All
@@ -3404,6 +3524,21 @@ T}@T{
No
T}
T{
+HTTP
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}@T{
+No
+T}
+T{
The local filesystem
T}@T{
Yes
@@ -3658,6 +3793,9 @@ If sending them to the trash is required instead then use the
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
+.SS \-\-drive\-auth\-owner\-only
+.PP
+Only consider files owned by the authenticated user.
.SS \-\-drive\-chunk\-size=SIZE
.PP
Upload chunk size.
@@ -3668,21 +3806,9 @@ Making this larger will improve performance, but note that each chunk is
buffered in memory one per transfer.
.PP
Reducing this will reduce memory usage but decrease performance.
-.SS \-\-drive\-full\-list
-.PP
-No longer does anything \- kept for backwards compatibility.
-.SS \-\-drive\-upload\-cutoff=SIZE
-.PP
-File size cutoff for switching to chunked upload.
-Default is 8 MB.
-.SS \-\-drive\-use\-trash
-.PP
-Send files to the trash instead of deleting permanently.
-Defaults to off, namely deleting files permanently.
.SS \-\-drive\-auth\-owner\-only
.PP
Only consider files owned by the authenticated user.
-Requires that \-\-drive\-full\-list=true (default).
.SS \-\-drive\-formats
.PP
Google documents can only be exported from Google drive.
@@ -3860,10 +3986,30 @@ T}@T{
A ZIP file of HTML, Images CSS
T}
.TE
+.SS \-\-drive\-list\-chunk int
+.PP
+Size of listing chunk 100\-1000.
+0 to disable.
+(default 1000)
+.SS \-\-drive\-shared\-with\-me
+.PP
+Only show files that are shared with me.
.SS \-\-drive\-skip\-gdocs
.PP
Skip google documents in all listings.
If given, gdocs practically become invisible to rclone.
+.SS \-\-drive\-trashed\-only
+.PP
+Only show files that are in the trash.
+This will show trashed files in their original directory structure.
+.SS \-\-drive\-upload\-cutoff=SIZE
+.PP
+File size cutoff for switching to chunked upload.
+Default is 8 MB.
+.SS \-\-drive\-use\-trash
+.PP
+Send files to the trash instead of deleting permanently.
+Defaults to off, namely deleting files permanently.
.SS Limitations
.PP
Drive has quite a lot of rate limiting.
@@ -3871,6 +4017,41 @@ This causes rclone to be limited to transferring about 2 files per
second only.
Individual files may be transferred much faster at 100s of MBytes/s but
lots of small files can take a long time.
+.SS Duplicated files
+.PP
+Sometimes, for no reason I\[aq]ve been able to track down, drive will
+duplicate a file that rclone uploads.
+Drive, unlike all the other remotes, can have duplicated files.
+.PP
+Duplicated files cause problems with the syncing and you will see
+messages in the log about duplicates.
+.PP
+Use \f[C]rclone\ dedupe\f[] to fix duplicated files.
+.PP
+Note that this isn\[aq]t just a problem with rclone, even Google Photos
+on Android duplicates files on drive sometimes.
+.SS Rclone appears to be re\-copying files it shouldn\[aq]t
+.PP
+There are two possible reasons for rclone to recopy files which
+haven\[aq]t changed to Google Drive.
+.PP
+The first is the duplicated file issue above \- run
+\f[C]rclone\ dedupe\f[] and check your logs for duplicate object or
+directory messages.
+.PP
+The second is that sometimes Google reports different sizes for the
+Google Docs exports which will cause rclone to re\-download Google Docs
+for no apparent reason.
+\f[C]\-\-ignore\-size\f[] is a not very satisfactory work\-around for
+this if it is causing you a lot of problems.
+.SS Google docs downloads sometimes fail with "Failed to copy: read X
+bytes expecting Y"
+.PP
+This is the same problem as above.
+Google reports the google doc is one size, but rclone downloads a
+different size.
+Work around this with the \f[C]\-\-ignore\-size\f[] flag or wait for
+rclone to retry the download, which it will.
.SS Making your own client_id
.PP
When you use rclone with Google drive in its default configuration you
@@ -4374,21 +4555,33 @@ application developers and devops.
It is very easy to install and provides an S3 compatible server which
can be used by rclone.
.PP
-To use it, install Minio following the instructions from the web site.
+To use it, install Minio following the instructions
+here (https://docs.minio.io/docs/minio-quickstart-guide).
.PP
When it configures itself Minio will print something like this
.IP
.nf
\f[C]
-AccessKey:\ WLGDGYAQYIGI833EV05A\ \ SecretKey:\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF\ Region:\ us\-east\-1
+Endpoint:\ \ http://192.168.1.106:9000\ \ http://172.23.0.1:9000
+AccessKey:\ USWUXHGYZQYFYFFIT3RE
+SecretKey:\ MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+Region:\ \ \ \ us\-east\-1
+SQS\ ARNs:\ \ arn:minio:sqs:us\-east\-1:1:redis\ arn:minio:sqs:us\-east\-1:2:redis
-Minio\ Object\ Storage:
-\ \ \ \ \ http://127.0.0.1:9000
-\ \ \ \ \ http://10.0.0.3:9000
+Browser\ Access:
+\ \ \ http://192.168.1.106:9000\ \ http://172.23.0.1:9000
-Minio\ Browser:
-\ \ \ \ \ http://127.0.0.1:9000
-\ \ \ \ \ http://10.0.0.3:9000
+Command\-line\ Access:\ https://docs.minio.io/docs/minio\-client\-quickstart\-guide
+\ \ \ $\ mc\ config\ host\ add\ myminio\ http://192.168.1.106:9000\ USWUXHGYZQYFYFFIT3RE\ MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
+
+Object\ API\ (Amazon\ S3\ compatible):
+\ \ \ Go:\ \ \ \ \ \ \ \ \ https://docs.minio.io/docs/golang\-client\-quickstart\-guide
+\ \ \ Java:\ \ \ \ \ \ \ https://docs.minio.io/docs/java\-client\-quickstart\-guide
+\ \ \ Python:\ \ \ \ \ https://docs.minio.io/docs/python\-client\-quickstart\-guide
+\ \ \ JavaScript:\ https://docs.minio.io/docs/javascript\-client\-quickstart\-guide
+\ \ \ .NET:\ \ \ \ \ \ \ https://docs.minio.io/docs/dotnet\-client\-quickstart\-guide
+
+Drive\ Capacity:\ 26\ GiB\ Free,\ 165\ GiB\ Total
\f[]
.fi
.PP
@@ -4398,10 +4591,10 @@ Note that it is important to put the region in as stated above.
.nf
\f[C]
env_auth>\ 1
-access_key_id>\ WLGDGYAQYIGI833EV05A
-secret_access_key>\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF\ \ \
+access_key_id>\ USWUXHGYZQYFYFFIT3RE
+secret_access_key>\ MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region>\ us\-east\-1
-endpoint>\ http://10.0.0.3:9000
+endpoint>\ http://192.168.1.106:9000
location_constraint>
server_side_encryption>
\f[]
@@ -4413,26 +4606,20 @@ Which makes the config file look like this
\f[C]
[minio]
env_auth\ =\ false
-access_key_id\ =\ WLGDGYAQYIGI833EV05A
-secret_access_key\ =\ BYvgJM101sHngl2uzjXS/OBF/aMxAN06JrJ3qJlF
+access_key_id\ =\ USWUXHGYZQYFYFFIT3RE
+secret_access_key\ =\ MOJRH0mkL1IPauahWITSVvyDrQbEEIwljvmxdq03
region\ =\ us\-east\-1
-endpoint\ =\ http://10.0.0.3:9000
+endpoint\ =\ http://192.168.1.106:9000
location_constraint\ =
server_side_encryption\ =
\f[]
.fi
.PP
-Minio doesn\[aq]t support all the features of S3 yet.
-In particular it doesn\[aq]t support MD5 checksums (ETags) or metadata.
-This means rclone can\[aq]t check MD5SUMs or store the modified date.
-However you can work around this with the \f[C]\-\-size\-only\f[] flag
-of rclone.
-.PP
So once set up, for example to copy files into a bucket
.IP
.nf
\f[C]
-rclone\ \-\-size\-only\ copy\ /path/to/files\ minio:bucket
+rclone\ copy\ /path/to/files\ minio:bucket
\f[]
.fi
.SS Swift
@@ -4897,6 +5084,52 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 5\ /\ Project\ team\ owners\ get\ OWNER\ access,\ and\ all\ Users\ get\ WRITER\ access.
\ \ \ \\\ "publicReadWrite"
bucket_acl>\ 2
+Location\ for\ the\ newly\ created\ buckets.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Empty\ for\ default\ location\ (US).
+\ \ \ \\\ ""
+\ 2\ /\ Multi\-regional\ location\ for\ Asia.
+\ \ \ \\\ "asia"
+\ 3\ /\ Multi\-regional\ location\ for\ Europe.
+\ \ \ \\\ "eu"
+\ 4\ /\ Multi\-regional\ location\ for\ United\ States.
+\ \ \ \\\ "us"
+\ 5\ /\ Taiwan.
+\ \ \ \\\ "asia\-east1"
+\ 6\ /\ Tokyo.
+\ \ \ \\\ "asia\-northeast1"
+\ 7\ /\ Singapore.
+\ \ \ \\\ "asia\-southeast1"
+\ 8\ /\ Sydney.
+\ \ \ \\\ "australia\-southeast1"
+\ 9\ /\ Belgium.
+\ \ \ \\\ "europe\-west1"
+10\ /\ London.
+\ \ \ \\\ "europe\-west2"
+11\ /\ Iowa.
+\ \ \ \\\ "us\-central1"
+12\ /\ South\ Carolina.
+\ \ \ \\\ "us\-east1"
+13\ /\ Northern\ Virginia.
+\ \ \ \\\ "us\-east4"
+14\ /\ Oregon.
+\ \ \ \\\ "us\-west1"
+location>\ 12
+The\ storage\ class\ to\ use\ when\ storing\ objects\ in\ Google\ Cloud\ Storage.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Default
+\ \ \ \\\ ""
+\ 2\ /\ Multi\-regional\ storage\ class
+\ \ \ \\\ "MULTI_REGIONAL"
+\ 3\ /\ Regional\ storage\ class
+\ \ \ \\\ "REGIONAL"
+\ 4\ /\ Nearline\ storage\ class
+\ \ \ \\\ "NEARLINE"
+\ 5\ /\ Coldline\ storage\ class
+\ \ \ \\\ "COLDLINE"
+\ 6\ /\ Durable\ reduced\ availability\ storage\ class
+\ \ \ \\\ "DURABLE_REDUCED_AVAILABILITY"
+storage_class>\ 5
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
@@ -5838,7 +6071,7 @@ Here are the command line options specific to this cloud storage system.
When uploading large files chunk the file into this size.
Note that these chunks are buffered in memory and there might be a maximum
of \f[C]\-\-transfers\f[] chunks in progress at once.
-100,000,000 Bytes is the minimim size (default 96M).
+5,000,000 Bytes is the minimum size (default 96M).
.SS \-\-b2\-upload\-cutoff=SIZE
.PP
Cutoff for switching to chunked upload (default 190.735 MiB == 200 MB).
@@ -6058,18 +6291,14 @@ rclone\ config
.fi
.PP
This will guide you through an interactive setup process.
-You will need your account number (a short hex number) and key (a long
-hex number) which you can get from the SFTP control panel.
.IP
.nf
\f[C]
No\ remotes\ found\ \-\ make\ a\ new\ one
n)\ New\ remote
-r)\ Rename\ remote
-c)\ Copy\ remote
s)\ Set\ configuration\ password
q)\ Quit\ config
-n/r/c/s/q>\ n
+n/s/q>\ n
name>\ remote
Type\ of\ storage\ to\ configure.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
@@ -6101,6 +6330,8 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "sftp"
14\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
+15\ /\ http\ Connection
+\ \ \ \\\ "http"
Storage>\ sftp
SSH\ host\ to\ connect\ to
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
@@ -6108,21 +6339,210 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "example.com"
host>\ example.com
SSH\ username,\ leave\ blank\ for\ current\ username,\ ncw
-user>\
+user>\ sftpuser
SSH\ port,\ leave\ blank\ to\ use\ default\ (22)
port>\
-SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent
+SSH\ password,\ leave\ blank\ to\ use\ ssh\-agent.
y)\ Yes\ type\ in\ my\ own\ password
g)\ Generate\ random\ password
n)\ No\ leave\ this\ optional\ password\ blank
y/g/n>\ n
+Path\ to\ unencrypted\ PEM\-encoded\ private\ key\ file,\ leave\ blank\ to\ use\ ssh\-agent.
+key_file>\
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
[remote]
host\ =\ example.com
-user\ =\
+user\ =\ sftpuser
port\ =\
pass\ =\
+key_file\ =\
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+This remote is called \f[C]remote\f[] and can now be used like this
+.PP
+See all directories in the home directory
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+Make a new directory
+.IP
+.nf
+\f[C]
+rclone\ mkdir\ remote:path/to/directory
+\f[]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:path/to/directory
+\f[]
+.fi
+.PP
+Sync \f[C]/home/local/directory\f[] to the remote directory, deleting
+any excess files in the directory.
+.IP
+.nf
+\f[C]
+rclone\ sync\ /home/local/directory\ remote:directory
+\f[]
+.fi
+.SS SSH Authentication
+.PP
+The SFTP remote supports 3 authentication methods
+.IP \[bu] 2
+Password
+.IP \[bu] 2
+Key file
+.IP \[bu] 2
+ssh\-agent
+.PP
+Key files should be unencrypted PEM\-encoded private key files.
+For instance \f[C]/home/$USER/.ssh/id_rsa\f[].
+.PP
+If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then it will
+attempt to contact an ssh\-agent.
+.SS ssh\-agent on macOS
+.PP
+Note that there seem to be various problems with using an ssh\-agent on
+macOS due to recent changes in the OS.
+The most effective work\-around seems to be to start an ssh\-agent in
+each session, eg
+.IP
+.nf
+\f[C]
+eval\ `ssh\-agent\ \-s`\ &&\ ssh\-add\ \-A
+\f[]
+.fi
+.PP
+And then at the end of the session
+.IP
+.nf
+\f[C]
+eval\ `ssh\-agent\ \-k`
+\f[]
+.fi
+.PP
+These commands can be used in scripts of course.
+.SS Modified time
+.PP
+Modified times are stored on the server to 1 second precision.
+.PP
+Modified times are used in syncing and are fully supported.
+.SS Limitations
+.PP
+SFTP does not support any checksums.
+.PP
+The only ssh agent supported under Windows is PuTTY\[aq]s Pageant.
+.PP
+SFTP isn\[aq]t supported under plan9 until this
+issue (https://github.com/pkg/sftp/issues/156) is fixed.
+.PP
+Note that since SFTP isn\[aq]t HTTP based the following flags don\[aq]t
+work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[],
+\f[C]\-\-dump\-auth\f[]
+.PP
+Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
+\f[C]\-\-contimeout\f[] is).
+.SS FTP
+.PP
+FTP is the File Transfer Protocol.
+FTP support is provided using the
+github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp)
+package.
+.PP
+Here is an example of making an FTP configuration.
+First run
+.IP
+.nf
+\f[C]
+rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process.
+An FTP remote only needs a host together with a username and a
+password.
+With an anonymous FTP server, you will need to use \f[C]anonymous\f[]
+as the username and your email address as the password.
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/r/c/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ FTP\ Connection\
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 8\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 9\ /\ Hubic
+\ \ \ \\\ "hubic"
+10\ /\ Local\ Disk
+\ \ \ \\\ "local"
+11\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+13\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+14\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+Storage>\ ftp
+FTP\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ ftp.example.com
+\ \ \ \\\ "ftp.example.com"
+host>\ ftp.example.com
+FTP\ username,\ leave\ blank\ for\ current\ username,\ ncw
+user>
+FTP\ port,\ leave\ blank\ to\ use\ default\ (21)
+port>
+FTP\ password
+y)\ Yes\ type\ in\ my\ own\ password
+g)\ Generate\ random\ password
+y/g>\ y
+Enter\ the\ password:
+password:
+Confirm\ the\ password:
+password:
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+host\ =\ ftp.example.com
+user\ =\
+port\ =
+pass\ =\ ***\ ENCRYPTED\ ***
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -6167,24 +6587,171 @@ rclone\ sync\ /home/local/directory\ remote:directory
.fi
.SS Modified time
.PP
-Modified times are stored on the server to 1 second precision.
+FTP does not support modified times.
+Any times you see on the server will be the time of upload.
+.SS Checksums
.PP
-Modified times are used in syncing and are fully supported.
+FTP does not support any checksums.
.SS Limitations
.PP
-SFTP does not support any checksums.
-.PP
-The only ssh agent supported under Windows is Putty\[aq]s pagent.
-.PP
-SFTP isn\[aq]t supported under plan9 until this
-issue (https://github.com/pkg/sftp/issues/156) is fixed.
-.PP
-Note that since SFTP isn\[aq]t HTTP based the following flags don\[aq]t
+Note that since FTP isn\[aq]t HTTP based the following flags don\[aq]t
work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[],
\f[C]\-\-dump\-auth\f[]
.PP
Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
\f[C]\-\-contimeout\f[] is).
+.PP
+FTP could support server side move but doesn\[aq]t yet.
+.SS HTTP
+.PP
+The HTTP remote is a read only remote for reading files from a
+webserver.
+The webserver should provide file listings which rclone will read and
+turn into a remote.
+This has been tested with common webservers such as Apache/Nginx/Caddy
+and will likely work with file listings from most web servers.
+(If it doesn\[aq]t then please file an issue, or send a pull request!)
+.PP
+Paths are specified as \f[C]remote:\f[] or \f[C]remote:path/to/dir\f[].
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 5\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 6\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 8\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+\ 9\ /\ Hubic
+\ \ \ \\\ "hubic"
+10\ /\ Local\ Disk
+\ \ \ \\\ "local"
+11\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+13\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+14\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+15\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ http
+URL\ of\ http\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ example.com
+\ \ \ \\\ "https://example.com"
+url>\ https://beta.rclone.org
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+url\ =\ https://beta.rclone.org
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+Current\ remotes:
+
+Name\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Type
+====\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ====
+remote\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ http
+
+e)\ Edit\ existing\ remote
+n)\ New\ remote
+d)\ Delete\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+e/n/d/r/c/s/q>\ q
+\f[]
+.fi
+.PP
+This remote is called \f[C]remote\f[] and can now be used like this
+.PP
+See all the top level directories
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List the contents of a directory
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:directory
+\f[]
+.fi
+.PP
+Sync the remote \f[C]directory\f[] to \f[C]/home/local/directory\f[],
+deleting any excess files.
+.IP
+.nf
+\f[C]
+rclone\ sync\ remote:directory\ /home/local/directory
+\f[]
+.fi
+.SS Read only
+.PP
+This remote is read only \- you can\[aq]t upload files to an HTTP
+server.
+.SS Modified time
+.PP
+Most HTTP servers store times accurate to 1 second.
+.SS Checksum
+.PP
+No checksums are stored.
+.SS Usage without a config file
+.PP
+Note that since only two environment variables need to be set, it is
+easy to use rclone without a config file like this:
+.IP
+.nf
+\f[C]
+RCLONE_CONFIG_ZZ_TYPE=http\ RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org\ rclone\ lsd\ zz:
+\f[]
+.fi
+.PP
+Or if you prefer
+.IP
+.nf
+\f[C]
+export\ RCLONE_CONFIG_ZZ_TYPE=http
+export\ RCLONE_CONFIG_ZZ_URL=https://beta.rclone.org
+rclone\ lsd\ zz:
+\f[]
+.fi
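+.PP
+More generally (this is a sketch generalising the pattern shown above;
+the remote name \f[C]mysite\f[] and the extension to other config keys
+are assumptions, not taken from this example), a remote can be defined
+entirely from the environment by setting
+\f[C]RCLONE_CONFIG_<NAME>_TYPE\f[] plus one variable per config key,
+with the remote name upper cased:
+.IP
+.nf
+\f[C]
+#\ defines\ a\ remote\ called\ "mysite"\ of\ type\ http
+export\ RCLONE_CONFIG_MYSITE_TYPE=http
+export\ RCLONE_CONFIG_MYSITE_URL=https://beta.rclone.org
+rclone\ lsd\ mysite:
+\f[]
+.fi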
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
@@ -6630,152 +7197,6 @@ If the user doesn\[aq]t supply a salt then rclone uses an internal one.
\f[C]scrypt\f[] makes it impractical to mount a dictionary attack on
rclone encrypted data.
For full protection against this you should always use a salt.
-.SS FTP
-.PP
-FTP is the File Transfer Protocol.
-FTP support is provided using the
-github.com/jlaffaye/ftp (https://godoc.org/github.com/jlaffaye/ftp)
-package.
-.PP
-Here is an example of making an FTP configuration.
-First run
-.IP
-.nf
-\f[C]
-rclone\ config
-\f[]
-.fi
-.PP
-This will guide you through an interactive setup process.
-An FTP remote only needs a host together with and a username and a
-password.
-With anonymous FTP server, you will need to use \f[C]anonymous\f[] as
-username and your email address as the password.
-.IP
-.nf
-\f[C]
-No\ remotes\ found\ \-\ make\ a\ new\ one
-n)\ New\ remote
-r)\ Rename\ remote
-c)\ Copy\ remote
-s)\ Set\ configuration\ password
-q)\ Quit\ config
-n/r/c/s/q>\ n
-name>\ remote
-Type\ of\ storage\ to\ configure.
-Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ 1\ /\ Amazon\ Drive
-\ \ \ \\\ "amazon\ cloud\ drive"
-\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
-\ \ \ \\\ "s3"
-\ 3\ /\ Backblaze\ B2
-\ \ \ \\\ "b2"
-\ 4\ /\ Dropbox
-\ \ \ \\\ "dropbox"
-\ 5\ /\ Encrypt/Decrypt\ a\ remote
-\ \ \ \\\ "crypt"
-\ 6\ /\ FTP\ Connection\
-\ \ \ \\\ "ftp"
-\ 7\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
-\ \ \ \\\ "google\ cloud\ storage"
-\ 8\ /\ Google\ Drive
-\ \ \ \\\ "drive"
-\ 9\ /\ Hubic
-\ \ \ \\\ "hubic"
-10\ /\ Local\ Disk
-\ \ \ \\\ "local"
-11\ /\ Microsoft\ OneDrive
-\ \ \ \\\ "onedrive"
-12\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
-\ \ \ \\\ "swift"
-13\ /\ SSH/SFTP\ Connection
-\ \ \ \\\ "sftp"
-14\ /\ Yandex\ Disk
-\ \ \ \\\ "yandex"
-Storage>\ ftp
-FTP\ host\ to\ connect\ to
-Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
-\ 1\ /\ Connect\ to\ ftp.example.com
-\ \ \ \\\ "ftp.example.com"
-host>\ ftp.example.com
-FTP\ username,\ leave\ blank\ for\ current\ username,\ ncw
-user>
-FTP\ port,\ leave\ blank\ to\ use\ default\ (21)
-port>
-FTP\ password
-y)\ Yes\ type\ in\ my\ own\ password
-g)\ Generate\ random\ password
-y/g>\ y
-Enter\ the\ password:
-password:
-Confirm\ the\ password:
-password:
-Remote\ config
-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
-[remote]
-host\ =\ ftp.example.com
-user\ =\
-port\ =
-pass\ =\ ***\ ENCRYPTED\ ***
-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
-y)\ Yes\ this\ is\ OK
-e)\ Edit\ this\ remote
-d)\ Delete\ this\ remote
-y/e/d>\ y
-\f[]
-.fi
-.PP
-This remote is called \f[C]remote\f[] and can now be used like this
-.PP
-See all directories in the home directory
-.IP
-.nf
-\f[C]
-rclone\ lsd\ remote:
-\f[]
-.fi
-.PP
-Make a new directory
-.IP
-.nf
-\f[C]
-rclone\ mkdir\ remote:path/to/directory
-\f[]
-.fi
-.PP
-List the contents of a directory
-.IP
-.nf
-\f[C]
-rclone\ ls\ remote:path/to/directory
-\f[]
-.fi
-.PP
-Sync \f[C]/home/local/directory\f[] to the remote directory, deleting
-any excess files in the directory.
-.IP
-.nf
-\f[C]
-rclone\ sync\ /home/local/directory\ remote:directory
-\f[]
-.fi
-.SS Modified time
-.PP
-FTP does not support modified times.
-Any times you see on the server will be time of upload.
-.SS Checksums
-.PP
-FTP does not support any checksums.
-.SS Limitations
-.PP
-Note that since FTP isn\[aq]t HTTP based the following flags don\[aq]t
-work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[],
-\f[C]\-\-dump\-auth\f[]
-.PP
-Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
-\f[C]\-\-contimeout\f[] is).
-.PP
-FTP could support server side move but doesn\[aq]t yet.
.SS Local Filesystem
.PP
Local paths are specified as normal filesystem paths, eg
@@ -6973,6 +7394,236 @@ On systems where it isn\[aq]t supported (eg Windows) it will not appear
as a valid flag.
.SS Changelog
.IP \[bu] 2
+v1.37 \- 2017\-07\-22
+.RS 2
+.IP \[bu] 2
+New backends
+.IP \[bu] 2
+FTP \- thanks to Antonio Messina
+.IP \[bu] 2
+HTTP \- thanks to Vasiliy Tolstov
+.IP \[bu] 2
+New commands
+.IP \[bu] 2
+rclone ncdu \- for exploring a remote with a text based user interface.
+.IP \[bu] 2
+rclone lsjson \- for listing with a machine readable output
+.IP \[bu] 2
+rclone dbhashsum \- to show Dropbox style hashes of files (local or
+Dropbox)
+.IP \[bu] 2
+New Features
+.IP \[bu] 2
+Implement \-\-fast\-list flag
+.RS 2
+.IP \[bu] 2
+This allows remotes to list recursively if they can
+.IP \[bu] 2
+This uses fewer transactions (important if you pay for them)
+.IP \[bu] 2
+This may or may not be quicker
+.IP \[bu] 2
+This will use more memory as it has to hold the listing in memory
+.IP \[bu] 2
+\-\-old\-sync\-method deprecated \- the remaining uses are covered by
+\-\-fast\-list
+.IP \[bu] 2
+This involved a major rewrite of all the listing code
+.RE
+.IP \[bu] 2
+Add \-\-tpslimit and \-\-tpslimit\-burst to limit transactions per
+second
+.RS 2
+.IP \[bu] 2
+this is useful in conjunction with \f[C]rclone\ mount\f[] to limit
+external apps
+.RE
+.IP \[bu] 2
+Add \-\-stats\-log\-level so can see \-\-stats without \-v
+.IP \[bu] 2
+Print password prompts to stderr \- Hraban Luyat
+.IP \[bu] 2
+Warn about duplicate files when syncing
+.IP \[bu] 2
+Oauth improvements
+.RS 2
+.IP \[bu] 2
+allow auth_url and token_url to be set in the config file
+.IP \[bu] 2
+Print redirection URI if using own credentials.
+.RE
+.IP \[bu] 2
+Don\[aq]t Mkdir at the start of sync to save transactions
+.IP \[bu] 2
+Compile
+.IP \[bu] 2
+Update build to go1.8.3
+.IP \[bu] 2
+Require go1.6 for building rclone
+.IP \[bu] 2
+Compile 386 builds with "GO386=387" for maximum compatibility
+.IP \[bu] 2
+Bug Fixes
+.IP \[bu] 2
+Fix menu selection when no remotes
+.IP \[bu] 2
+Config saving reworked to not kill the file if disk gets full
+.IP \[bu] 2
+Don\[aq]t delete remote if name does not change while renaming
+.IP \[bu] 2
+moveto, copyto: report transfers and checks as per move and copy
+.IP \[bu] 2
+Local
+.IP \[bu] 2
+Add \-\-local\-no\-unicode\-normalization flag \- Bob Potter
+.IP \[bu] 2
+Mount
+.IP \[bu] 2
+Now supported on Windows using cgofuse and WinFsp \- thanks to Bill
+Zissimopoulos for much help
+.IP \[bu] 2
+Compare checksums on upload/download via FUSE
+.IP \[bu] 2
+Unmount when program ends with SIGINT (Ctrl+C) or SIGTERM \- Jérôme
+Vizcaino
+.IP \[bu] 2
+On read only open of file, make open pending until first read
+.IP \[bu] 2
+Make \-\-read\-only reject modify operations
+.IP \[bu] 2
+Implement ModTime via FUSE for remotes that support it
+.IP \[bu] 2
+Allow modTime to be changed even before all writers are closed
+.IP \[bu] 2
+Fix panic on renames
+.IP \[bu] 2
+Fix hang on errored upload
+.IP \[bu] 2
+Crypt
+.IP \[bu] 2
+Report the name:root as specified by the user
+.IP \[bu] 2
+Add an "obfuscate" option for filename encryption \- Stephen Harris
+.IP \[bu] 2
+Amazon Drive
+.IP \[bu] 2
+Fix initialization order for token renewer
+.IP \[bu] 2
+Remove revoked credentials, allow oauth proxy config and update docs
+.IP \[bu] 2
+B2
+.IP \[bu] 2
+Reduce minimum chunk size to 5MB
+.IP \[bu] 2
+Drive
+.IP \[bu] 2
+Add team drive support
+.IP \[bu] 2
+Reduce bandwidth by adding fields for partial responses \- Martin
+Kristensen
+.IP \[bu] 2
+Implement \-\-drive\-shared\-with\-me flag to view shared with me files
+\- Danny Tsai
+.IP \[bu] 2
+Add \-\-drive\-trashed\-only to read only the files in the trash
+.IP \[bu] 2
+Remove obsolete \-\-drive\-full\-list
+.IP \[bu] 2
+Add missing seek to start on retries of chunked uploads
+.IP \[bu] 2
+Fix stats accounting for upload
+.IP \[bu] 2
+Convert / in names to a unicode equivalent (／)
+.IP \[bu] 2
+Poll for Google Drive changes when mounted
+.IP \[bu] 2
+OneDrive
+.IP \[bu] 2
+Fix the uploading of files with spaces
+.IP \[bu] 2
+Fix initialization order for token renewer
+.IP \[bu] 2
+Display speeds accurately when uploading \- Yoni Jah
+.IP \[bu] 2
+Swap to using http://localhost:53682/ as redirect URL \- Michael Ledin
+.IP \[bu] 2
+Retry on token expired error, reset upload body on retry \- Yoni Jah
+.IP \[bu] 2
+Google Cloud Storage
+.IP \[bu] 2
+Add ability to specify location and storage class via config and command
+line \- thanks gdm85
+.IP \[bu] 2
+Create container if necessary on server side copy
+.IP \[bu] 2
+Increase directory listing chunk to 1000 to increase performance
+.IP \[bu] 2
+Obtain a refresh token for GCS \- Steven Lu
+.IP \[bu] 2
+Yandex
+.IP \[bu] 2
+Fix the name reported in log messages (was empty)
+.IP \[bu] 2
+Correct error return for listing empty directory
+.IP \[bu] 2
+Dropbox
+.IP \[bu] 2
+Rewritten to use the v2 API
+.RS 2
+.IP \[bu] 2
+Now supports ModTime
+.IP \[bu] 2
+Can only be set by uploading the file again
+.IP \[bu] 2
+If you uploaded with an old rclone, rclone may upload everything again
+.IP \[bu] 2
+Use \f[C]\-\-size\-only\f[] or \f[C]\-\-checksum\f[] to avoid this
+.IP \[bu] 2
+Now supports the Dropbox content hashing scheme
+.IP \[bu] 2
+Now supports low level retries
+.RE
+.IP \[bu] 2
+S3
+.IP \[bu] 2
+Work around eventual consistency in bucket creation
+.IP \[bu] 2
+Create container if necessary on server side copy
+.IP \[bu] 2
+Add us\-east\-2 (Ohio) and eu\-west\-2 (London) S3 regions \- Zahiar
+Ahmed
+.IP \[bu] 2
+Swift, Hubic
+.IP \[bu] 2
+Fix zero length directory markers showing in the subdirectory listing
+.RS 2
+.IP \[bu] 2
+this caused lots of duplicate transfers
+.RE
+.IP \[bu] 2
+Fix paged directory listings
+.RS 2
+.IP \[bu] 2
+this caused duplicate directory errors
+.RE
+.IP \[bu] 2
+Create container if necessary on server side copy
+.IP \[bu] 2
+Increase directory listing chunk to 1000 to increase performance
+.IP \[bu] 2
+Make sensible error if the user forgets the container
+.IP \[bu] 2
+SFTP
+.IP \[bu] 2
+Add support for using ssh key files
+.IP \[bu] 2
+Fix under Windows
+.IP \[bu] 2
+Fix ssh agent on Windows
+.IP \[bu] 2
+Adapt to latest version of library \- Igor Kharin
+.RE
+.IP \[bu] 2
v1.36 \- 2017\-03\-18
.RS 2
.IP \[bu] 2
@@ -8763,6 +9414,16 @@ Fabian Möller
.IP \[bu] 2
Edward Q.
Bridges
+.IP \[bu] 2
+Vasiliy Tolstov
+.IP \[bu] 2
+Harshavardhana
+.IP \[bu] 2
+sainaen
+.IP \[bu] 2
+gdm85
+.IP \[bu] 2
+Yaroslav Halchenko
.SH Contact the rclone project
.SS Forum
.PP
diff --git a/snapcraft.yaml b/snapcraft.yaml
index 9a7548b69..c26fa4ff0 100644
--- a/snapcraft.yaml
+++ b/snapcraft.yaml
@@ -1,5 +1,5 @@
name: rclone
-version: 1.36
+version: 1.37
summary: rsync for cloud storage
description:
Rclone is a command line program to sync files to and from cloud storage providers such as