diff --git a/MANUAL.html b/MANUAL.html
index f5e77b908..008d79094 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -12,7 +12,7 @@
Rclone
@@ -23,6 +23,7 @@
Backblaze B2
Box
Ceph
+DigitalOcean Spaces
Dreamhost
Dropbox
FTP
@@ -34,13 +35,18 @@
Microsoft Azure Blob Storage
Microsoft OneDrive
Minio
+Nextcloud
OVH
Openstack Swift
Oracle Cloud Storage
+Owncloud
+pCloud
+put.io
QingStor
Rackspace Cloud Files
SFTP
Wasabi
+WebDAV
Yandex Disk
The local filesystem
@@ -54,6 +60,7 @@
Check mode to check for file hash equality
Can sync to and from network, eg two different cloud accounts
Optional encryption (Crypt)
+Optional cache (Cache)
Optional FUSE mount (rclone mount)
Links
@@ -74,6 +81,12 @@
See below for some expanded Linux / macOS instructions.
See the Usage section of the docs for how to use rclone, or run rclone -h
.
+Script installation
+To install rclone on Linux/macOS/BSD systems, run:
+curl https://rclone.org/install.sh | sudo bash
+For beta installation, run:
+curl https://rclone.org/install.sh | sudo bash -s beta
+Note that this script checks the version of rclone installed first and won't re-download if not needed.
Linux installation from precompiled binary
Fetch and unpack
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
@@ -127,7 +140,9 @@ sudo mv rclone /usr/local/bin/
Amazon S3
Backblaze B2
Box
+Cache
Crypt - to encrypt other remotes
+DigitalOcean Spaces
Dropbox
FTP
Google Cloud Storage
@@ -137,8 +152,10 @@ sudo mv rclone /usr/local/bin/
Microsoft Azure Blob Storage
Microsoft OneDrive
Openstack Swift / Rackspace Cloudfiles / Memset Memstore
+pCloud
QingStor
SFTP
+WebDAV
Yandex Disk
The local filesystem
@@ -156,14 +173,8 @@ rclone sync /local/path remote:path # syncs /local/path to the remote
rclone config
Enter an interactive configuration session.
Synopsis
-rclone config
enters an interactive configuration sessions where you can setup new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
-Additional functions:
-
-rclone config edit
– same as above
-rclone config file
– show path of configuration file in use
-rclone config show
– print (decrypted) config file
-
-rclone config [function] [flags]
+Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
+rclone config [flags]
Options
-h, --help help for config
rclone copy
@@ -205,10 +216,12 @@ destpath/sourcepath/two.txt
Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move source:path
into dest:path
. After this source:path
will no longer exist.
Otherwise for each file in source:path
selected by the filters (if any) this will move it into dest:path
. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path
then delete the original (if no errors on copy) in source:path
.
+If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
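+For example:
+rclone move source:path dest:path --delete-empty-src-dirs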
Important: Since this can cause data loss, test first with the --dry-run flag.
rclone move source:path dest:path [flags]
Options
- -h, --help help for move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for move
rclone delete
Remove the contents of path.
Synopsis
@@ -311,7 +324,7 @@ rclone --dry-run --min-size 100M delete remote:path
Options
-h, --help help for cleanup
rclone dedupe
-Interactively find duplicate files delete/rename them.
+Interactively find duplicate files and delete/rename them.
Synopsis
By default dedupe
interactively finds duplicate files and offers to delete all but one or rename them to be different. Only useful with Google Drive which can have duplicate file names.
In the first pass it will merge directories with the same name. It will do this iteratively until all the identical directories have been merged.
@@ -382,9 +395,16 @@ two-3.txt: renamed from: two.txt
rclone authorize [flags]
Options
-h, --help help for authorize
+rclone cachestats
+Print cache stats for a remote
+Synopsis
+Print cache stats for a remote in JSON format
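+For example (the remote name is illustrative):
+rclone cachestats test-cache: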
+rclone cachestats source: [flags]
+Options
+ -h, --help help for cachestats
rclone cat
Concatenates any files and sends them to stdout.
-Synopsis
+Synopsis
rclone cat sends any files to standard output.
You can use it like this to output a single file
rclone cat remote:path/to/file
@@ -394,16 +414,85 @@ two-3.txt: renamed from: two.txt
rclone --include "*.txt" cat remote:path/to/dir
Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
rclone cat remote:path [flags]
-Options
+Options
--count int Only print N characters. (default -1)
--discard Discard the output instead of printing.
--head int Only print the first N characters.
-h, --help help for cat
--offset int Start printing at offset N (or from end if -ve).
--tail int Only print the last N characters.
+rclone config create
+Create a new remote with name, type and options.
+Synopsis
+Create a new remote of <name> with <type> and options. The options should be passed in pairs of <key> <value>.
+For example, to make a swift remote of name myremote using auto config you would do:
+rclone config create myremote swift env_auth true
+rclone config create <name> <type> [<key> <value>]* [flags]
+Options
+ -h, --help help for create
+rclone config delete
+Delete an existing remote <name>.
+Synopsis
+Delete an existing remote <name>.
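+For example, to delete the remote named myremote:
+rclone config delete myremote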
+rclone config delete <name> [flags]
+Options
+ -h, --help help for delete
+rclone config dump
+Dump the config file as JSON.
+Synopsis
+Dump the config file as JSON.
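+The output might look something like this (a sketch - keys and values depend on your config):
+{
+    "myremote": {
+        "env_auth": "true",
+        "type": "swift"
+    }
+}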
+rclone config dump [flags]
+Options
+ -h, --help help for dump
+rclone config edit
+Enter an interactive configuration session.
+Synopsis
+Enter an interactive configuration session where you can set up new remotes and manage existing ones. You may also set or remove a password to protect your configuration.
+rclone config edit [flags]
+Options
+ -h, --help help for edit
+rclone config file
+Show path of configuration file in use.
+Synopsis
+Show path of configuration file in use.
+rclone config file [flags]
+Options
+ -h, --help help for file
+rclone config password
+Update password in an existing remote.
+Synopsis
+Update an existing remote's password. The password should be passed in pairs of <key> <value>.
+For example, to set the password of a remote named myremote you would do:
+rclone config password myremote fieldname mypassword
+rclone config password <name> [<key> <value>]+ [flags]
+Options
+ -h, --help help for password
+rclone config providers
+List in JSON format all the providers and options.
+Synopsis
+List in JSON format all the providers and options.
+rclone config providers [flags]
+Options
+ -h, --help help for providers
+rclone config show
+Print (decrypted) config file, or the config for a single remote.
+Synopsis
+Print (decrypted) config file, or the config for a single remote.
+rclone config show [<remote>] [flags]
+Options
+ -h, --help help for show
+rclone config update
+Update options in an existing remote.
+Synopsis
+Update an existing remote's options. The options should be passed in pairs of <key> <value>.
+For example, to update the env_auth field of a remote named myremote you would do:
+rclone config update myremote swift env_auth true
+rclone config update <name> [<key> <value>]+ [flags]
+Options
+ -h, --help help for update
rclone copyto
Copy files from source to dest, skipping already copied
-Synopsis
+Synopsis
If source:path is a file or directory then it copies it to a file or directory named dest:path.
This can be used to upload single files to other than their current name. If the source is a directory then it acts exactly like the copy command.
So
@@ -417,11 +506,11 @@ if src is directory
see copy command for full details
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
rclone copyto source:path dest:path [flags]
-Options
+Options
-h, --help help for copyto
rclone cryptcheck
Cryptcheck checks the integrity of a crypted remote.
-Synopsis
+Synopsis
rclone cryptcheck checks a remote against a crypted remote. This is the equivalent of running rclone check, but able to check the checksums of the crypted remote.
For it to work the underlying remote of the cryptedremote must support some kind of checksum.
It works by reading the nonce from each file on the cryptedremote: and using that to encrypt each file on the remote:. It then checks the checksum of the underlying file on the cryptedremote: against the checksum of the file it has just encrypted.
@@ -431,33 +520,33 @@ if src is directory
rclone cryptcheck remote:path encryptedremote:path
After it has run it will log the status of the encryptedremote:.
rclone cryptcheck remote:path cryptedremote:path [flags]
-Options
+Options
-h, --help help for cryptcheck
rclone cryptdecode
Cryptdecode returns unencrypted file names.
-Synopsis
+Synopsis
rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
rclone cryptdecode encryptedremote: encryptedfilename [flags]
-Options
+Options
-h, --help help for cryptdecode
rclone dbhashsum
-Produces a Dropbbox hash file for all the objects in the path.
-Synopsis
+Produces a Dropbox hash file for all the objects in the path.
+Synopsis
Produces a Dropbox hash file for all the objects in the path. The hashes are calculated according to Dropbox content hash rules. The output is in the same format as md5sum and sha1sum.
rclone dbhashsum remote:path [flags]
-Options
+Options
-h, --help help for dbhashsum
rclone genautocomplete
Output completion script for a given shell.
-Synopsis
+Synopsis
Generates a shell completion script for rclone. Run with --help to list the supported shells.
-Options
+Options
-h, --help help for genautocomplete
rclone genautocomplete bash
Output bash completion script for rclone.
-Synopsis
+Synopsis
Generates a bash shell autocompletion script for rclone.
This writes to /etc/bash_completion.d/rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete bash
@@ -465,11 +554,11 @@ if src is directory
. /etc/bash_completion
If you supply a command line argument the script will be written there.
rclone genautocomplete bash [output_file] [flags]
-Options
+Options
-h, --help help for bash
rclone genautocomplete zsh
Output zsh completion script for rclone.
-Synopsis
+Synopsis
Generates a zsh autocompletion script for rclone.
This writes to /usr/share/zsh/vendor-completions/_rclone by default so will probably need to be run with sudo or as root, eg
sudo rclone genautocomplete zsh
@@ -477,27 +566,27 @@ if src is directory
autoload -U compinit && compinit
If you supply a command line argument the script will be written there.
rclone genautocomplete zsh [output_file] [flags]
-Options
+Options
-h, --help help for zsh
rclone gendocs
Output markdown docs for rclone to the directory supplied.
-Synopsis
+Synopsis
This produces markdown docs for the rclone commands to the directory supplied. These are in a format suitable for hugo to render into the rclone.org website.
rclone gendocs output_directory [flags]
-Options
+Options
-h, --help help for gendocs
rclone listremotes
List all the remotes in the config file.
-Synopsis
+Synopsis
rclone listremotes lists all the available remotes from the config file.
When used with the -l flag it lists the types too.
rclone listremotes [flags]
-Options
+Options
-h, --help help for listremotes
-l, --long Show the type as well as names.
rclone lsjson
List directories and objects in the path in JSON format.
-Synopsis
+Synopsis
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "IsDir" : false, "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Path" : "full/path/goes/here/file.txt", "Size" : 6 }
@@ -506,14 +595,14 @@ if src is directory
The time is in RFC3339 format with nanosecond precision.
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
rclone lsjson remote:path [flags]
-Options
+Options
--hash Include hashes in the output (may take longer).
-h, --help help for lsjson
--no-modtime Don't read the modification time (can speed things up).
-R, --recursive Recurse into the listing.
rclone mount
Mount the remote as a mountpoint. EXPERIMENTAL
-Synopsis
+Synopsis
rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This is EXPERIMENTAL - use with care.
First set up your remote using rclone config
. Check it works with rclone ls
etc.
@@ -541,34 +630,82 @@ umount /path/to/local/mount
File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. This might happen in the future, but for the moment rclone mount won't do that, so will be less reliable than the rclone command.
Filters
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
+systemd
+When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
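+A minimal unit sketch (the remote name and mountpoint are illustrative):
+[Unit]
+Description=rclone mount of myremote
+
+[Service]
+Type=notify
+ExecStart=/usr/bin/rclone mount myremote: /mnt/myremote
+ExecStop=/bin/fusermount -u /mnt/myremote
+
+[Install]
+WantedBy=default.target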
Directory Cache
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+File Caching
+NB File caching is EXPERIMENTAL - use with care!
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
+--vfs-cache-dir string Directory rclone will use for caching.
+--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+--vfs-cache-mode off
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+In this mode files opened for read only are still read directly from the remote; write only and read/write files are buffered to disk first.
+This mode should support all normal file system operations.
+If an upload fails it will be retried up to --low-level-retries times.
+--vfs-cache-mode full
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
+In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
+This mode should support all normal file system operations.
+If an upload or download fails it will be retried up to --low-level-retries times.
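+For example, to mount with file caching that supports normal read/write use (the mountpoint is illustrative):
+rclone mount remote:path /mnt/remote --vfs-cache-mode writes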
rclone mount remote:path /path/to/mountpoint [flags]
-Options
- --allow-non-empty Allow mounting over a non-empty directory.
- --allow-other Allow access to other users.
- --allow-root Allow access to root user.
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for mount
- --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem.
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
+Options
+ --allow-non-empty Allow mounting over a non-empty directory.
+ --allow-other Allow access to other users.
+ --allow-root Allow access to root user.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for mount
+ --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
rclone moveto
Move file or directory from source to dest.
-Synopsis
+Synopsis
If source:path is a file or directory then it moves it to a file or directory named dest:path.
This can be used to rename files or upload single files to other than their existing name. If the source is a directory then it acts exactly like the move command.
So
@@ -583,11 +720,11 @@ if src is directory
This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
Important: Since this can cause data loss, test first with the --dry-run flag.
rclone moveto source:path dest:path [flags]
-Options
+Options
-h, --help help for moveto
rclone ncdu
Explore a remote with a text based user interface.
-Synopsis
+Synopsis
This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
Here are the keys - press '?' to toggle the help on and off
@@ -601,18 +738,18 @@ if src is directory
q/ESC/c-C to quit
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment, most importantly deleting files, but is useful as it stands.
rclone ncdu remote:path [flags]
-Options
+Options
-h, --help help for ncdu
rclone obscure
Obscure password for use in the rclone.conf
-Synopsis
+Synopsis
Obscure password for use in the rclone.conf
rclone obscure password [flags]
-Options
+Options
-h, --help help for obscure
rclone rcat
Copies standard input to file on remote.
-Synopsis
+Synopsis
rclone rcat reads from standard input (stdin) and copies it to a single remote file.
echo "hello world" | rclone rcat remote:path/to/file
ffmpeg - | rclone rcat --checksum remote:path/to/file
@@ -620,19 +757,178 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff
. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see there. Generally speaking, setting this cutoff too high will decrease your performance.
Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
rclone rcat remote:path [flags]
-Options
+Options
-h, --help help for rcat
rclone rmdirs
Remove empty directories under the path.
-Synopsis
+Synopsis
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in.
+If you supply the --leave-root flag, it will not remove the root directory.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
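+For example, to remove empty directories but keep the root:
+rclone rmdirs remote:path --leave-root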
rclone rmdirs remote:path [flags]
-Options
- -h, --help help for rmdirs
+Options
+ -h, --help help for rmdirs
+ --leave-root Do not remove root directory if empty
+rclone serve
+Serve a remote over a protocol.
+Synopsis
+rclone serve is used to serve a remote over a given protocol. This command requires the use of a subcommand to specify the protocol, eg
+rclone serve http remote:
+Each subcommand has its own options which you can see in their help.
+rclone serve <protocol> [opts] <remote> [flags]
+Options
+ -h, --help help for serve
+rclone serve http
+Serve the remote over HTTP.
+Synopsis
+rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost.
+You can use the filter flags (eg --include, --exclude) to control what is served.
+The server will log errors. Use -v to see access logs.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
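+For example, to serve remote:path on all interfaces on port 8080:
+rclone serve http remote:path --addr :8080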
+Directory Cache
+Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
+Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
+kill -SIGHUP $(pidof rclone)
+File Caching
+NB File caching is EXPERIMENTAL - use with care!
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
+--vfs-cache-dir string Directory rclone will use for caching.
+--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+--vfs-cache-mode off
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+In this mode files opened for read only are still read directly from the remote; write only and read/write files are buffered to disk first.
+This mode should support all normal file system operations.
+If an upload fails it will be retried up to --low-level-retries times.
+--vfs-cache-mode full
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
+In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
+This mode should support all normal file system operations.
+If an upload or download fails it will be retried up to --low-level-retries times.
+rclone serve http remote:path [flags]
+Options
+ --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for http
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+rclone serve webdav
+Serve remote:path over webdav.
+Synopsis
+rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client or you can make a remote of type webdav to read and write it.
+NB at the moment each directory listing reads the start of each file which is undesirable: see https://github.com/golang/go/issues/22577
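+For example, to serve remote:path on the default localhost:8081:
+rclone serve webdav remote:path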
+Directory Cache
+Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires.
+Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
+kill -SIGHUP $(pidof rclone)
+File Caching
+NB File caching is EXPERIMENTAL - use with care!
+These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
+--vfs-cache-dir string Directory rclone will use for caching.
+--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+--vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+--vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
+The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+--vfs-cache-mode off
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses minimal disk space.
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+In this mode files opened for read only are still read directly from the remote; write only and read/write files are buffered to disk first.
+This mode should support all normal file system operations.
+If an upload fails it will be retried up to --low-level-retries times.
+--vfs-cache-mode full
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
+This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
+In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
+This mode should support all normal file system operations.
+If an upload or download fails it will be retried up to --low-level-retries times.
+rclone serve webdav remote:path [flags]
+Options
+ --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for webdav
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+rclone touch
+Create new file or change file modification time.
+Synopsis
+Create new file or change file modification time.
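+For example (the path is illustrative):
+rclone touch remote:path/file.txt
+rclone touch -t 2006-01-02T15:04:05 remote:path/file.txt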
+rclone touch remote:path [flags]
+Options
+ -h, --help help for touch
+ -C, --no-create Do not create the file if it does not exist.
+ -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
rclone tree
List the contents of the remote in a tree like fashion.
-Synopsis
+Synopsis
rclone tree lists the contents of a remote in a similar way to the unix tree command.
For example
$ rclone tree remote:path
@@ -648,7 +944,7 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.
The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.
rclone tree remote:path [flags]
-Options
+Options
-a, --all All files are listed (list . files too).
-C, --color Turn colorization on always.
-d, --dirs-only List directories only.
@@ -712,7 +1008,7 @@ ffmpeg - | rclone rcat --checksum remote:path/to/file
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
rclone sync /path/to/files remote:current-backup
-Options
+Options
Rclone has a number of options to control its behaviour.
Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However, a suffix of b
for bytes, k
for kBytes, M
for MBytes and G
for GBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
@@ -725,7 +1021,7 @@ rclone sync /path/to/files remote:current-backup
will sync /path/to/local
to remote:current
, but for any files which would have been updated or deleted will be stored in remote:old
.
If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir
to store the old files, or you might want to pass --suffix
with today's date.
--bind string
-Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resoves to more than one IP address it will give an error.
+Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.
--bwlimit=BANDWIDTH_SPEC
This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.
Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0
which means to not limit bandwidth.
@@ -788,10 +1084,12 @@ rclone sync /path/to/files remote:current-backup
With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified
.
Note that only commands which transfer files (e.g. sync
, copy
, move
) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete
, purge
) or implicitly (e.g. sync
, move
). Use copy --immutable
if it is desired to avoid deletion as well as modification.
This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
+--leave-root
+During rmdirs it will not remove the root directory, even if it's empty.
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
--log-level LEVEL
-This sets the log level for rclone. The default log level is INFO
.
+This sets the log level for rclone. The default log level is NOTICE
.
DEBUG
is equivalent to -vv
. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
INFO
is equivalent to -v
. It outputs information about each transfer and prints stats once a minute by default.
NOTICE
is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
@@ -946,16 +1244,22 @@ export RCLONE_CONFIG_PASS
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option
- see the docs for the remote in question.
--cpuprofile=FILE
Write CPU profile to file. This can be analysed with go tool pprof
.
---dump-auth
-Dump HTTP headers - will contain sensitive info such as Authorization:
headers - use --dump-headers
to dump without Authorization:
headers. Can be very verbose. Useful for debugging only.
---dump-bodies
+--dump flag,flag,flag
+The --dump
flag takes a comma separated list of flags to dump info about. These are:
+--dump headers
+Dump HTTP headers with Authorization:
lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.
+Use --dump auth
if you do want the Authorization:
headers.
+--dump bodies
Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.
Note that the bodies are buffered in memory so don't use this for enormous files.
---dump-filters
+--dump requests
+Like --dump bodies
but dumps the request bodies and the response headers. Useful for debugging download problems.
+--dump responses
+Like --dump bodies
but dumps the response bodies and the request headers. Useful for debugging upload problems.
+--dump auth
+Dump HTTP headers - will contain sensitive info such as Authorization:
headers - use --dump headers
to dump without Authorization:
headers. Can be very verbose. Useful for debugging only.
+--dump filters
Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.
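+For example, to dump both the HTTP headers and the active filters while copying (paths are illustrative):
+rclone copy --dump headers,filters remote:path /tmp/dir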
-
-Dump HTTP headers with Authorization:
lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.
-Use --dump-auth
if you do want the Authorization:
headers.
--memprofile=FILE
Write memory profile to file. This can be analysed with go tool pprof
.
--no-check-certificate=true/false
@@ -982,7 +1286,7 @@ export RCLONE_CONFIG_PASS
--max-size
--min-age
--max-age
---dump-filters
+--dump filters
See the filtering section.
Logging
@@ -1000,9 +1304,20 @@ export RCLONE_CONFIG_PASS
If any errors occur during the command execution, rclone will exit with a non-zero exit code. This allows scripts to detect when rclone operations have failed.
During the startup phase, rclone will exit immediately if an error is detected in the configuration. There will always be a log message immediately before exiting.
When rclone is running it will accumulate errors as it goes along, and only exit with a non-zero exit code if (after retries) there were still failed transfers. For every error counted there will be a high priority log message (visible with -q
) showing the message and which file caused the problem. A high priority message is also shown when starting a retry so the user can see that any previous error messages may not be valid after the retry. If rclone has done a retry it will log a high priority message if the retry was successful.
+List of exit codes
+
+0 - success
+1 - Syntax or usage error
+2 - Error not otherwise categorised
+3 - Directory not found
+4 - File not found
+5 - Temporary error (one that more retries might fix) (Retry errors)
+6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
+7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
+
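+A script can branch on these codes, for example (a sketch):
+rclone sync /local/path remote:backup
+status=$?
+case $status in
+  0) echo "sync succeeded" ;;
+  5) echo "temporary error, consider retrying" ;;
+  *) echo "sync failed with code $status" ;;
+esac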
Environment Variables
Rclone can be configured entirely using environment variables. These can be used to set defaults for options or config file entries.
-Options
+Options
Every option in rclone can have its default set by environment variable.
To find the name of the environment variable, first, take the long option name, strip the leading --
, change -
to _
, make upper case and prepend RCLONE_
.
For example, to always set --stats 5s
, set the environment variable RCLONE_STATS=5s
. If you set stats on the command line this will override the environment variable setting.
@@ -1010,7 +1325,7 @@ export RCLONE_CONFIG_PASS
The same parser is used for the options and the environment variables so they take exactly the same form.
Config file
You can set defaults for values in the config file on an individual remote basis. If you want to use this feature, you will need to discover the name of the config items that you want. The easiest way is to run through rclone config
by hand, then look in the config file to see what the values are (the config file can be found by looking at the help for --config
in rclone help
).
-To find the name of the environment variable, you need to set, take RCLONE_
+ name of remote + _
+ name of config file option and make it all uppercase.
+To find the name of the environment variable you need to set, take RCLONE_CONFIG_
+ name of remote + _
+ name of config file option and make it all uppercase.
For example, to configure an S3 remote named mys3:
without a config file (using unix ways of setting environment variables):
$ export RCLONE_CONFIG_MYS3_TYPE=s3
$ export RCLONE_CONFIG_MYS3_ACCESS_KEY_ID=XXX
@@ -1111,7 +1426,7 @@ y/e/d>
h[ae]llo - matches "hello"
- matches "hallo"
- doesn't match "hullo"
-A {
and }
define a choice between elements. It should contain a comma seperated list of patterns, any of which might match. These patterns can contain wildcards.
+A {
and }
define a choice between elements. It should contain a comma separated list of patterns, any of which might match. These patterns can contain wildcards.
{one,two}_potato - matches "one_potato"
- matches "two_potato"
- doesn't match "three_potato"
@@ -1166,6 +1481,7 @@ y/e/d>
--filter
--filter-from
+Important You should not use --include*
together with --exclude*
. It may produce different results than you expected. If that happens, use --filter*
instead.
Note that all the options of the same type are processed together in the order above, regardless of what order they were placed on the command line.
So all --include
options are processed first in the order they appeared on the command line, then all --include-from
options etc.
To mix up the order includes and excludes, the --filter
flag can be used.
@@ -1274,7 +1590,7 @@ user2/stuff
rclone --min-size 50k --delete-excluded sync A: B:
This would delete all files on B
which are less than 50 kBytes as these are now excluded from the sync.
Always test first with --dry-run
and -v
before using this flag.
---dump-filters
- dump the filters to the output
+--dump filters
- dump the filters to the output
This dumps the defined filters to the output as regular expressions.
Useful for debugging.
@@ -1289,8 +1605,18 @@ user2/stuff
+Exclude directory based on a file
+It is possible to exclude a directory based on a file present in that directory. The filename should be specified using the --exclude-if-present
flag. This flag has priority over the other filtering flags.
+Imagine you have the following directory structure:
+dir1/file1
+dir1/dir2/file2
+dir1/dir2/dir3/file3
+dir1/dir2/dir3/.ignore
+You can exclude dir3
from sync by running the following command:
+rclone sync --exclude-if-present .ignore dir1 remote:backup
+Currently only one filename is supported, i.e. --exclude-if-present
should not be used multiple times.
Overview of cloud storage systems
-Each cloud storage system is slighly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
+Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
Features
Here is an overview of the major features of each cloud storage system.
@@ -1410,6 +1736,14 @@ user2/stuff
R/W |
+pCloud | MD5, SHA1 | Yes | No | No | W |
+
QingStor |
MD5 |
No |
@@ -1417,7 +1751,7 @@ user2/stuff
No |
R/W |
-
+
SFTP |
MD5, SHA1 ‡ |
Yes |
@@ -1425,6 +1759,14 @@ user2/stuff
No |
- |
+
+WebDAV | - | Yes †† | Depends | No | - |
+
Yandex Disk |
MD5 |
@@ -1448,6 +1790,7 @@ user2/stuff
To use the verify checksums when transferring between cloud storage systems they must support a common hash type.
† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.
‡ SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH.
+†† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
ModTime
The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum
flag.
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
@@ -1616,6 +1959,16 @@ user2/stuff
Yes |
+pCloud | Yes | Yes | Yes | Yes | Yes | No | No |
+
QingStor |
No |
Yes |
@@ -1625,7 +1978,7 @@ user2/stuff
Yes |
No |
-
+
SFTP |
No |
No |
@@ -1635,6 +1988,16 @@ user2/stuff
No |
Yes |
+
+WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ |
+
Yandex Disk |
Yes |
@@ -1660,6 +2023,7 @@ user2/stuff
Purge
This deletes a directory quicker than just deleting all the files in the directory.
† Note Swift and Hubic implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
+‡ StreamUpload is not supported with Nextcloud
Copy
Used when copying an object to and from the same remote. This known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn't support Move
directly.
If the server doesn't support Copy
directly then for copy operations the file is downloaded then re-uploaded.
@@ -1828,7 +2192,7 @@ Choose a number from below, or type in your own value
13 / Yandex Disk
\ "yandex"
Storage> 2
-Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
@@ -1996,6 +2360,7 @@ y/e/d> y
Secret Access Key: AWS_SECRET_ACCESS_KEY
or AWS_SECRET_KEY
Session Token: AWS_SESSION_TOKEN
+Running rclone
in an ECS task with an IAM role
Running rclone
on an EC2 instance with an IAM role
If none of these option actually end up providing rclone
with AWS credentials then S3 interaction will be non-authenticated (see below).
@@ -2075,7 +2440,7 @@ Choose a number from below
10) s3
11) yandex
type> 10
-Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
* Enter AWS credentials in the next step
1) false
@@ -2111,6 +2476,35 @@ region = other-v2-signature
],
}
Because this is a json dump, it is encoding the /
as \/
, so if you use the secret key as xxxxxx/xxxx
it will work fine.
+DigitalOcean Spaces
+Spaces is an S3-interoperable object storage service from cloud provider DigitalOcean.
+To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "Applications & API" page of the DigitalOcean control panel. They will be needed when prompted by rclone config
for your access_key_id
and secret_access_key
.
+When prompted for a region
or location_constraint
, press enter to use the default value. The region must be included in the endpoint
setting (e.g. nyc3.digitaloceanspaces.com
). The default values can be used for other settings.
+Going through the whole process of creating a new remote by running rclone config
, each prompt should be answered as shown below:
+Storage> 2
+env_auth> 1
+access_key_id> YOUR_ACCESS_KEY
+secret_access_key> YOUR_SECRET_KEY
+region>
+endpoint> nyc3.digitaloceanspaces.com
+location_constraint>
+acl>
+storage_class>
+The resulting configuration file should look like:
+[spaces]
+type = s3
+env_auth = false
+access_key_id = YOUR_ACCESS_KEY
+secret_access_key = YOUR_SECRET_KEY
+region =
+endpoint = nyc3.digitaloceanspaces.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
+Once configured, you can create a new Space and begin copying files. For example:
+rclone mkdir spaces:my-new-space
+rclone copy /path/to/files spaces:my-new-space
Minio
Minio is an object storage server built for cloud application developers and devops.
It is very easy to install and provides an S3 compatible server which can be used by rclone.
@@ -2171,7 +2565,7 @@ Choose a number from below, or type in your own value
\ "s3"
[snip]
Storage> s3
-Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
@@ -2326,9 +2720,12 @@ y/e/d> y
Modified times are used in syncing and are fully supported except in the case of updating a modification time on an existing object. In this case the object will be uploaded again as B2 doesn't have an API method to set the modification time independent of doing an upload.
SHA1 checksums
The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.
-Large files which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1
as recommended by Backblaze.
+Large files (bigger than the limit in --b2-upload-cutoff
) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1
as recommended by Backblaze.
+For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.
+Sources which don't support SHA1, in particular crypt
, will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).
+Files sizes below --b2-upload-cutoff
will always have an SHA1 regardless of the source.
Transfers
-Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equiped laptop the optimum setting is about --transfers 32
though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4
is definitely too low for Backblaze B2 though.
+Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32
though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4
is definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers
of these in use at any moment, so this sets the upper limit on the memory used.
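For example, a copy tuned along these lines might look like this (a sketch; the bucket name is illustrative):
rclone copy --transfers 32 /path/to/local b2:my-bucket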
Versions
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
@@ -2336,7 +2733,7 @@ y/e/d> y
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket
command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff
.
When you purge
a bucket, the current and the old versions will be deleted then the bucket will be deleted.
However delete
will cause the current versions of the files to become hidden old versions.
-Here is a session showing the listing and and retreival of an old version followed by a cleanup
of the old versions.
+Here is a session showing the listing and retrieval of an old version followed by a cleanup
of the old versions.
Show current version and all the versions with --b2-versions
flag.
$ rclone -q ls b2:cleanup-test
9 one.txt
@@ -2346,7 +2743,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
8 one-v2016-07-04-141032-000.txt
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
-Retreive an old verson
+Retrieve an old version
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
$ ls -l /tmp/one-v2016-07-04-141003-000.txt
@@ -2375,8 +2772,6 @@ $ rclone -q --b2-versions ls b2:cleanup-test
/b2api/v1/b2_get_upload_part_url
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
-B2 with crypt
-When using B2 with crypt
files are encrypted into a temporary location and streamed from there. This is required to calculate the encrypted file's checksum before beginning the upload. On Windows the %TMPDIR% environment variable is used as the temporary location. If the file requires chunking, both the chunking and encryption will take place in memory.
Specific options
Here are the command line options specific to this cloud storage system.
--b2-chunk-size value=SIZE
@@ -2575,6 +2970,177 @@ y/e/d> y
Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Box file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
.
Box only supports filenames up to 255 characters in length.
+Cache (BETA)
+The cache
remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount
.
+To get started you just need to have an existing remote which can be configured with cache
.
+Here is an example of how to make a remote called test-cache
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> test-cache
+Type of storage to configure.
+Choose a number from below, or type in your own value
+...
+ 5 / Cache a remote
+ \ "cache"
+...
+Storage> 5
+Remote to cache.
+Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
+remote> local:/test
+Optional: The URL of the Plex server
+plex_url> http://127.0.0.1:32400
+Optional: The username of the Plex user
+plex_username> dummyusername
+Optional: The password of the Plex user
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+The size of a chunk. Lower value good for slow connections but can affect seamless reading.
+Default: 5M
+Choose a number from below, or type in your own value
+ 1 / 1MB
+ \ "1m"
+ 2 / 5 MB
+ \ "5M"
+ 3 / 10 MB
+ \ "10M"
+chunk_size> 2
+How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
+Accepted units are: "s", "m", "h".
+Default: 5m
+Choose a number from below, or type in your own value
+ 1 / 1 hour
+ \ "1h"
+ 2 / 24 hours
+ \ "24h"
+ 3 / 48 hours
+ \ "48h"
+info_age> 3
+The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
+Default: 10G
+Choose a number from below, or type in your own value
+ 1 / 500 MB
+ \ "500M"
+ 2 / 1 GB
+ \ "1G"
+ 3 / 10 GB
+ \ "10G"
+chunk_total_size> 3
+Remote config
+--------------------
+[test-cache]
+remote = local:/test
+plex_url = http://127.0.0.1:32400
+plex_username = dummyusername
+plex_password = *** ENCRYPTED ***
+chunk_size = 5M
+info_age = 48h
+chunk_total_size = 10G
+You can then use it like this,
+List directories in top level of your drive
+rclone lsd test-cache:
+List all the files in your drive
+rclone ls test-cache:
+To start a cached mount
+rclone mount --allow-other test-cache: /var/tmp/test-cache
+Write Support
+Writes are supported through cache
. One caveat is that a mounted cache remote does not add any retry or fallback mechanism to the upload operation. This will depend on the implementation of the wrapped remote.
+One special case is covered by cache-writes
, which, when enabled, caches the file data at the same time as the upload, making the file available from the cache store as soon as the upload is finished.
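+For example, a cached mount with write caching enabled might be started like this (a sketch reusing the test-cache remote from above):
+rclone mount --allow-other --cache-writes test-cache: /var/tmp/test-cache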
+Read Features
+Multiple connections
+To counter the high latency between a local PC running rclone and the cloud provider, the cache remote can split a read into multiple requests for smaller file chunks and combine them locally, so the data is usually ready before the reader needs it.
+This is similar to buffering when media files are played online. Rclone will stay around the current read position but always try its best to stay ahead and prepare the data in advance.
+Plex Integration
+There is a direct integration with Plex which allows cache to detect during reading if the file is in playback or not. This helps cache adapt how it queries the cloud provider depending on what is needed.
+Scans will use a minimum number of workers (1), while during a confirmed playback cache will deploy the configured number of workers.
+This integration opens the doorway to additional performance improvements which will be explored in the near future.
+Note: If Plex options are not configured, cache
will function with its configured options without adapting any of its settings.
+How to enable? Run rclone config
 and add all the Plex options (endpoint, username and password) to your remote; it will then be enabled automatically.
+Affected settings: - cache-workers
: configured value during a confirmed playback, or 1 at all other times.
+Known issues
+Windows support - Experimental
+There are a couple of issues with Windows mount
 functionality that still require some investigation. It should be considered experimental for now, while fixes for this OS come in.
+Most of the issues seem to be related to the differences between filesystems on Linux flavors and Windows, as cache is heavily dependent on them.
+Any reports or feedback on how cache behaves on this OS is greatly appreciated.
+
+- https://github.com/ncw/rclone/issues/1935
+- https://github.com/ncw/rclone/issues/1907
+- https://github.com/ncw/rclone/issues/1834
+
+Risk of throttling
+Future iterations of the cache backend will make use of the polling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures.
+A couple of enhancements are being tracked to add these, but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries for very large mounts.
+Some recommendations: - don't use a very small interval for entry information (--cache-info-age
) - while writes aren't yet optimised, you can still write through cache
, which gives you the advantage of adding the file to the cache at the same time if configured to do so.
+Future enhancements:
+
+- https://github.com/ncw/rclone/issues/1937
+- https://github.com/ncw/rclone/issues/1936
+
+cache and crypt
+One common scenario is to keep your data encrypted in the cloud provider using the crypt
remote. crypt
uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.
+There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache
+During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider, which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
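+As a sketch, the recommended ordering might look like this in the config file; the remote names and the cloud backend are illustrative only, and other required fields are omitted:
+[mycloud]
+type = drive
+[mycache]
+type = cache
+remote = mycloud:encrypted
+[mycrypt]
+type = crypt
+remote = mycache:
+You would then point your commands at mycrypt: so reads pass through the cache before being decrypted.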
+Specific options
+Here are the command line options specific to this cloud storage system.
+--cache-chunk-path=PATH
+Path to where partial file data (chunks) is stored locally. The remote name is appended to the final path.
+This config follows the --cache-db-path
. If you specify a custom location for --cache-db-path
and don't specify one for --cache-chunk-path
then --cache-chunk-path
will use the same path as --cache-db-path
.
+Default: /cache-backend/ Example: /.cache/cache-backend/test-cache
+--cache-db-path=PATH
+Path to where the file structure metadata (DB) is stored locally. The remote name is used as the DB file name.
+Default: /cache-backend/ Example: /.cache/cache-backend/test-cache
+--cache-db-purge
+Flag to clear all the cached data for this remote before starting.
+Default: not set
+--cache-chunk-size=SIZE
+The size of a chunk (partial file data). Use lower numbers for slower connections.
+Default: 5M
+--cache-total-chunk-size=SIZE
+The total size that the chunks can take up on the local disk. If cache
 exceeds this value then it will start to delete the oldest chunks until it goes under this value.
+Default: 10G
+--cache-chunk-clean-interval=DURATION
+How often should cache
perform cleanups of the chunk storage. The default value should be ok for most people. If you find that cache
goes over cache-total-chunk-size
too often then try to lower this value to force it to perform cleanups more often.
+Default: 1m
+--cache-info-age=DURATION
+How long to keep file structure information (directory listings, file size, mod times etc) locally.
+If all write operations are done through cache
then you can safely make this value very large as the cache store will also be updated in real time.
+Default: 6h
+--cache-read-retries=RETRIES
+How many times to retry a read from a cache storage.
+Since reading from a cache
 stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache
 isn't able to provide file data anymore.
+For really slow connections, increase this to a point where the stream is able to provide data, but your experience will be very stuttery.
+Default: 10
+--cache-workers=WORKERS
+How many workers should run in parallel to download chunks.
+Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects, like the cloud provider API limits and the stress on the hardware that rclone runs on, but it also means that streams will be more fluid and data will be available much faster to readers.
+Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use. Default: 4
+--cache-chunk-no-memory
+By default, cache
will keep file data during streaming in RAM as well to provide it to readers as fast as possible.
+This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like cache-chunk-size
and cache-workers
this footprint can increase if there are parallel streams too (multiple files being read at the same time).
+If the hardware permits it, leave this feature enabled to get better overall streaming performance; set this flag to disable it if RAM is scarce on the local machine.
+Default: not set
+--cache-rps=NUMBER
+This setting places a hard limit on the number of requests per second that cache
 will be doing to the cloud provider remote, and it will try to respect that value by inserting waits between reads.
+If you find that you're getting banned or limited on the cloud provider through cache, and know that a smaller number of requests per second will allow you to work with it, then you can use this setting for that.
+A good balance of all the other settings should make this setting unnecessary, but it is available for more special cases.
+NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.
+Default: disabled
+--cache-writes
+If you need to read files immediately after you upload them through cache
you can enable this flag to have their data stored in the cache store at the same time during upload.
+Default: not set
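+As an illustration, several of these flags can be combined on a single mount; the values below are examples only, not recommendations:
+rclone mount --allow-other --cache-workers 8 --cache-chunk-size 10M --cache-info-age 48h test-cache: /var/tmp/test-cache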
Crypt
The crypt
remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote which will encrypt and decrypt from that directory which might be useful for encrypting onto a USB stick for example.
@@ -2628,6 +3194,13 @@ Choose a number from below, or type in your own value
3 / Very simple filename obfuscation.
\ "obfuscate"
filename_encryption> 2
+Option to either encrypt directory names or leave them intact.
+Choose a number from below, or type in your own value
+ 1 / Encrypt directory names.
+ \ "true"
+ 2 / Don't encrypt directory names, leave them intact.
+ \ "false"
+directory_name_encryption> 1
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
@@ -2725,7 +3298,7 @@ $ rclone -q ls secret:
file names encrypted
file names can't be as long (~156 characters)
can use sub paths and copy single files
-directory structure visibile
+directory structure visible
identical files names will have identical uploaded names
can use shortcuts to shorten the directory recursion
@@ -2737,16 +3310,22 @@ $ rclone -q ls secret:
file names very lightly obfuscated
file names can be longer than standard encryption
can use sub paths and copy single files
-directory structure visibile
+directory structure visible
identical files names will have identical uploaded names
Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future which will address the long file name problem.
+Directory name encryption
+Crypt offers the option of encrypting dir names or leaving them intact. There are two options:
+True
+Encrypts the whole file path including directory names. Example: 1/12/123.txt
is encrypted to p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
+False
+Only encrypts file names, skips directory names. Example: 1/12/123.txt
is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
Modified time and hashes
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
-Specific options
+Specific options
Here are the command line options specific to this cloud storage system.
--crypt-show-mapping
If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.
@@ -2757,7 +3336,7 @@ $ rclone -q ls secret:
rclone sync
will check the checksums while copying
- you can use
rclone check
between the encrypted remotes
-- you don't decrypt and encrypt unecessarily
+- you don't decrypt and encrypt unnecessarily
For example, let's say you have your original remote at remote:
with the encrypted version at eremote:
with path remote:crypt
. You would then set up the new remote remote2:
and then the encrypted version eremote2:
with path remote2:crypt
using the same passwords as eremote:
.
To sync the two remotes you would do
@@ -2772,7 +3351,7 @@ $ rclone -q ls secret:
8 bytes magic string RCLONE\x00\x00
24 bytes Nonce (IV)
-The initial nonce is generated from the operating systems crypto strong random number genrator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is miniscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.
+The initial nonce is generated from the operating systems crypto strong random number generator. The nonce is incremented for each chunk read making sure each nonce is unique for each block written. The chance of a nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸ bytes) you would have a probability of approximately 2×10⁻³² of re-using a nonce.
Chunk
Each chunk will contain 64kB of data, except for the last one which may have less data. The data chunk is in standard NACL secretbox format. Secretbox uses XSalsa20 and Poly1305 to encrypt and authenticate messages.
Each chunk contains:
@@ -2799,7 +3378,7 @@ $ rclone -q ls secret:
File names are encrypted segment by segment - the path is broken up into /
separated strings and these are encrypted individually.
File segments are padded using using PKCS#7 to a multiple of 16 bytes before encryption.
They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
-This makes for determinstic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.
+This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.
This means that
- filenames with the same name will encrypt the same
@@ -2813,8 +3392,8 @@ $ rclone -q ls secret:
base32
is used rather than the more efficient base64
so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).
Key derivation
-Rclone uses scrypt
with parameters N=16384, r=8, p=1
with a an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
-scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection agains this you should always use a salt.
+Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
+scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
Dropbox
Paths are specified as remote:path
Dropbox paths may be as deep as required, eg remote:directory/subdirectory
.
@@ -2885,10 +3464,11 @@ y/e/d> y
Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only
or --checksum
flag to stop it.
Dropbox supports its own hash type which is checked for all transfers.
-Specific options
+Specific options
Here are the command line options specific to this cloud storage system.
--dropbox-chunk-size=SIZE
-Upload chunk size. Max 150M. The default is 128MB. Note that this isn't buffered into memory.
+Any files larger than this will be uploaded in chunks of this size. The default is 48MB. The maximum is 150MB.
+Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
Limitations
Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempt to upload one of those file names, but the sync won't fail.
@@ -3199,6 +3779,8 @@ Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
+Service Account Credentials JSON file path - needed only if you want to use SA instead of interactive login.
+service_account_file>
Remote config
Use auto config?
* Say Y if not sure
@@ -3232,6 +3814,10 @@ y/e/d> y
rclone ls remote:
To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
+Service Account support
+You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
+To create a service account and obtain its credentials, go to the Google Developer Console and use the "Create Credentials" button. After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machine. These credentials are what rclone will use for authentication.
+To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won't use the browser based authentication flow.
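+A configured remote might then look something like this (a sketch; the credentials path is an example):
+[remote]
+type = drive
+service_account_file = /home/user/credentials.json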
Team drives
If you want to configure the remote to point to a Google Team Drive then answer y
to the question Configure this as a team drive?
.
This will fetch the list of Team Drives from google and allow you to configure which one you want to use. You can also type in a team drive ID if you prefer.
@@ -3273,7 +3859,7 @@ y/e/d> y
By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false
flag, or set the equivalent environment variable.
Emptying trash
If you wish to empty your trash you can use the rclone cleanup remote:
command which will permanently delete all your trashed files. This command does not take any path arguments.
-Specific options
+Specific options
Here are the command line options specific to this cloud storage system.
--drive-auth-owner-only
Only consider files owned by the authenticated user.
@@ -3437,8 +4023,8 @@ y/e/d> y
Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
Select a project or create a new project.
-Under Overview, Google APIs, Google Apps APIs, click "Drive API", then "Enable".
-Click "Credentials" in the left-side panel (not "Go to credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
+Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the then "Google Drive API".
+Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials", then "OAuth client ID". It will prompt you to set the OAuth consent screen product name, if you haven't set one already.
Choose an application type of "other", and click "Create". (the default name is fine)
It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
@@ -3705,7 +4291,7 @@ y/e/d> y
The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to --transfers
of them being uploaded at once.
Files can't be split into more than 50,000 chunks so by default, so the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M
.
Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks.
-Specific options
+Specific options
Here are the command line options specific to this cloud storage system.
--azureblob-upload-cutoff=SIZE
Cutoff for switching to chunked upload - must be <= 256MB. The default is 256MB.
@@ -3808,7 +4394,7 @@ b/p>
One drive supports SHA1 type hashes, so you can use --checksum
flag.
Deleting files
Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
-Specific options
+Specific options
Here are the command line options specific to this cloud storage system.
--onedrive-chunk-size=SIZE
Above this size files will be chunked - must be multiple of 320k. The default is 10MB. Note that the chunks will be buffered into memory.
@@ -3939,6 +4525,7 @@ y/e/d> y
Memset Memstore
OVH Object Storage
Oracle Cloud Storage
+IBM Bluemix Cloud ObjectStorage Swift
Paths are specified as remote:container
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:container/path/to/dir
.
Here is an example of making a swift configuration. First run
@@ -3960,33 +4547,39 @@ Choose a number from below, or type in your own value
\ "b2"
4 / Box
\ "box"
- 5 / Dropbox
+ 5 / Cache a remote
+ \ "cache"
+ 6 / Dropbox
\ "dropbox"
- 6 / Encrypt/Decrypt a remote
+ 7 / Encrypt/Decrypt a remote
\ "crypt"
- 7 / FTP Connection
+ 8 / FTP Connection
\ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
+ 9 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 9 / Google Drive
+10 / Google Drive
\ "drive"
-10 / Hubic
+11 / Hubic
\ "hubic"
-11 / Local Disk
+12 / Local Disk
\ "local"
-12 / Microsoft Azure Blob Storage
+13 / Microsoft Azure Blob Storage
\ "azureblob"
-13 / Microsoft OneDrive
+14 / Microsoft OneDrive
\ "onedrive"
-14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-15 / QingClound Object Storage
+16 / Pcloud
+ \ "pcloud"
+17 / QingCloud Object Storage
\ "qingstor"
-16 / SSH/SFTP Connection
+18 / SSH/SFTP Connection
\ "sftp"
-17 / Yandex Disk
+19 / Webdav
+ \ "webdav"
+20 / Yandex Disk
\ "yandex"
-18 / http Connection
+21 / http Connection
\ "http"
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
@@ -3995,12 +4588,12 @@ Choose a number from below, or type in your own value
\ "false"
2 / Get swift credentials from environment vars. Leave other fields blank if using this.
\ "true"
-env_auth> 1
-User name to log in.
-user> user_name
-API key or password.
-key> password_or_api_key
-Authentication URL for server.
+env_auth> true
+User name to log in (OS_USERNAME).
+user>
+API key or password (OS_PASSWORD).
+key>
+Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
1 / Rackspace US
\ "https://auth.api.rackspacecloud.com/v1.0"
@@ -4014,20 +4607,26 @@ Choose a number from below, or type in your own value
\ "https://auth.storage.memset.com/v2.0"
6 / OVH
\ "https://auth.cloud.ovh.net/v2.0"
-auth> 1
-User domain - optional (v3 auth)
-domain> Default
-Tenant name - optional for v1 auth, required otherwise
-tenant> tenant_name
-Tenant domain - optional (v3 auth)
-tenant_domain>
-Region name - optional
-region>
-Storage URL - optional
-storage_url>
-AuthVersion - optional - set to (1,2,3) if your auth URL has no version
-auth_version>
-Endpoint type to choose from the service catalogue
+auth>
+User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+user_id>
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+domain>
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+tenant>
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+tenant_id>
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>
+Region name - optional (OS_REGION_NAME)
+region>
+Storage URL - optional (OS_STORAGE_URL)
+storage_url>
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+auth_token>
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+auth_version>
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
1 / Public (default, choose this if not sure)
\ "public"
@@ -4035,21 +4634,24 @@ Choose a number from below, or type in your own value
\ "internal"
3 / Admin
\ "admin"
-endpoint_type>
+endpoint_type>
Remote config
--------------------
-[remote]
-env_auth = false
-user = user_name
-key = password_or_api_key
-auth = https://auth.api.rackspacecloud.com/v1.0
-domain = Default
-tenant =
-tenant_domain =
-region =
-storage_url =
-auth_version =
-endpoint_type =
+[test]
+env_auth = true
+user =
+key =
+auth =
+user_id =
+domain =
+tenant =
+tenant_id =
+tenant_domain =
+region =
+storage_url =
+auth_token =
+auth_version =
+endpoint_type =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -4086,7 +4688,9 @@ tenant = $OS_TENANT_NAME
Configuration from the environment
If you prefer you can configure rclone to use swift using a standard set of OpenStack environment variables.
When you run through the config, make sure you choose true
for env_auth
and leave everything else blank.
-rclone will then set any empty config parameters from the enviroment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.
+rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.
+Using an alternate authentication method
+If your OpenStack installation uses a non-standard authentication method that might not yet be supported by rclone or the underlying swift library, you can authenticate externally (e.g. by manually calling the openstack
commands to get a token). Then, you just need to pass the two configuration variables auth_token
and storage_url
. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.
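+A minimal sketch of such a configuration (the token and URL are placeholders you obtain from your external authentication):
+[myremote]
+type = swift
+auth_token = <token from external authentication>
+storage_url = <storage URL returned with the token>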
Using rclone without a config file
You can use rclone with swift without a config file, if desired, like this:
source openstack-credentials-file
@@ -4095,7 +4699,7 @@ export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
--fast-list
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
-Specific options
+Specific options
Here are the command line options specific to this cloud storage system.
--swift-chunk-size=SIZE
Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
@@ -4111,11 +4715,104 @@ rclone lsd myremote:
This may also be caused by specifying the region when you shouldn't have (eg OVH).
Rclone gives Failed to create file system: Response didn't have storage storage url and auth token
This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
+pCloud
+Paths are specified as remote:path
+Paths may be as deep as required, eg remote:directory/subdirectory
.
+The initial setup for pCloud involves getting a token from pCloud which you need to do in your browser. rclone config
walks you through it.
+Here is an example of how to make a remote called remote
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+10 / Hubic
+ \ "hubic"
+11 / Local Disk
+ \ "local"
+12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+13 / Microsoft OneDrive
+ \ "onedrive"
+14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+15 / Pcloud
+ \ "pcloud"
+16 / QingCloud Object Storage
+ \ "qingstor"
+17 / SSH/SFTP Connection
+ \ "sftp"
+18 / Yandex Disk
+ \ "yandex"
+19 / http Connection
+ \ "http"
+Storage> pcloud
+Pcloud App Client Id - leave blank normally.
+client_id>
+Pcloud App Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+See the remote setup docs for how to set it up on a machine with no Internet browser available.
+Note that rclone runs a webserver on your local machine to collect the token as returned from pCloud. This only runs from the moment it opens your browser to the moment you get back the verification code. This is on http://127.0.0.1:53682/
 and it may require you to unblock it temporarily if you are running a host firewall.
+Once configured you can then use rclone
like this,
+List directories in top level of your pCloud
+rclone lsd remote:
+List all the files in your pCloud
+rclone ls remote:
+To copy a local directory to a pCloud directory called backup
+rclone copy /home/source remote:backup
+Modified time and hashes
+pCloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not. In order to set a Modification time pCloud requires the object be re-uploaded.
+pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
flag.
+Deleting files
+Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup
can be used to empty the trash.
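+For example, to empty the trash:
+rclone cleanup remote: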
SFTP
SFTP is the Secure (or SSH) File Transfer Protocol.
It runs over SSH v2 and is standard with most modern SSH installations.
-Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the users home directory.
-Here is an example of making a SFTP configuration. First run
+Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
+Here is an example of making an SFTP configuration. First run
rclone config
This will guide you through an interactive setup process.
No remotes found - make a new one
@@ -4186,7 +4883,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-This remote is called remote
and can now be used like this
+This remote is called remote
and can now be used like this:
See all directories in the home directory
rclone lsd remote:
Make a new directory
@@ -4196,14 +4893,14 @@ y/e/d> y
Sync /home/local/directory
to the remote directory, deleting any excess files in the directory.
rclone sync /home/local/directory remote:directory
SSH Authentication
-The SFTP remote supports 3 authentication methods
+The SFTP remote supports three authentication methods:
- Password
- Key file
- ssh-agent
Key files should be unencrypted PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa
.
-If you don't specify pass
or key_file
then it will attempt to contact an ssh-agent.
+If you don't specify pass
or key_file
then rclone will attempt to contact an ssh-agent.
ssh-agent on macOS
Note that there seem to be various problems with using an ssh-agent on macOS due to recent changes in the OS. The most effective work-around seems to be to start an ssh-agent in each session, eg
eval `ssh-agent -s` && ssh-add -A
@@ -4215,10 +4912,131 @@ y/e/d> y
Modified times are used in syncing and are fully supported.
Limitations
SFTP supports checksums if the same login has shell access and md5sum
or sha1sum
as well as echo
are in the remote's PATH.
-The only ssh agent supported under Windows is Putty's pagent.
+The only ssh agent supported under Windows is Putty's pageant.
+The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher
setting in the configuration file to true
. Further details on the insecurity of this cipher can be found in this paper: http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf.
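+A sketch of a configuration enabling it (the host and user are examples):
+[remote]
+type = sftp
+host = example.com
+user = sftpuser
+use_insecure_cipher = true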
SFTP isn't supported under plan9 until this issue is fixed.
Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
+WebDAV
+Paths are specified as remote:path
+Paths may be as deep as required, eg remote:directory/subdirectory
.
+To configure the WebDAV remote you will need to have a URL for it, and a username and password. If you know what kind of system you are connecting to then rclone can enable extra features.
+Here is an example of how to make a remote called remote
. First run:
+ rclone config
+This will guide you through an interactive setup process:
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+10 / Hubic
+ \ "hubic"
+11 / Local Disk
+ \ "local"
+12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+13 / Microsoft OneDrive
+ \ "onedrive"
+14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+15 / Pcloud
+ \ "pcloud"
+16 / QingCloud Object Storage
+ \ "qingstor"
+17 / SSH/SFTP Connection
+ \ "sftp"
+18 / WebDAV
+ \ "webdav"
+19 / Yandex Disk
+ \ "yandex"
+20 / http Connection
+ \ "http"
+Storage> webdav
+URL of http host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "https://example.com"
+url> https://example.com/remote.php/webdav/
+Name of the WebDAV site/service/software you are using
+Choose a number from below, or type in your own value
+ 1 / Nextcloud
+ \ "nextcloud"
+ 2 / Owncloud
+ \ "owncloud"
+ 3 / Other site/service or software
+ \ "other"
+vendor> 1
+User name
+user> user
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+url = https://example.com/remote.php/webdav/
+vendor = nextcloud
+user = user
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+Once configured you can then use rclone
like this,
+List directories in top level of your WebDAV
+rclone lsd remote:
+List all the files in your WebDAV
+rclone ls remote:
+To copy a local directory to a WebDAV directory called backup
+rclone copy /home/source remote:backup
+Modified time and hashes
+Plain WebDAV does not support modified times. However, when used with Owncloud or Nextcloud, rclone will support modified times.
+Hashes are not supported.
+Owncloud
+Click on the settings cog in the bottom right of the page and this will show the WebDAV URL that rclone needs in the config step. It will look something like https://example.com/remote.php/webdav/
.
+Owncloud supports modified times using the X-OC-Mtime
header.
+Nextcloud
+This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat
) whereas Owncloud does. This may be fixed in the future.
+Put.io
+put.io can be accessed in a read only way using webdav.
+Configure the url
as https://webdav.put.io
and use your normal account username and password for user
and pass
. Set the vendor
to other
.
+Your config file should end up looking like this:
+[putio]
+type = webdav
+url = https://webdav.put.io
+vendor = other
+user = YourUserName
+pass = encryptedpassword
+If you are using put.io
with rclone mount
then use the --read-only
flag to signal to the OS that it can't write to the mount.
+For more help see the put.io webdav docs.
Yandex Disk
Yandex Disk is a cloud storage solution created by Yandex.
Yandex paths may be as deep as required, eg remote:directory/subdirectory
.
@@ -4313,7 +5131,7 @@ y/e/d> y
Filenames
Filenames are expected to be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv
tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.
-If an invalid (non-UTF8) filename is read, the invalid caracters will be replaced with the unicode replacement character, '�'. rclone
will emit a debug message in this case (use -v
to see), eg
+If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with the unicode replacement character, '�'. rclone
will emit a debug message in this case (use -v
to see), eg
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
Long paths on Windows
Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters.
@@ -4328,7 +5146,7 @@ nounc = true
And use rclone like this:
rclone copy c:\src nounc:z:\dst
This will use UNC paths on c:\src
but not on z:\dst
. Of course this will cause problems if the absolute path length of a file exceeds 258 characters on z, so only use this option if you have to.
-Specific options
+Specific options
Here are the command line options specific to local storage
--copy-links, -L
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
@@ -4357,7 +5175,7 @@ nounc = true
This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.
--one-file-system, -x
This tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
-For example if you have a directory heirachy like this
+For example if you have a directory hierarchy like this
root
├── disk1 - disk1 mounted on the root
│ └── file3 - stored on disk1
@@ -4380,6 +5198,98 @@ nounc = true
This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
Changelog
+- v1.39 - 2017-12-23
+
+- New backends
+- WebDAV
+
+- tested with nextcloud, owncloud, put.io and others!
+
+- Pcloud
+- cache - wraps a cache around other backends (Remus Bunduc)
+
+- useful in combination with mount
+- NB this feature is in beta so use with care
+
+- New commands
+- serve command with subcommands:
+
+- serve webdav: this implements a webdav server for any rclone remote.
+- serve http: command to serve a remote over HTTP
+
+- config: add sub commands for full config file management
+
+- create/delete/dump/edit/file/password/providers/show/update
+
+- touch: to create or update the timestamp of a file (Jakub Tasiemski)
+- New Features
+- curl install for rclone (Filip Bartodziej)
+- --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki)
+- --exclude-if-present to exclude a directory if a file is present (Iakov Davydov)
+- rmdirs: add --leave-root flag (lewpam)
+- move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki)
+- Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters
+
+- Obscure X-Auth-Token: from headers when dumping too
+
+- Document and implement exit codes for different failure modes (Ishuah Kariuki)
+- Compile
+- Bug Fixes
+- Retry lots more different types of errors to make multipart transfers more reliable
+- Save the config before asking for a token, fixes disappearing oauth config
+- Warn the user if --include and --exclude are used together (Ernest Borowski)
+- Fix duplicate files (eg on Google drive) causing spurious copies
+- Allow trailing and leading whitespace for passwords (Jason Rose)
+- ncdu: fix crashes on empty directories
+- rcat: fix goroutine leak
+- moveto/copyto: Fix to allow copying to the same name
+- Mount
+- --vfs-cache mode to make writes into mounts more reliable.
+
+- this requires caching files on the disk (see --cache-dir)
+- As this is a new feature, use with care
+
+- Use sdnotify to signal systemd the mount is ready (Fabian Möller)
+- Check if directory is not empty before mounting (Ernest Borowski)
+- Local
+- Add error message for cross file system moves
+- Fix equality check for times
+- Dropbox
+- Rework multipart upload
+
+- buffer the chunks when uploading large files so they can be retried
+- change default chunk size to 48MB now we are buffering them in memory
+- retry every error after the first chunk is done successfully
+
+- Fix error when renaming directories
+- Swift
+- Fix crash on bad authentication
+- Google Drive
+- Add service account support (Tim Cooijmans)
+- S3
+- Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio)
+- Fix crash if a bad listing is received
+- Add support for ECS task IAM roles (David Minor)
+- Backblaze B2
+- Fix multipart upload retries
+- Fix --hard-delete to make it work 100% of the time
+- Swift
+- Allow authentication with storage URL and auth key (Giovanni Pizzi)
+- Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson)
+- Add OS_TENANT_ID and OS_USER_ID to config
+- Allow configs with user id instead of user name
+- Check if swift segments container exists before creating (John Leach)
+- Fix memory leak in swift transfers (upstream fix)
+- SFTP
+- Add option to enable the use of aes128-cbc cipher (Jon Fautley)
+- Amazon cloud drive
+- Fix download of large files failing with "Only one auth mechanism allowed"
+- crypt
+- Option to encrypt directory names or leave them intact
+- Implement DirChangeNotify (Fabian Möller)
+- onedrive
+- Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user
+
- v1.38 - 2017-09-30
Forum
@@ -5517,7 +6449,7 @@ THE SOFTWARE.
- Google+ page for general comments
-You can also follow me on twitter for rclone announcments
+You can also follow me on twitter for rclone announcements
- [@njcw](https://twitter.com/njcw)
diff --git a/MANUAL.md b/MANUAL.md
index b0906a083..c82059a48 100644
--- a/MANUAL.md
+++ b/MANUAL.md
@@ -1,6 +1,6 @@
% rclone(1) User Manual
% Nick Craig-Wood
-% Sep 30, 2017
+% Dec 23, 2017
Rclone
======
@@ -14,6 +14,7 @@ Rclone is a command line program to sync files and directories to and from:
* Backblaze B2
* Box
* Ceph
+* DigitalOcean Spaces
* Dreamhost
* Dropbox
* FTP
@@ -25,13 +26,18 @@ Rclone is a command line program to sync files and directories to and from:
* Microsoft Azure Blob Storage
* Microsoft OneDrive
* Minio
+* Nextcloud
* OVH
* Openstack Swift
* Oracle Cloud Storage
+* Owncloud
+* pCloud
+* put.io
* QingStor
* Rackspace Cloud Files
* SFTP
* Wasabi
+* WebDAV
* Yandex Disk
* The local filesystem
@@ -45,6 +51,7 @@ Features
* [Check](https://rclone.org/commands/rclone_check/) mode to check for file hash equality
* Can sync to and from network, eg two different cloud accounts
* Optional encryption ([Crypt](https://rclone.org/crypt/))
+ * Optional cache ([Cache](https://rclone.org/cache/))
* Optional FUSE mount ([rclone mount](https://rclone.org/commands/rclone_mount/))
Links
@@ -70,6 +77,19 @@ See below for some expanded Linux / macOS instructions.
See the [Usage section](https://rclone.org/docs/) of the docs for how to use rclone, or
run `rclone -h`.
+## Script installation ##
+
+To install rclone on Linux/MacOs/BSD systems, run:
+
+ curl https://rclone.org/install.sh | sudo bash
+
+For beta installation, run:
+
+ curl https://rclone.org/install.sh | sudo bash -s beta
+
+Note that this script checks the version of rclone installed first and
+won't re-download if not needed.
+
## Linux installation from precompiled binary ##
Fetch and unpack
@@ -167,7 +187,9 @@ See the following for detailed instructions for
* [Amazon S3](https://rclone.org/s3/)
* [Backblaze B2](https://rclone.org/b2/)
* [Box](https://rclone.org/box/)
+ * [Cache](https://rclone.org/cache/)
* [Crypt](https://rclone.org/crypt/) - to encrypt other remotes
+ * [DigitalOcean Spaces](/s3/#digitalocean-spaces)
* [Dropbox](https://rclone.org/dropbox/)
* [FTP](https://rclone.org/ftp/)
* [Google Cloud Storage](https://rclone.org/googlecloudstorage/)
@@ -177,8 +199,10 @@ See the following for detailed instructions for
* [Microsoft Azure Blob Storage](https://rclone.org/azureblob/)
* [Microsoft OneDrive](https://rclone.org/onedrive/)
* [Openstack Swift / Rackspace Cloudfiles / Memset Memstore](https://rclone.org/swift/)
+ * [Pcloud](https://rclone.org/pcloud/)
* [QingStor](https://rclone.org/qingstor/)
* [SFTP](https://rclone.org/sftp/)
+ * [WebDAV](https://rclone.org/webdav/)
* [Yandex Disk](https://rclone.org/yandex/)
* [The local filesystem](https://rclone.org/local/)
@@ -213,20 +237,13 @@ Enter an interactive configuration session.
### Synopsis
-`rclone config`
- enters an interactive configuration sessions where you can setup
-new remotes and manage existing ones. You may also set or remove a password to
-protect your configuration.
-
-Additional functions:
-
- * `rclone config edit` – same as above
- * `rclone config file` – show path of configuration file in use
- * `rclone config show` – print (decrypted) config file
+Enter an interactive configuration session where you can setup new
+remotes and manage existing ones. You may also set or remove a
+password to protect your configuration.
```
-rclone config [function] [flags]
+rclone config [flags]
```
### Options
@@ -353,6 +370,8 @@ move will be used, otherwise it will copy it (server side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
+If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
+
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
@@ -364,7 +383,8 @@ rclone move source:path dest:path [flags]
### Options
```
- -h, --help help for move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for move
```
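
For example, to move a tree and prune the emptied source directories (a sketch):

    rclone move --delete-empty-src-dirs /local/source remote:dest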
## rclone delete
@@ -661,7 +681,7 @@ rclone cleanup remote:path [flags]
## rclone dedupe
-Interactively find duplicate files delete/rename them.
+Interactively find duplicate files and delete/rename them.
### Synopsis
@@ -780,6 +800,27 @@ rclone authorize [flags]
-h, --help help for authorize
```
+## rclone cachestats
+
+Print cache stats for a remote
+
+### Synopsis
+
+
+
+Print cache stats for a remote in JSON format
+
+
+```
+rclone cachestats source: [flags]
+```
+
+### Options
+
+```
+ -h, --help help for cachestats
+```
+
## rclone cat
Concatenates any files and sends them to stdout.
@@ -823,6 +864,202 @@ rclone cat remote:path [flags]
--tail int Only print the last N characters.
```
+## rclone config create
+
+Create a new remote with name, type and options.
+
+### Synopsis
+
+
+
+Create a new remote of <name> with <type> and options. The options
+should be passed in pairs of <key> <value>.
+
+For example to make a swift remote of name myremote using auto config
+you would do:
+
+ rclone config create myremote swift env_auth true
+
+
+```
+rclone config create <name> <type> [<key> <value>]* [flags]
+```
+
+### Options
+
+```
+ -h, --help help for create
+```
+
+## rclone config delete
+
+Delete an existing remote <name>.
+
+### Synopsis
+
+
+Delete an existing remote <name>.
+
+```
+rclone config delete <name> [flags]
+```
+
+### Options
+
+```
+ -h, --help help for delete
+```
+
+## rclone config dump
+
+Dump the config file as JSON.
+
+### Synopsis
+
+
+Dump the config file as JSON.
+
+```
+rclone config dump [flags]
+```
+
+### Options
+
+```
+ -h, --help help for dump
+```
+
+## rclone config edit
+
+Enter an interactive configuration session.
+
+### Synopsis
+
+
+Enter an interactive configuration session where you can setup new
+remotes and manage existing ones. You may also set or remove a
+password to protect your configuration.
+
+
+```
+rclone config edit [flags]
+```
+
+### Options
+
+```
+ -h, --help help for edit
+```
+
+## rclone config file
+
+Show path of configuration file in use.
+
+### Synopsis
+
+
+Show path of configuration file in use.
+
+```
+rclone config file [flags]
+```
+
+### Options
+
+```
+ -h, --help help for file
+```
+
+## rclone config password
+
+Update password in an existing remote.
+
+### Synopsis
+
+
+
+Update an existing remote's password. The password
+should be passed in pairs of <key> <value>.
+
+For example to set password of a remote of name myremote you would do:
+
+ rclone config password myremote fieldname mypassword
+
+
+```
+rclone config password <name> [<key> <value>]+ [flags]
+```
+
+### Options
+
+```
+ -h, --help help for password
+```
+
+## rclone config providers
+
+List in JSON format all the providers and options.
+
+### Synopsis
+
+
+List in JSON format all the providers and options.
+
+```
+rclone config providers [flags]
+```
+
+### Options
+
+```
+ -h, --help help for providers
+```
+
+## rclone config show
+
+Print (decrypted) config file, or the config for a single remote.
+
+### Synopsis
+
+
+Print (decrypted) config file, or the config for a single remote.
+
+```
+rclone config show [<remote>] [flags]
+```
+
+### Options
+
+```
+ -h, --help help for show
+```
+
+## rclone config update
+
+Update options in an existing remote.
+
+### Synopsis
+
+
+
+Update an existing remote's options. The options should be passed
+in pairs of <key> <value>.
+
+For example to update the env_auth field of a remote of name myremote you would do:
+
+ rclone config update myremote swift env_auth true
+
+
+```
+rclone config update <name> [<key> <value>]+ [flags]
+```
+
+### Options
+
+```
+ -h, --help help for update
+```
+
## rclone copyto
Copy files from source to dest, skipping already copied
@@ -938,7 +1175,7 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
## rclone dbhashsum
-Produces a Dropbbox hash file for all the objects in the path.
+Produces a Dropbox hash file for all the objects in the path.
### Synopsis
@@ -1232,6 +1469,14 @@ mount won't do that, so will be less reliable than the rclone command.
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
+### systemd ###
+
+When running rclone mount as a systemd service, it is possible
+to use Type=notify. In this case the service will enter the started state
+after the mountpoint has been successfully set up.
+Units having the rclone mount service specified as a requirement
+will see all files and folders immediately in this mode.
+
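+As a sketch, a unit file using this could look like the following
+(the remote name `remote:` and the mountpoint `/mnt/remote` are
+hypothetical, and `fusermount -u` is the usual way to unmount a FUSE
+filesystem on Linux):
+
+    [Unit]
+    Description=rclone mount of remote:
+
+    [Service]
+    Type=notify
+    ExecStart=/usr/bin/rclone mount remote: /mnt/remote
+    ExecStop=/bin/fusermount -u /mnt/remote
+
+    [Install]
+    WantedBy=default.target
+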
### Directory Cache ###
Using the `--dir-cache-time` flag, you can set how long a
@@ -1247,6 +1492,95 @@ like this:
kill -SIGHUP $(pidof rclone)
+### File Caching ###
+
+**NB** File caching is **EXPERIMENTAL** - use with care!
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed so if rclone is quit or dies with open files then these won't
+get written back to the remote. However they will still be in the on
+disk cache.
+
+#### --vfs-cache-mode off ####
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal ####
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, while using minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes ####
+
+In this mode files opened for read only are still read directly from
+the remote; write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full ####
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
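+
+For example, to run a mount that allows a file to be read and written
+at the same time, you could enable the `writes` cache mode (adjust the
+remote and mountpoint to your setup):
+
+    rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes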
+
```
rclone mount remote:path /path/to/mountpoint [flags]
@@ -1255,25 +1589,28 @@ rclone mount remote:path /path/to/mountpoint [flags]
### Options
```
- --allow-non-empty Allow mounting over a non-empty directory.
- --allow-other Allow access to other users.
- --allow-root Allow access to root user.
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for mount
- --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem.
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
+ --allow-non-empty Allow mounting over a non-empty directory.
+ --allow-other Allow access to other users.
+ --allow-root Allow access to root user.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for mount
+ --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
## rclone moveto
@@ -1438,6 +1775,8 @@ This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
+If you supply the --leave-root flag, it will not remove the root directory.
+
This is useful for tidying up remotes that rclone has left a lot of
empty directories in.
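+
+For example, to tidy up the empty directories under `remote:path`
+while keeping `remote:path` itself:
+
+    rclone rmdirs --leave-root remote:path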
@@ -1450,7 +1789,350 @@ rclone rmdirs remote:path [flags]
### Options
```
- -h, --help help for rmdirs
+ -h, --help help for rmdirs
+ --leave-root Do not remove root directory if empty
+```
+
+## rclone serve
+
+Serve a remote over a protocol.
+
+### Synopsis
+
+
+rclone serve is used to serve a remote over a given protocol. This
+command requires the use of a subcommand to specify the protocol, eg
+
+ rclone serve http remote:
+
+Each subcommand has its own options which you can see in their help.
+
+
+```
+rclone serve [opts] [flags]
+```
+
+### Options
+
+```
+ -h, --help help for serve
+```
+
+## rclone serve http
+
+Serve the remote over HTTP.
+
+### Synopsis
+
+
+rclone serve http implements a basic web server to serve the remote
+over HTTP. This can be viewed in a web browser or you can make a
+remote of type http read from it.
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+You can use the filter flags (eg --include, --exclude) to control what
+is served.
+
+The server will log errors. Use -v to see access logs.
+
+--bwlimit will be respected for file transfers. Use --stats to
+control the stats printing.
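+
+For example, to serve `remote:path` on port 8080 on all interfaces:
+
+    rclone serve http --addr :8080 remote:path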
+
+### Directory Cache ###
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+### File Caching ###
+
+**NB** File caching is **EXPERIMENTAL** - use with care!
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed so if rclone is quit or dies with open files then these won't
+get written back to the remote. However they will still be in the on
+disk cache.
+
+#### --vfs-cache-mode off ####
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal ####
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, while using minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes ####
+
+In this mode files opened for read only are still read directly from
+the remote; write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full ####
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+
+```
+rclone serve http remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for http
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+```
+
+## rclone serve webdav
+
+Serve remote:path over webdav.
+
+### Synopsis
+
+
+
+rclone serve webdav implements a basic webdav server to serve the
+remote over HTTP via the webdav protocol. This can be viewed with a
+webdav client or you can make a remote of type webdav to read and
+write it.
+
+NB at the moment each directory listing reads the start of each file
+which is undesirable: see https://github.com/golang/go/issues/22577
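+
+For example, to serve `remote:path` on the default `localhost:8081`:
+
+    rclone serve webdav remote:path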
+
+
+### Directory Cache ###
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+### File Caching ###
+
+**NB** File caching is **EXPERIMENTAL** - use with care!
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed so if rclone is quit or dies with open files then these won't
+get written back to the remote. However they will still be in the on
+disk cache.
+
+#### --vfs-cache-mode off ####
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal ####
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, while using minimal disk space.
+
+These operations are not possible
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes ####
+
+In this mode files opened for read only are still read directly from
+the remote; write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full ####
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+
+```
+rclone serve webdav remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for webdav
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+```
+
+## rclone touch
+
+Create new file or change file modification time.
+
+### Synopsis
+
+
+Create new file or change file modification time.
+
+```
+rclone touch remote:path [flags]
+```
+
+### Options
+
+```
+ -h, --help help for touch
+ -C, --no-create Do not create the file if it does not exist.
+ -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
```
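+
+For example, to set an existing file's modification time to a specific
+moment without creating it if it is missing (the path is illustrative):
+
+    rclone touch --no-create --timestamp 2006-01-02T15:04:05 remote:path/file.txt
+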
## rclone tree
@@ -1675,7 +2357,7 @@ you might want to pass `--suffix` with today's date.
Local address to bind to for outgoing connections. This can be an
IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If
-the host name doesn't resolve or resoves to more than one IP address
+the host name doesn't resolve or resolves to more than one IP address
it will give an error.
### --bwlimit=BANDWIDTH_SPEC ###
@@ -1873,6 +2555,10 @@ This can be useful as an additional layer of protection for immutable
or append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
+### --leave-root ###
+
+During rmdirs it will not remove the root directory, even if it's empty.
+
### --log-file=FILE ###
Log all of rclone's output to FILE. This is not active by default.
@@ -1882,7 +2568,7 @@ for more info.
### --log-level LEVEL ###
-This sets the log level for rclone. The default log level is `INFO`.
+This sets the log level for rclone. The default log level is `NOTICE`.
`DEBUG` is equivalent to `-vv`. It outputs lots of debug info - useful
for bug reports and really finding out what rclone is doing.
@@ -2304,14 +2990,20 @@ here which are used for testing. These start with remote name eg
Write CPU profile to file. This can be analysed with `go tool pprof`.
-### --dump-auth ###
+#### --dump flag,flag,flag ####
-Dump HTTP headers - will contain sensitive info such as
-`Authorization:` headers - use `--dump-headers` to dump without
-`Authorization:` headers. Can be very verbose. Useful for debugging
+The `--dump` flag takes a comma separated list of flags to dump info
+about. These are:
+
+#### --dump headers ####
+
+Dump HTTP headers with `Authorization:` lines removed. May still
+contain sensitive info. Can be very verbose. Useful for debugging
only.
-### --dump-bodies ###
+Use `--dump auth` if you do want the `Authorization:` headers.
+
+#### --dump bodies ####
Dump HTTP headers and bodies - may contain sensitive info. Can be
very verbose. Useful for debugging only.
@@ -2319,19 +3011,28 @@ very verbose. Useful for debugging only.
Note that the bodies are buffered in memory so don't use this for
enormous files.
-### --dump-filters ###
+#### --dump requests ####
+
+Like `--dump bodies` but dumps the request bodies and the response
+headers. Useful for debugging download problems.
+
+#### --dump responses ####
+
+Like `--dump bodies` but dumps the response bodies and the request
+headers. Useful for debugging upload problems.
+
+#### --dump auth ####
+
+Dump HTTP headers - will contain sensitive info such as
+`Authorization:` headers - use `--dump headers` to dump without
+`Authorization:` headers. Can be very verbose. Useful for debugging
+only.
+
+#### --dump filters ####
Dump the filters to the output. Useful to see exactly what include
and exclude options are filtering on.
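+
+Since `--dump` takes a comma separated list, several of these can be
+combined in one run, eg:
+
+    rclone sync /local/path remote:path --dump headers,filters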
-### --dump-headers ###
-
-Dump HTTP headers with `Authorization:` lines removed. May still
-contain sensitive info. Can be very verbose. Useful for debugging
-only.
-
-Use `--dump-auth` if you do want the `Authorization:` headers.
-
### --memprofile=FILE ###
Write memory profile to file. This can be analysed with `go tool pprof`.
@@ -2385,7 +3086,7 @@ For the filtering options
* `--max-size`
* `--min-age`
* `--max-age`
- * `--dump-filters`
+ * `--dump filters`
See the [filtering section](https://rclone.org/filtering/).
@@ -2440,6 +3141,16 @@ when starting a retry so the user can see that any previous error
messages may not be valid after the retry. If rclone has done a retry
it will log a high priority message if the retry was successful.
+### List of exit codes ###
+ * `0` - Success
+ * `1` - Syntax or usage error
+ * `2` - Error not otherwise categorised
+ * `3` - Directory not found
+ * `4` - File not found
+ * `5` - Temporary error (one that more retries might fix) (Retry errors)
+ * `6` - Less serious errors (like 461 errors from Dropbox) (NoRetry errors)
+ * `7` - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
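+
+In a shell script you can branch on these codes, eg (a sketch):
+
+    rclone copy /local/path remote:backup
+    case $? in
+        0) echo "copy succeeded" ;;
+        5) echo "temporary error - retrying may help" ;;
+        *) echo "copy failed" ;;
+    esac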
+
Environment Variables
---------------------
@@ -2475,8 +3186,8 @@ file to see what the values are (the config file can be found by
looking at the help for `--config` in `rclone help`).
To find the name of the environment variable, you need to set, take
-`RCLONE_` + name of remote + `_` + name of config file option and make
-it all uppercase.
+`RCLONE_CONFIG_` + name of remote + `_` + name of config file option
+and make it all uppercase.
For example, to configure an S3 remote named `mys3:` without a config
file (using unix ways of setting environment variables):
@@ -2649,7 +3360,7 @@ docs](https://golang.org/pkg/regexp/syntax/) for more info on these.
- doesn't match "hullo"
A `{` and `}` define a choice between elements. It should contain a
-comma seperated list of patterns, any of which might match. These
+comma separated list of patterns, any of which might match. These
patterns can contain wildcards.
{one,two}_potato - matches "one_potato"
@@ -2747,6 +3458,9 @@ type.
* `--filter`
* `--filter-from`
+**Important** You should not use `--include*` together with `--exclude*`.
+It may produce results you do not expect. In that case try to use `--filter*` instead.
+
Note that all the options of the same type are processed together in
the order above, regardless of what order they were placed on the
command line.
@@ -2980,7 +3694,7 @@ these are now excluded from the sync.
Always test first with `--dry-run` and `-v` before using this flag.
-### `--dump-filters` - dump the filters to the output ###
+### `--dump filters` - dump the filters to the output ###
This dumps the defined filters to the output as regular expressions.
@@ -3002,9 +3716,30 @@ should work fine
* `--include *.jpg`
+## Exclude directory based on a file ##
+
+It is possible to exclude a directory based on a file present in that
+directory. The filename should be specified using the
+`--exclude-if-present` flag. This flag takes priority over the other
+filtering flags.
+
+Imagine you have the following directory structure:
+
+ dir1/file1
+ dir1/dir2/file2
+ dir1/dir2/dir3/file3
+ dir1/dir2/dir3/.ignore
+
+You can exclude `dir3` from sync by running the following command:
+
+ rclone sync --exclude-if-present .ignore dir1 remote:backup
+
+Currently only one filename is supported, i.e. `--exclude-if-present`
+should not be used multiple times.
+
# Overview of cloud storage systems #
-Each cloud storage system is slighly different. Rclone attempts to
+Each cloud storage system is slightly different. Rclone attempts to
provide a unified interface to them, but some underlying differences
show through.
@@ -3027,8 +3762,10 @@ Here is an overview of the major features of each cloud storage system.
| Microsoft Azure Blob Storage | MD5 | Yes | No | No | R/W |
| Microsoft OneDrive | SHA1 | Yes | Yes | No | R |
| Openstack Swift | MD5 | Yes | No | No | R/W |
+| pCloud | MD5, SHA1 | Yes | No | No | W |
| QingStor | MD5 | No | No | No | R/W |
| SFTP | MD5, SHA1 ‡ | Yes | Depends | No | - |
+| WebDAV | - | Yes †† | Depends | No | - |
| Yandex Disk | MD5 | Yes | No | No | R/W |
| The local filesystem | All | Yes | Depends | No | - |
@@ -3049,6 +3786,8 @@ This is an SHA256 sum of all the 4MB block SHA256s.
‡ SFTP supports checksums if the same login has shell access and `md5sum`
or `sha1sum` as well as `echo` are in the remote's PATH.
+†† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
+
### ModTime ###
The cloud storage system supports setting modification times on
@@ -3127,8 +3866,10 @@ operations more efficient.
| Microsoft Azure Blob Storage | Yes | Yes | No | No | No | Yes | No |
| Microsoft OneDrive | Yes | Yes | Yes | No [#197](https://github.com/ncw/rclone/issues/197) | No [#575](https://github.com/ncw/rclone/issues/575) | No | No |
| Openstack Swift | Yes † | Yes | No | No | No | Yes | Yes |
+| pCloud | Yes | Yes | Yes | Yes | Yes | No | No |
| QingStor | No | Yes | No | No | No | Yes | No |
| SFTP | No | No | Yes | Yes | No | No | Yes |
+| WebDAV | Yes | Yes | Yes | Yes | No | No | Yes ‡ |
| Yandex Disk | Yes | No | No | No | Yes | Yes | Yes |
| The local filesystem | Yes | No | Yes | Yes | No | No | Yes |
@@ -3141,6 +3882,8 @@ the directory.
markers but they don't actually have a quicker way of deleting files
other than deleting them individually.
+‡ StreamUpload is not supported with Nextcloud
+
### Copy ###
Used when copying an object to and from the same remote. This known
@@ -3450,7 +4193,7 @@ Choose a number from below, or type in your own value
13 / Yandex Disk
\ "yandex"
Storage> 2
-Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
@@ -3647,6 +4390,7 @@ credentials. In order of precedence:
- Access Key ID: `AWS_ACCESS_KEY_ID` or `AWS_ACCESS_KEY`
- Secret Access Key: `AWS_SECRET_ACCESS_KEY` or `AWS_SECRET_KEY`
- Session Token: `AWS_SESSION_TOKEN`
+ - Running `rclone` in an ECS task with an IAM role
- Running `rclone` on an EC2 instance with an IAM role
If none of these options actually end up providing `rclone` with AWS
@@ -3757,7 +4501,7 @@ Choose a number from below
10) s3
11) yandex
type> 10
-Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
* Enter AWS credentials in the next step
1) false
@@ -3816,6 +4560,51 @@ removed).
Because this is a json dump, it is encoding the `/` as `\/`, so if you
use the secret key as `xxxxxx/xxxx` it will work fine.
+### DigitalOcean Spaces ###
+
+[Spaces](https://www.digitalocean.com/products/object-storage/) is an [S3-interoperable](https://developers.digitalocean.com/documentation/spaces/) object storage service from cloud provider DigitalOcean.
+
+To connect to DigitalOcean Spaces you will need an access key and secret key. These can be retrieved on the "[Applications & API](https://cloud.digitalocean.com/settings/api/tokens)" page of the DigitalOcean control panel. They will be needed when prompted by `rclone config` for your `access_key_id` and `secret_access_key`.
+
+When prompted for a `region` or `location_constraint`, press enter to use the default value. The region must be included in the `endpoint` setting (e.g. `nyc3.digitaloceanspaces.com`). The default values can be used for other settings.
+
+When going through the whole process of creating a new remote by running `rclone config`, each prompt should be answered as shown below:
+
+```
+Storage> 2
+env_auth> 1
+access_key_id> YOUR_ACCESS_KEY
+secret_access_key> YOUR_SECRET_KEY
+region>
+endpoint> nyc3.digitaloceanspaces.com
+location_constraint>
+acl>
+storage_class>
+```
+
+The resulting configuration file should look like:
+
+```
+[spaces]
+type = s3
+env_auth = false
+access_key_id = YOUR_ACCESS_KEY
+secret_access_key = YOUR_SECRET_KEY
+region =
+endpoint = nyc3.digitaloceanspaces.com
+location_constraint =
+acl =
+server_side_encryption =
+storage_class =
+```
+
+Once configured, you can create a new Space and begin copying files. For example:
+
+```
+rclone mkdir spaces:my-new-space
+rclone copy /path/to/files spaces:my-new-space
+```
+
### Minio ###
[Minio](https://minio.io/) is an object storage server built for cloud application developers and devops.
@@ -3905,7 +4694,7 @@ Choose a number from below, or type in your own value
\ "s3"
[snip]
Storage> s3
-Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
@@ -4105,13 +4894,27 @@ method to set the modification time independent of doing an upload.
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
-Large files which are uploaded in chunks will store their SHA1 on the
-object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.
+Large files (bigger than the limit in `--b2-upload-cutoff`) which are
+uploaded in chunks will store their SHA1 on the object as
+`X-Bz-Info-large_file_sha1` as recommended by Backblaze.
+
+For a large file to be uploaded with an SHA1 checksum, the source
+needs to support SHA1 checksums. The local disk supports SHA1
+checksums so large file transfers from local disk will have an SHA1.
+See [the overview](/overview/#features) for exactly which remotes
+support SHA1.
+
+Sources which don't support SHA1, in particular `crypt`, will upload
+large files without SHA1 checksums. This may be fixed in the future
+(see [#1767](https://github.com/ncw/rclone/issues/1767)).
+
+File sizes below `--b2-upload-cutoff` will always have an SHA1
+regardless of the source.
### Transfers ###
Backblaze recommends that you do lots of transfers simultaneously for
-maximum speed. In tests from my SSD equiped laptop the optimum
+maximum speed. In tests from my SSD equipped laptop the optimum
setting is about `--transfers 32` though higher numbers may be used
for a slight speed improvement. The optimum number for you may vary
depending on your hardware, how big the files are, how much you want
@@ -4147,7 +4950,7 @@ deleted then the bucket will be deleted.
However `delete` will cause the current versions of the files to
become hidden old versions.
-Here is a session showing the listing and and retreival of an old
+Here is a session showing the listing and retrieval of an old
version followed by a `cleanup` of the old versions.
Show current version and all the versions with `--b2-versions` flag.
@@ -4163,7 +4966,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
15 one-v2016-07-02-155621-000.txt
```
-Retreive an old verson
+Retrieve an old version
```
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
@@ -4222,15 +5025,6 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_finish_large_file
```
-### B2 with crypt ###
-
-When using B2 with `crypt` files are encrypted into a temporary
-location and streamed from there. This is required to calculate the
-encrypted file's checksum before beginning the upload. On Windows the
-%TMPDIR% environment variable is used as the temporary location. If
-the file requires chunking, both the chunking and encryption will take
-place in memory.
-
### Specific options ###
Here are the command line options specific to this cloud storage
@@ -4529,6 +5323,344 @@ and from an identical looking unicode equivalent `\`.
Box only supports filenames up to 255 characters in length.
+Cache (BETA)
+-----------------------------------------
+
+The `cache` remote wraps another existing remote and stores file structure
+and its data for long running tasks like `rclone mount`.
+
+To get started you just need to have an existing remote which can be configured
+with `cache`.
+
+Here is an example of how to make a remote called `test-cache`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+r) Rename remote
+c) Copy remote
+s) Set configuration password
+q) Quit config
+n/r/c/s/q> n
+name> test-cache
+Type of storage to configure.
+Choose a number from below, or type in your own value
+...
+ 5 / Cache a remote
+ \ "cache"
+...
+Storage> 5
+Remote to cache.
+Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+"myremote:bucket" or maybe "myremote:" (not recommended).
+remote> local:/test
+Optional: The URL of the Plex server
+plex_url> http://127.0.0.1:32400
+Optional: The username of the Plex user
+plex_username> dummyusername
+Optional: The password of the Plex user
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+The size of a chunk. Lower value good for slow connections but can affect seamless reading.
+Default: 5M
+Choose a number from below, or type in your own value
+ 1 / 1MB
+ \ "1m"
+ 2 / 5 MB
+ \ "5M"
+ 3 / 10 MB
+ \ "10M"
+chunk_size> 2
+How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
+Accepted units are: "s", "m", "h".
+Default: 5m
+Choose a number from below, or type in your own value
+ 1 / 1 hour
+ \ "1h"
+ 2 / 24 hours
+ \ "24h"
+ 3 / 48 hours
+   \ "48h"
+info_age> 3
+The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
+Default: 10G
+Choose a number from below, or type in your own value
+ 1 / 500 MB
+ \ "500M"
+ 2 / 1 GB
+ \ "1G"
+ 3 / 10 GB
+ \ "10G"
+chunk_total_size> 3
+Remote config
+--------------------
+[test-cache]
+remote = local:/test
+plex_url = http://127.0.0.1:32400
+plex_username = dummyusername
+plex_password = *** ENCRYPTED ***
+chunk_size = 5M
+info_age = 48h
+chunk_total_size = 10G
+```
+
+You can then use it like this,
+
+List directories in top level of your drive
+
+ rclone lsd test-cache:
+
+List all the files in your drive
+
+ rclone ls test-cache:
+
+To start a cached mount
+
+ rclone mount --allow-other test-cache: /var/tmp/test-cache
+
+### Write Support ###
+
+Writes are supported through `cache`.
+One caveat is that a mounted cache remote does not add any retry or fallback
+mechanism to the upload operation. This will depend on the implementation
+of the wrapped remote.
+
+One special case is covered with `cache-writes`: when enabled, it
+caches the file data at the same time as the upload, making it
+available from the cache store immediately once the upload is finished.
+
+### Read Features ###
+
+#### Multiple connections ####
+
+To counter the high latency between a local PC where rclone is running
+and cloud providers, the cache remote can split requests to the cloud
+provider into smaller file chunks and combine them locally, making the
+data available almost immediately, usually before the reader needs it.
+
+This is similar to buffering when media files are played online. Rclone
+will stay around the current marker but will always try its best to stay
+ahead and prepare the data in advance.
+
+#### Plex Integration ####
+
+There is a direct integration with Plex which allows cache to detect during reading
+if the file is in playback or not. This helps cache to adapt how it queries
+the cloud provider depending on what the data is needed for.
+
+Scans will use a minimum number of workers (1), while during a confirmed playback
+cache will deploy the configured number of workers.
+
+This integration opens the doorway to additional performance improvements
+which will be explored in the near future.
+
+**Note:** If Plex options are not configured, `cache` will function with its
+configured options without adapting any of its settings.
+
+To enable it, run `rclone config` and add all the Plex options (endpoint, username
+and password) to your remote; the integration will then be enabled automatically.
+
+Affected settings:
+- `cache-workers`: _Configured value_ during confirmed playback or _1_ at all other times
+
+### Known issues ###
+
+#### Windows support - Experimental ####
+
+There are a couple of issues with Windows `mount` functionality that still require
+some investigation. It should be considered experimental until fixes arrive for this OS.
+
+Most of the issues seem to be related to the difference between filesystems
+on Linux flavors and Windows, as cache is heavily dependent on them.
+
+Any reports or feedback on how cache behaves on this OS is greatly appreciated.
+
+- https://github.com/ncw/rclone/issues/1935
+- https://github.com/ncw/rclone/issues/1907
+- https://github.com/ncw/rclone/issues/1834
+
+#### Risk of throttling ####
+
+Future iterations of the cache backend will make use of the polling functionality
+of the cloud provider to synchronize and at the same time make writing through it
+more tolerant to failures.
+
+There are a couple of enhancements in the works to add these, but in the meantime
+there is a valid concern that the expiring cache listings can lead to cloud provider
+throttles or bans due to repeated queries on it for very large mounts.
+
+Some recommendations:
+- don't use a very small interval for entry information (`--cache-info-age`)
+- while writes aren't yet optimised, you can still write through `cache` which gives you the advantage
+of adding the file in the cache at the same time if configured to do so.
+
+Future enhancements:
+
+- https://github.com/ncw/rclone/issues/1937
+- https://github.com/ncw/rclone/issues/1936
+
+#### cache and crypt ####
+
+One common scenario is to keep your data encrypted in the cloud provider
+using the `crypt` remote. `crypt` uses a similar technique to wrap around
+an existing remote and handles this translation in a seamless way.
+
+There is an issue with wrapping the remotes in this order:
+**cloud remote** -> **crypt** -> **cache**
+
+During testing, I experienced a lot of bans with the remotes in this order.
+I suspect it might be related to how crypt opens files on the cloud provider
+which makes it think we're downloading the full file instead of small chunks.
+Organizing the remotes in this order yields better results:
+**cloud remote** -> **cache** -> **crypt**
+
+### Specific options ###
+
+Here are the command line options specific to this cloud storage
+system.
+
+#### --cache-chunk-path=PATH ####
+
+Path to where partial file data (chunks) is stored locally. The remote
+name is appended to the final path.
+
+This setting follows `--cache-db-path`. If you specify a custom
+location for `--cache-db-path` and don't specify one for `--cache-chunk-path`
+then `--cache-chunk-path` will use the same path as `--cache-db-path`.
+
+**Default**: `<rclone default cache path>`/cache-backend/
+**Example**: /.cache/cache-backend/test-cache
+
+#### --cache-db-path=PATH ####
+
+Path to where the file structure metadata (DB) is stored locally. The remote
+name is used as the DB file name.
+
+**Default**: `<rclone default cache path>`/cache-backend/
+**Example**: /.cache/cache-backend/test-cache
+
+#### --cache-db-purge ####
+
+Flag to clear all the cached data for this remote before starting.
+
+**Default**: not set
+
+#### --cache-chunk-size=SIZE ####
+
+The size of a chunk (partial file data). Use lower numbers for slower
+connections.
+
+**Default**: 5M
+
+#### --cache-total-chunk-size=SIZE ####
+
+The total size that the chunks can take up on the local disk. If `cache`
+exceeds this value then it will start to delete the oldest chunks until
+it goes under this value.
+
+**Default**: 10G
+
+#### --cache-chunk-clean-interval=DURATION ####
+
+How often should `cache` perform cleanups of the chunk storage. The default value
+should be ok for most people. If you find that `cache` goes over `cache-total-chunk-size`
+too often then try to lower this value to force it to perform cleanups more often.
+
+**Default**: 1m
+
+#### --cache-info-age=DURATION ####
+
+How long to keep file structure information (directory listings, file size,
+mod times etc) locally.
+
+If all write operations are done through `cache` then you can safely make
+this value very large as the cache store will also be updated in real time.
+
+**Default**: 6h
+
+#### --cache-read-retries=RETRIES ####
+
+How many times to retry a read from a cache storage.
+
+Since reading from a `cache` stream is independent from downloading file data,
+readers can get to a point where there's no more data in the cache.
+Most of the time this can indicate a connectivity issue if `cache` isn't
+able to provide file data anymore.
+
+For really slow connections, increase this to a point where the stream is
+able to provide data, but your experience will be very stuttery.
+
+**Default**: 10
+
+#### --cache-workers=WORKERS ####
+
+How many workers should run in parallel to download chunks.
+
+Higher values will mean more parallel processing (better CPU needed) and
+more concurrent requests on the cloud provider.
+This impacts several aspects like the cloud provider API limits and the stress
+on the hardware that rclone runs on, but it also means that streams will
+be more fluid and data will be available much faster to readers.
+
+**Note**: If the optional Plex integration is enabled then this setting
+will adapt to the type of reading performed and the value specified here will be used
+as a maximum number of workers to use.
+**Default**: 4
+
+#### --cache-chunk-no-memory ####
+
+By default, `cache` will keep file data during streaming in RAM as well
+to provide it to readers as fast as possible.
+
+This transient data is evicted as soon as it is read and the number of
+chunks stored doesn't exceed the number of workers. However, depending
+on other settings like `cache-chunk-size` and `cache-workers` this footprint
+can increase if there are parallel streams too (multiple files being read
+at the same time).
+
+If the hardware permits it, use this feature to provide an overall better
+performance during streaming but it can also be disabled if RAM is not
+available on the local machine.
+
+**Default**: not set
+
+#### --cache-rps=NUMBER ####
+
+This setting places a hard limit on the number of requests per second that `cache`
+will make to the cloud provider remote, and tries to respect that value
+by inserting waits between reads.
+
+If you find that you're getting banned or limited on the cloud provider
+through cache and know that a smaller number of requests per second will
+allow you to work with it then you can use this setting for that.
+
+A good balance of all the other settings should make this
+setting unnecessary, but it is available for more special cases.
+
+**NOTE**: This will limit the number of requests during streams but other
+API calls to the cloud provider like directory listings will still pass.
+
+**Default**: disabled
+
+#### --cache-writes ####
+
+If you need to read files immediately after you upload them through `cache`
+you can enable this flag to have their data stored in the cache store at the
+same time during upload.
+
+**Default**: not set
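+
+For example, to enable it on the mount from the example above:
+
+    rclone mount --allow-other --cache-writes test-cache: /var/tmp/test-cache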
+
Crypt
----------------------------------------
@@ -4599,6 +5731,13 @@ Choose a number from below, or type in your own value
3 / Very simple filename obfuscation.
\ "obfuscate"
filename_encryption> 2
+Option to either encrypt directory names or leave them intact.
+Choose a number from below, or type in your own value
+ 1 / Encrypt directory names.
+ \ "true"
+ 2 / Don't encrypt directory names, leave them intact.
+ \ "false"
+filename_encryption> 1
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
@@ -4748,7 +5887,7 @@ Standard
* file names encrypted
* file names can't be as long (~156 characters)
* can use sub paths and copy single files
- * directory structure visibile
+ * directory structure visible
* identical files names will have identical uploaded names
* can use shortcuts to shorten the directory recursion
@@ -4770,7 +5909,7 @@ equivalents. You can not rely on this for strong protection.
* file names very lightly obfuscated
* file names can be longer than standard encryption
* can use sub paths and copy single files
- * directory structure visibile
+ * directory structure visible
* identical files names will have identical uploaded names
Cloud storage systems have various limits on file name length and
@@ -4781,6 +5920,25 @@ characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the
future which will address the long file name problem.
+### Directory name encryption ###
+
+Crypt offers the option of encrypting directory names or leaving them intact.
+There are two options:
+
+True
+
+Encrypts the whole file path including directory names
+Example:
+`1/12/123.txt` is encrypted to
+`p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0`
+
+False
+
+Only encrypts file names, skips directory names
+Example:
+`1/12/123.txt` is encrypted to
+`1/12/qgm4avr35m5loi1th53ato71v0`
+
+
### Modified time and hashes ###
Crypt stores modification times using the underlying remote so support
@@ -4818,7 +5976,7 @@ This will have the following advantages
* `rclone sync` will check the checksums while copying
* you can use `rclone check` between the encrypted remotes
- * you don't decrypt and encrypt unecessarily
+ * you don't decrypt and encrypt unnecessarily
For example, let's say you have your original remote at `remote:` with
the encrypted version at `eremote:` with path `remote:crypt`. You
@@ -4847,9 +6005,9 @@ has a header and is divided into chunks.
* 24 bytes Nonce (IV)
The initial nonce is generated from the operating systems crypto
-strong random number genrator. The nonce is incremented for each
+strong random number generator. The nonce is incremented for each
chunk read making sure each nonce is unique for each block written.
-The chance of a nonce being re-used is miniscule. If you wrote an
+The chance of a nonce being re-used is minuscule. If you wrote an
exabyte of data (10¹⁸ bytes) you would have a probability of
approximately 2×10⁻³² of re-using a nonce.
@@ -4901,7 +6059,7 @@ They are then encrypted with EME using AES with 256 bit key. EME
(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
-This makes for determinstic encryption which is what we want - the
+This makes for deterministic encryption which is what we want - the
same filename must encrypt to the same thing otherwise we can't find
it on the cloud storage system.
@@ -4925,13 +6083,13 @@ used on case insensitive remotes (eg Windows, Amazon Drive).
### Key derivation ###
-Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with a an
+Rclone uses `scrypt` with parameters `N=16384, r=8, p=1` with an
optional user supplied salt (password2) to derive the 32+32+16 = 80
bytes of key material required. If the user doesn't supply a salt
then rclone uses an internal one.
`scrypt` makes it impractical to mount a dictionary attack on rclone
-encrypted data. For full protection agains this you should always use
+encrypted data. For full protection against this you should always use
a salt.
Dropbox
@@ -5043,8 +6201,13 @@ system.
#### --dropbox-chunk-size=SIZE ####
-Upload chunk size. Max 150M. The default is 128MB. Note that this
-isn't buffered into memory.
+Any files larger than this will be uploaded in chunks of this
+size. The default is 48MB. The maximum is 150MB.
+
+Note that chunks are buffered in memory (one at a time) so rclone can
+deal with retries. Setting this larger will increase the speed
+slightly (at most 10% for 128MB in tests) at the cost of using more
+memory. It can be set smaller if you are tight on memory.
### Limitations ###
@@ -5471,6 +6634,8 @@ Google Application Client Id - leave blank normally.
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
+Service Account Credentials JSON file path - needed only if you want to use SA instead of interactive login.
+service_account_file>
Remote config
Use auto config?
* Say Y if not sure
@@ -5519,6 +6684,25 @@ To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
+### Service Account support ###
+
+You can set up rclone with Google Drive in an unattended mode,
+i.e. not tied to a specific end-user Google account. This is useful
+when you want to synchronise files onto machines that don't have
+actively logged-in users, for example build machines.
+
+To create a service account and obtain its credentials, go to the
+[Google Developer Console](https://console.developers.google.com) and
+use the "Create Credentials" button. After creating an account, a JSON
+file containing the Service Account's credentials will be downloaded
+onto your machine. These credentials are what rclone will use for
+authentication.
+
+To use a Service Account instead of OAuth2 token flow, enter the path
+to your Service Account credentials at the `service_account_file`
+prompt and rclone won't use the browser based authentication
+flow.
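+
+The resulting config section might look something like this (a sketch;
+the remote name and the credentials path are hypothetical):
+
+```
+[gdrive]
+type = drive
+client_id =
+client_secret =
+service_account_file = /path/to/credentials.json
+```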
+
### Team drives ###
If you want to configure the remote to point to a Google Team Drive
@@ -5746,10 +6930,10 @@ be the same account as the Google Drive you want to access)
2. Select a project or create a new project.
-3. Under Overview, Google APIs, Google Apps APIs, click "Drive API",
-then "Enable".
+3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the
+"Google Drive API".
-4. Click "Credentials" in the left-side panel (not "Go to
+4. Click "Credentials" in the left-side panel (not "Create
credentials", which opens the wizard), then "Create credentials", then
"OAuth client ID". It will prompt you to set the OAuth consent screen
product name, if you haven't set one already.
@@ -6508,6 +7692,7 @@ Commercial implementations of that being:
* [Memset Memstore](https://www.memset.com/cloud/storage/)
* [OVH Object Storage](https://www.ovh.co.uk/public-cloud/storage/object-storage/)
* [Oracle Cloud Storage](https://cloud.oracle.com/storage-opc)
+ * [IBM Bluemix Cloud ObjectStorage Swift](https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
Paths are specified as `remote:container` (or `remote:` for the `lsd`
command.) You may put subdirectories in too, eg `remote:container/path/to/dir`.
@@ -6535,33 +7720,39 @@ Choose a number from below, or type in your own value
\ "b2"
4 / Box
\ "box"
- 5 / Dropbox
+ 5 / Cache a remote
+ \ "cache"
+ 6 / Dropbox
\ "dropbox"
- 6 / Encrypt/Decrypt a remote
+ 7 / Encrypt/Decrypt a remote
\ "crypt"
- 7 / FTP Connection
+ 8 / FTP Connection
\ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
+ 9 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 9 / Google Drive
+10 / Google Drive
\ "drive"
-10 / Hubic
+11 / Hubic
\ "hubic"
-11 / Local Disk
+12 / Local Disk
\ "local"
-12 / Microsoft Azure Blob Storage
+13 / Microsoft Azure Blob Storage
\ "azureblob"
-13 / Microsoft OneDrive
+14 / Microsoft OneDrive
\ "onedrive"
-14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
-15 / QingClound Object Storage
+16 / Pcloud
+ \ "pcloud"
+17 / QingCloud Object Storage
\ "qingstor"
-16 / SSH/SFTP Connection
+18 / SSH/SFTP Connection
\ "sftp"
-17 / Yandex Disk
+19 / Webdav
+ \ "webdav"
+20 / Yandex Disk
\ "yandex"
-18 / http Connection
+21 / http Connection
\ "http"
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
@@ -6570,12 +7761,12 @@ Choose a number from below, or type in your own value
\ "false"
2 / Get swift credentials from environment vars. Leave other fields blank if using this.
\ "true"
-env_auth> 1
-User name to log in.
-user> user_name
-API key or password.
-key> password_or_api_key
-Authentication URL for server.
+env_auth> true
+User name to log in (OS_USERNAME).
+user>
+API key or password (OS_PASSWORD).
+key>
+Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
1 / Rackspace US
\ "https://auth.api.rackspacecloud.com/v1.0"
@@ -6589,20 +7780,26 @@ Choose a number from below, or type in your own value
\ "https://auth.storage.memset.com/v2.0"
6 / OVH
\ "https://auth.cloud.ovh.net/v2.0"
-auth> 1
-User domain - optional (v3 auth)
-domain> Default
-Tenant name - optional for v1 auth, required otherwise
-tenant> tenant_name
-Tenant domain - optional (v3 auth)
-tenant_domain>
-Region name - optional
-region>
-Storage URL - optional
-storage_url>
-AuthVersion - optional - set to (1,2,3) if your auth URL has no version
-auth_version>
-Endpoint type to choose from the service catalogue
+auth>
+User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+user_id>
+User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+domain>
+Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+tenant>
+Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+tenant_id>
+Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>
+Region name - optional (OS_REGION_NAME)
+region>
+Storage URL - optional (OS_STORAGE_URL)
+storage_url>
+Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+auth_token>
+AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+auth_version>
+Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
1 / Public (default, choose this if not sure)
\ "public"
@@ -6610,21 +7807,24 @@ Choose a number from below, or type in your own value
\ "internal"
3 / Admin
\ "admin"
-endpoint_type>
+endpoint_type>
Remote config
--------------------
-[remote]
-env_auth = false
-user = user_name
-key = password_or_api_key
-auth = https://auth.api.rackspacecloud.com/v1.0
-domain = Default
-tenant =
-tenant_domain =
-region =
-storage_url =
-auth_version =
-endpoint_type =
+[test]
+env_auth = true
+user =
+key =
+auth =
+user_id =
+domain =
+tenant =
+tenant_id =
+tenant_domain =
+region =
+storage_url =
+auth_token =
+auth_version =
+endpoint_type =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -6691,12 +7891,23 @@ set of OpenStack environment variables.
When you run through the config, make sure you choose `true` for
`env_auth` and leave everything else blank.
-rclone will then set any empty config parameters from the enviroment
+rclone will then set any empty config parameters from the environment
using standard OpenStack environment variables. There is [a list of
the
variables](https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
in the docs for the swift library.
+### Using an alternate authentication method ###
+
+If your OpenStack installation uses a non-standard authentication method
+that isn't yet supported by rclone or the underlying swift library, you
+can authenticate externally (e.g. by calling the `openstack` commands
+manually to get a token). Then you just need to pass the two
+configuration variables `auth_token` and `storage_url`.
+If they are both provided, the other variables are ignored. rclone will
+not try to authenticate but instead assume it is already authenticated
+and use these two variables to access the OpenStack installation.
+
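+For example, a config using a pre-obtained token might look like this
+(a minimal sketch; the token and URL values are placeholders):
+
+    [remote]
+    type = swift
+    env_auth = false
+    auth_token = your-auth-token
+    storage_url = https://storage.example.com/v1/AUTH_tenant
+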
#### Using rclone without a config file ####
You can use rclone with swift without a config file, if desired, like
@@ -6759,6 +7970,136 @@ have (eg OVH).
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
+pCloud
+-----------------------------------------
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, eg `remote:directory/subdirectory`.
+
+The initial setup for pCloud involves getting a token from pCloud which you
+need to do in your browser. `rclone config` walks you through it.
+
+Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+10 / Hubic
+ \ "hubic"
+11 / Local Disk
+ \ "local"
+12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+13 / Microsoft OneDrive
+ \ "onedrive"
+14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+15 / Pcloud
+ \ "pcloud"
+16 / QingCloud Object Storage
+ \ "qingstor"
+17 / SSH/SFTP Connection
+ \ "sftp"
+18 / Yandex Disk
+ \ "yandex"
+19 / http Connection
+ \ "http"
+Storage> pcloud
+Pcloud App Client Id - leave blank normally.
+client_id>
+Pcloud App Client Secret - leave blank normally.
+client_secret>
+Remote config
+Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+y) Yes
+n) No
+y/n> y
+If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+Log in and authorize rclone for access
+Waiting for code...
+Got code
+--------------------
+[remote]
+client_id =
+client_secret =
+token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+See the [remote setup docs](https://rclone.org/remote_setup/) for how to set it up on a
+machine with no Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from pCloud. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This
+is on `http://127.0.0.1:53682/` and it may require you to unblock
+it temporarily if you are running a host firewall.
+
+Once configured you can then use `rclone` like this:
+
+List directories in top level of your pCloud
+
+ rclone lsd remote:
+
+List all the files in your pCloud
+
+ rclone ls remote:
+
+To copy a local directory to a pCloud directory called backup
+
+ rclone copy /home/source remote:backup
+
+### Modified time and hashes ###
+
+pCloud allows modification times to be set on objects accurate to 1
+second. These will be used to detect whether objects need syncing or
+not. In order to set a Modification time pCloud requires the object
+be re-uploaded.
+
+pCloud supports MD5 and SHA1 type hashes, so you can use the
+`--checksum` flag.
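+
+For example, to sync using checksums rather than size and modification
+time (a sketch; the paths are placeholders):
+
+    rclone sync --checksum /home/source remote:backup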
+
+### Deleting files ###
+
+Deleted files will be moved to the trash. Your subscription level
+will determine how long items stay in the trash. `rclone cleanup` can
+be used to empty the trash.
+
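+For example, to empty the trash for the whole remote (a sketch):
+
+    rclone cleanup remote:
+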
SFTP
----------------------------------------
@@ -6770,9 +8111,9 @@ installations.
Paths are specified as `remote:path`. If the path does not begin with
a `/` it is relative to the home directory of the user. An empty path
-`remote:` refers to the users home directory.
+`remote:` refers to the user's home directory.
-Here is an example of making a SFTP configuration. First run
+Here is an example of making an SFTP configuration. First run
rclone config
@@ -6849,7 +8190,7 @@ d) Delete this remote
y/e/d> y
```
-This remote is called `remote` and can now be used like this
+This remote is called `remote` and can now be used like this:
See all directories in the home directory
@@ -6870,7 +8211,7 @@ excess files in the directory.
### SSH Authentication ###
-The SFTP remote supports 3 authentication methods
+The SFTP remote supports three authentication methods:
* Password
* Key file
@@ -6879,7 +8220,7 @@ The SFTP remote supports 3 authentication methods
Key files should be unencrypted PEM-encoded private key files. For
instance `/home/$USER/.ssh/id_rsa`.
-If you don't specify `pass` or `key_file` then it will attempt to
+If you don't specify `pass` or `key_file` then rclone will attempt to
contact an ssh-agent.
### ssh-agent on macOS ###
@@ -6907,7 +8248,14 @@ Modified times are used in syncing and are fully supported.
SFTP supports checksums if the same login has shell access and `md5sum`
or `sha1sum` as well as `echo` are in the remote's PATH.
-The only ssh agent supported under Windows is Putty's pagent.
+The only ssh agent supported under Windows is Putty's pageant.
+
+The Go SSH library disables the use of the aes128-cbc cipher by
+default, due to security concerns. This can be re-enabled on a
+per-connection basis by setting the `use_insecure_cipher` setting in
+the configuration file to `true`. Further details on the insecurity of
+this cipher can be found
+[in this paper](http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
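+
+For example, a config entry with the cipher re-enabled might look like
+this (a minimal sketch; the host and user are placeholders):
+
+    [remote]
+    type = sftp
+    host = example.com
+    user = sftpuser
+    use_insecure_cipher = true
+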
SFTP isn't supported under plan9 until [this
issue](https://github.com/pkg/sftp/issues/156) is fixed.
@@ -6917,6 +8265,174 @@ with it: `--dump-headers`, `--dump-bodies`, `--dump-auth`
Note that `--timeout` isn't supported (but `--contimeout` is).
+WebDAV
+-----------------------------------------
+
+Paths are specified as `remote:path`
+
+Paths may be as deep as required, eg `remote:directory/subdirectory`.
+
+To configure the WebDAV remote you will need to have a URL for it, and
+a username and password. If you know what kind of system you are
+connecting to then rclone can enable extra features.
+
+Here is an example of how to make a remote called `remote`. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+```
+No remotes found - make a new one
+n) New remote
+s) Set configuration password
+q) Quit config
+n/s/q> n
+name> remote
+Type of storage to configure.
+Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+10 / Hubic
+ \ "hubic"
+11 / Local Disk
+ \ "local"
+12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+13 / Microsoft OneDrive
+ \ "onedrive"
+14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+15 / Pcloud
+ \ "pcloud"
+16 / QingCloud Object Storage
+ \ "qingstor"
+17 / SSH/SFTP Connection
+ \ "sftp"
+18 / WebDAV
+ \ "webdav"
+19 / Yandex Disk
+ \ "yandex"
+20 / http Connection
+ \ "http"
+Storage> webdav
+URL of http host to connect to
+Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "https://example.com"
+url> https://example.com/remote.php/webdav/
+Name of the WebDAV site/service/software you are using
+Choose a number from below, or type in your own value
+ 1 / Nextcloud
+ \ "nextcloud"
+ 2 / Owncloud
+ \ "owncloud"
+ 3 / Other site/service or software
+ \ "other"
+vendor> 1
+User name
+user> user
+Password.
+y) Yes type in my own password
+g) Generate random password
+n) No leave this optional password blank
+y/g/n> y
+Enter the password:
+password:
+Confirm the password:
+password:
+Remote config
+--------------------
+[remote]
+url = https://example.com/remote.php/webdav/
+vendor = nextcloud
+user = user
+pass = *** ENCRYPTED ***
+--------------------
+y) Yes this is OK
+e) Edit this remote
+d) Delete this remote
+y/e/d> y
+```
+
+Once configured you can then use `rclone` like this:
+
+List directories in top level of your WebDAV
+
+ rclone lsd remote:
+
+List all the files in your WebDAV
+
+ rclone ls remote:
+
+To copy a local directory to a WebDAV directory called backup
+
+ rclone copy /home/source remote:backup
+
+### Modified time and hashes ###
+
+Plain WebDAV does not support modified times. However, when used with
+Owncloud or Nextcloud, rclone will support modified times.
+
+Hashes are not supported.
+
+### Owncloud ###
+
+Click on the settings cog in the bottom right of the page and this
+will show the WebDAV URL that rclone needs in the config step. It
+will look something like `https://example.com/remote.php/webdav/`.
+
+Owncloud supports modified times using the `X-OC-Mtime` header.
+
+### Nextcloud ###
+
+This is configured in an identical way to Owncloud. Note that
+Nextcloud does not support streaming of files (`rcat`) whereas
+Owncloud does. This [may be
+fixed](https://github.com/nextcloud/nextcloud-snap/issues/365) in the
+future.
+
+### Put.io ###
+
+put.io can be accessed in a read-only way using webdav.
+
+Configure the `url` as `https://webdav.put.io` and use your normal
+account username and password for `user` and `pass`. Set the `vendor`
+to `other`.
+
+Your config file should end up looking like this:
+
+```
+[putio]
+type = webdav
+url = https://webdav.put.io
+vendor = other
+user = YourUserName
+pass = encryptedpassword
+```
+
+If you are using `put.io` with `rclone mount` then use the
+`--read-only` flag to signal to the OS that it can't write to the
+mount.
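+
+For example (a sketch; the mount point is a placeholder):
+
+    rclone mount --read-only putio: /mnt/putio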
+
+For more help see [the put.io webdav docs](http://help.put.io/apps-and-integrations/ftp-and-webdav).
+
Yandex Disk
----------------------------------------
@@ -7070,7 +8586,7 @@ old Linux filesystem with non UTF-8 file names (eg latin1) then you
can use the `convmv` tool to convert the filesystem to UTF-8. This
tool is available in most distributions' package managers.
-If an invalid (non-UTF8) filename is read, the invalid caracters will
+If an invalid (non-UTF8) filename is read, the invalid characters will
be replaced with the unicode replacement character, '�'. `rclone`
will emit a debug message in this case (use `-v` to see), eg
@@ -7168,7 +8684,7 @@ routine instead.
This tells rclone to stay in the filesystem specified by the root and
not to recurse into different file systems.
-For example if you have a directory heirachy like this
+For example if you have a directory hierarchy like this
```
root
@@ -7212,6 +8728,82 @@ points, as you explicitly acknowledge that they should be skipped.
Changelog
---------
+ * v1.39 - 2017-12-23
+ * New backends
+ * WebDAV
+ * tested with nextcloud, owncloud, put.io and others!
+ * Pcloud
+ * cache - wraps a cache around other backends (Remus Bunduc)
+ * useful in combination with mount
+ * NB this feature is in beta so use with care
+ * New commands
+ * serve command with subcommands:
+ * serve webdav: this implements a webdav server for any rclone remote.
+ * serve http: command to serve a remote over HTTP
+ * config: add sub commands for full config file management
+ * create/delete/dump/edit/file/password/providers/show/update
+ * touch: to create or update the timestamp of a file (Jakub Tasiemski)
+ * New Features
+ * curl install for rclone (Filip Bartodziej)
+ * --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki)
+ * --exclude-if-present to exclude a directory if a file is present (Iakov Davydov)
+ * rmdirs: add --leave-root flag (lewpam)
+ * move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki)
+ * Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters
+ * Obscure X-Auth-Token: from headers when dumping too
+ * Document and implement exit codes for different failure modes (Ishuah Kariuki)
+ * Compile
+ * Bug Fixes
+ * Retry lots more different types of errors to make multipart transfers more reliable
+ * Save the config before asking for a token, fixes disappearing oauth config
+ * Warn the user if --include and --exclude are used together (Ernest Borowski)
+ * Fix duplicate files (eg on Google drive) causing spurious copies
+ * Allow trailing and leading whitespace for passwords (Jason Rose)
+ * ncdu: fix crashes on empty directories
+ * rcat: fix goroutine leak
+ * moveto/copyto: Fix to allow copying to the same name
+ * Mount
+ * --vfs-cache mode to make writes into mounts more reliable.
+ * this requires caching files on the disk (see --cache-dir)
+ * As this is a new feature, use with care
+ * Use sdnotify to signal systemd the mount is ready (Fabian Möller)
+ * Check if directory is not empty before mounting (Ernest Borowski)
+ * Local
+ * Add error message for cross file system moves
+ * Fix equality check for times
+ * Dropbox
+ * Rework multipart upload
+ * buffer the chunks when uploading large files so they can be retried
+ * change default chunk size to 48MB now we are buffering them in memory
+ * retry every error after the first chunk is done successfully
+ * Fix error when renaming directories
+ * Swift
+ * Fix crash on bad authentication
+ * Google Drive
+ * Add service account support (Tim Cooijmans)
+ * S3
+ * Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio)
+ * Fix crash if a bad listing is received
+ * Add support for ECS task IAM roles (David Minor)
+ * Backblaze B2
+ * Fix multipart upload retries
+ * Fix --hard-delete to make it work 100% of the time
+ * Swift
+ * Allow authentication with storage URL and auth key (Giovanni Pizzi)
+ * Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson)
+ * Add OS_TENANT_ID and OS_USER_ID to config
+ * Allow configs with user id instead of user name
+ * Check if swift segments container exists before creating (John Leach)
+ * Fix memory leak in swift transfers (upstream fix)
+ * SFTP
+ * Add option to enable the use of aes128-cbc cipher (Jon Fautley)
+ * Amazon cloud drive
+ * Fix download of large files failing with "Only one auth mechanism allowed"
+ * crypt
+ * Option to encrypt directory names or leave them intact
+ * Implement DirChangeNotify (Fabian Möller)
+ * onedrive
+ * Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user
* v1.38 - 2017-09-30
* New backends
* Azure Blob Storage (thanks Andrei Dragomir)
@@ -8294,6 +9886,28 @@ Contributors
* Girish Ramakrishnan
* LingMan
* Jacob McNamee
+ * jersou
+ * thierry
+ * Simon Leinen
+ * Dan Dascalescu
+ * Jason Rose
+ * Andrew Starr-Bochicchio
+ * John Leach
+ * Corban Raun
+ * Pierre Carlson
+ * Ernest Borowski
+ * Remus Bunduc
+ * Iakov Davydov
+ * Fabian Möller
+ * Jakub Tasiemski
+ * David Minor
+ * Tim Cooijmans
+ * Laurence
+ * Giovanni Pizzi
+ * Filip Bartodziej
+ * Jon Fautley
+ * lewapm <32110057+lewapm@users.noreply.github.com>
+ * Yassine Imounachen
# Contact the rclone project #
@@ -8320,7 +9934,7 @@ Rclone has a Google+ page which announcements are posted to
## Twitter ##
-You can also follow me on twitter for rclone announcments
+You can also follow me on twitter for rclone announcements
* [@njcw](https://twitter.com/njcw)
diff --git a/MANUAL.txt b/MANUAL.txt
index 53b0a1fdb..2ce9d0808 100644
--- a/MANUAL.txt
+++ b/MANUAL.txt
@@ -1,6 +1,6 @@
rclone(1) User Manual
Nick Craig-Wood
-Sep 30, 2017
+Dec 23, 2017
@@ -17,6 +17,7 @@ from:
- Backblaze B2
- Box
- Ceph
+- DigitalOcean Spaces
- Dreamhost
- Dropbox
- FTP
@@ -28,13 +29,18 @@ from:
- Microsoft Azure Blob Storage
- Microsoft OneDrive
- Minio
+- Nextcloud
- OVH
- Openstack Swift
- Oracle Cloud Storage
+- Owncloud
+- pCloud
+- put.io
- QingStor
- Rackspace Cloud Files
- SFTP
- Wasabi
+- WebDAV
- Yandex Disk
- The local filesystem
@@ -48,6 +54,7 @@ Features
- Check mode to check for file hash equality
- Can sync to and from network, eg two different cloud accounts
- Optional encryption (Crypt)
+- Optional cache (Cache)
- Optional FUSE mount (rclone mount)
Links
@@ -78,6 +85,20 @@ See the Usage section of the docs for how to use rclone, or run
rclone -h.
+Script installation
+
+To install rclone on Linux/macOS/BSD systems, run:
+
+ curl https://rclone.org/install.sh | sudo bash
+
+For beta installation, run:
+
+ curl https://rclone.org/install.sh | sudo bash -s beta
+
+Note that this script checks the version of rclone installed first and
+won't re-download if not needed.
+
+
Linux installation from precompiled binary
Fetch and unpack
@@ -176,7 +197,9 @@ See the following for detailed instructions for
- Amazon S3
- Backblaze B2
- Box
+- Cache
- Crypt - to encrypt other remotes
+- DigitalOcean Spaces
- Dropbox
- FTP
- Google Cloud Storage
@@ -186,8 +209,10 @@ See the following for detailed instructions for
- Microsoft Azure Blob Storage
- Microsoft OneDrive
- Openstack Swift / Rackspace Cloudfiles / Memset Memstore
+- Pcloud
- QingStor
- SFTP
+- WebDAV
- Yandex Disk
- The local filesystem
@@ -222,17 +247,11 @@ Enter an interactive configuration session.
Synopsis
-rclone config enters an interactive configuration sessions where you can
-setup new remotes and manage existing ones. You may also set or remove a
-password to protect your configuration.
+Enter an interactive configuration session where you can setup new
+remotes and manage existing ones. You may also set or remove a password
+to protect your configuration.
-Additional functions:
-
-- rclone config edit – same as above
-- rclone config file – show path of configuration file in use
-- rclone config show – print (decrypted) config file
-
- rclone config [function] [flags]
+ rclone config [flags]
Options
@@ -341,6 +360,9 @@ this will move it into dest:path. If possible a server side move will be
used, otherwise it will copy it (server side if possible) into dest:path
then delete the original (if no errors on copy) in source:path.
+If you want to delete empty source directories after move, use the
+--delete-empty-src-dirs flag.
+
IMPORTANT: Since this can cause data loss, test first with the --dry-run
flag.
@@ -348,7 +370,8 @@ flag.
Options
- -h, --help help for move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for move
rclone delete
@@ -580,7 +603,7 @@ Options
rclone dedupe
-Interactively find duplicate files delete/rename them.
+Interactively find duplicate files and delete/rename them.
Synopsis
@@ -694,6 +717,21 @@ Options
-h, --help help for authorize
+rclone cachestats
+
+Print cache stats for a remote
+
+Synopsis
+
+Print cache stats for a remote in JSON format
+
+ rclone cachestats source: [flags]
+
+Options
+
+ -h, --help help for cachestats
+
+
rclone cat
Concatenates any files and sends them to stdout.
@@ -731,6 +769,160 @@ Options
--tail int Only print the last N characters.
+rclone config create
+
+Create a new remote with name, type and options.
+
+Synopsis
+
+Create a new remote of <name> with <type> and options. The options
+should be passed in pairs of <key> <value>.
+
+For example to make a swift remote of name myremote using auto config
+you would do:
+
+ rclone config create myremote swift env_auth true
+
+    rclone config create <name> <type> [<key> <value>]* [flags]
+
+Options
+
+ -h, --help help for create
+
+
+rclone config delete
+
+Delete an existing remote <name>.
+
+Synopsis
+
+Delete an existing remote <name>.
+
+    rclone config delete <name> [flags]
+
+Options
+
+ -h, --help help for delete
+
+
+rclone config dump
+
+Dump the config file as JSON.
+
+Synopsis
+
+Dump the config file as JSON.
+
+ rclone config dump [flags]
+
+Options
+
+ -h, --help help for dump
+
+
+rclone config edit
+
+Enter an interactive configuration session.
+
+Synopsis
+
+Enter an interactive configuration session where you can setup new
+remotes and manage existing ones. You may also set or remove a password
+to protect your configuration.
+
+ rclone config edit [flags]
+
+Options
+
+ -h, --help help for edit
+
+
+rclone config file
+
+Show path of configuration file in use.
+
+Synopsis
+
+Show path of configuration file in use.
+
+ rclone config file [flags]
+
+Options
+
+ -h, --help help for file
+
+
+rclone config password
+
+Update password in an existing remote.
+
+Synopsis
+
+Update an existing remote's password. The password should be passed
+in pairs of <key> <value>.
+
+For example to set password of a remote of name myremote you would do:
+
+ rclone config password myremote fieldname mypassword
+
+    rclone config password <name> [<key> <value>]+ [flags]
+
+Options
+
+ -h, --help help for password
+
+
+rclone config providers
+
+List in JSON format all the providers and options.
+
+Synopsis
+
+List in JSON format all the providers and options.
+
+ rclone config providers [flags]
+
+Options
+
+ -h, --help help for providers
+
+
+rclone config show
+
+Print (decrypted) config file, or the config for a single remote.
+
+Synopsis
+
+Print (decrypted) config file, or the config for a single remote.
+
+    rclone config show [<remote>] [flags]
+
+Options
+
+ -h, --help help for show
+
+
+rclone config update
+
+Update options in an existing remote.
+
+Synopsis
+
+Update an existing remote's options. The options should be passed in
+pairs of <key> <value>.
+
+For example to update the env_auth field of a remote of name myremote
+you would do:
+
+ rclone config update myremote swift env_auth true
+
+    rclone config update <name> [<key> <value>]+ [flags]
+
+Options
+
+ -h, --help help for update
+
+
rclone copyto
Copy files from source to dest, skipping already copied
@@ -827,7 +1019,7 @@ Options
rclone dbhashsum
-Produces a Dropbbox hash file for all the objects in the path.
+Produces a Dropbox hash file for all the objects in the path.
Synopsis
@@ -1068,6 +1260,14 @@ Filters
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
+systemd
+
+When running rclone mount as a systemd service, it is possible to use
+Type=notify. In this case the service will enter the started state after
+the mountpoint has been successfully set up. Units having the rclone
+mount service specified as a requirement will see all files and folders
+immediately in this mode.
+
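+A minimal example unit might look like this (a sketch; the remote,
+mount point and binary path are placeholders):
+
+    [Unit]
+    Description=rclone mount
+
+    [Service]
+    Type=notify
+    ExecStart=/usr/bin/rclone mount remote: /mnt/remote
+    ExecStop=/bin/fusermount -u /mnt/remote
+
+    [Install]
+    WantedBy=default.target
+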
Directory Cache
Using the --dir-cache-time flag, you can set how long a directory should
@@ -1082,29 +1282,120 @@ rclone instance is running, you can reset the cache like this:
kill -SIGHUP $(pidof rclone)
+File Caching
+
+NB File caching is EXPERIMENTAL - use with care!
+
+These flags control the VFS file caching options. The VFS layer is used
+by rclone mount to make a cloud storage system work more like a normal
+file system.
+
+You'll need to enable VFS caching if you want, for example, to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won't get
+written back to the remote. However they will still be in the on disk
+cache.
+
+--vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When a
+file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote. It will be
+purged on a schedule according to --vfs-cache-max-age.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
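+For example, to mount with file caching enabled for writes (a sketch;
+the remote and mount point are placeholders):
+
+    rclone mount remote: /mnt/remote --vfs-cache-mode writes
+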
rclone mount remote:path /path/to/mountpoint [flags]
Options
- --allow-non-empty Allow mounting over a non-empty directory.
- --allow-other Allow access to other users.
- --allow-root Allow access to root user.
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for mount
- --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem.
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
+ --allow-non-empty Allow mounting over a non-empty directory.
+ --allow-other Allow access to other users.
+ --allow-root Allow access to root user.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for mount
+ --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
rclone moveto
@@ -1245,6 +1536,9 @@ This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
+If you supply the --leave-root flag, it will not remove the root
+directory.
+
This is useful for tidying up remotes that rclone has left a lot of
empty directories in.
@@ -1252,7 +1546,324 @@ empty directories in.
Options
- -h, --help help for rmdirs
+ -h, --help help for rmdirs
+ --leave-root Do not remove root directory if empty
+
+
+rclone serve
+
+Serve a remote over a protocol.
+
+Synopsis
+
+rclone serve is used to serve a remote over a given protocol. This
+command requires the use of a subcommand to specify the protocol, eg
+
+ rclone serve http remote:
+
+Each subcommand has its own options which you can see in their help.
+
+ rclone serve [opts] [flags]
+
+Options
+
+ -h, --help help for serve
+
+
+rclone serve http
+
+Serve the remote over HTTP.
+
+Synopsis
+
+rclone serve http implements a basic web server to serve the remote over
+HTTP. This can be viewed in a web browser or you can make a remote of
+type http read from it.
+
+Use --addr to specify which IP address and port the server should listen
+on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By
+default it only listens on localhost.
+
+You can use the filter flags (eg --include, --exclude) to control what
+is served.
+
+The server will log errors. Use -v to see access logs.
+
+--bwlimit will be respected for file transfers. Use --stats to control
+the stats printing.
+
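+For example, to serve a remote read-only on port 8080 on all
+interfaces (a sketch):
+
+    rclone serve http remote: --addr :8080 --read-only
+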
+Directory Cache
+
+Using the --dir-cache-time flag, you can set how long a directory should
+be considered up to date and not refreshed from the backend. Changes
+made locally in the mount may appear immediately or invalidate the
+cache. However, changes done on the remote will only be picked up once
+the cache expires.
+
+Alternatively, you can send a SIGHUP signal to rclone for it to flush
+all directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+File Caching
+
+NB File caching is EXPERIMENTAL - use with care!
+
+These flags control the VFS file caching options. The VFS layer is used
+by rclone mount to make a cloud storage system work more like a normal
+file system.
+
+You'll need to enable VFS caching if you want, for example, to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won't get
+written back to the remote. However they will still be in the on disk
+cache.
+
+--vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When a
+file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote. It will be
+purged on a schedule according to --vfs-cache-max-age.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+ rclone serve http remote:path [flags]
+
+Options
+
+ --addr string IPaddress:Port to bind server to. (default "localhost:8080")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for http
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+
+rclone serve webdav
+
+Serve remote:path over webdav.
+
+Synopsis
+
+rclone serve webdav implements a basic webdav server to serve the remote
+over HTTP via the webdav protocol. This can be viewed with a webdav
+client or you can make a remote of type webdav to read and write it.
+
+NB at the moment each directory listing reads the start of each file
+which is undesirable: see https://github.com/golang/go/issues/22577
+
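+For example, to serve a remote over webdav on the default address (a
+sketch):
+
+    rclone serve webdav remote:
+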
+Directory Cache
+
+Using the --dir-cache-time flag, you can set how long a directory should
+be considered up to date and not refreshed from the backend. Changes
+made locally in the mount may appear immediately or invalidate the
+cache. However, changes done on the remote will only be picked up once
+the cache expires.
+
+Alternatively, you can send a SIGHUP signal to rclone for it to flush
+all directory caches, regardless of how old they are. Assuming only one
+rclone instance is running, you can reset the cache like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+File Caching
+
+NB File caching is EXPERIMENTAL - use with care!
+
+These flags control the VFS file caching options. The VFS layer is used
+by rclone mount to make a cloud storage system work more like a normal
+file system.
+
+You'll need to enable VFS caching if you want, for example, to read and
+write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with -vv rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with --cache-dir or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by --vfs-cache-mode. The higher
+the cache mode the more compatible rclone becomes at the cost of using
+disk space.
+
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won't get
+written back to the remote. However they will still be in the on disk
+cache.
+
+--vfs-cache-mode off
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible
+
+- Files can't be opened for both read AND write
+- Files opened for write can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files open for read with O_TRUNC will be opened write only
+- Files open for write only will behave as if O_TRUNC was supplied
+- Open modes O_APPEND, O_TRUNC are ignored
+- If an upload fails it can't be retried
+
+--vfs-cache-mode minimal
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for write
+will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible
+
+- Files opened for write only can't be seeked
+- Existing files opened for write must have O_TRUNC set
+- Files opened for write only will ignore O_APPEND, O_TRUNC
+- If an upload fails it can't be retried
+
+--vfs-cache-mode writes
+
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+--vfs-cache-mode full
+
+In this mode all reads and writes are buffered to and from disk. When a
+file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote. It will be
+purged on a schedule according to --vfs-cache-max-age.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+ rclone serve webdav remote:path [flags]
+
+Options
+
+ --addr string IPaddress:Port to bind server to. (default "localhost:8081")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for webdav
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+
+rclone touch
+
+Create new file or change file modification time.
+
+Synopsis
+
+Create new file or change file modification time.
+
+ rclone touch remote:path [flags]
+
+Options
+
+ -h, --help help for touch
+ -C, --no-create Do not create the file if it does not exist.
+ -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 17.10.30) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
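+
+For example, to set a file's modification time explicitly (a sketch;
+the path is a placeholder):
+
+    rclone touch remote:path/file.txt -t 2006-01-02T15:04:05
+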
rclone tree
@@ -1467,8 +2078,8 @@ might want to pass --suffix with today's date.
Local address to bind to for outgoing connections. This can be an IPv4
address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the
-host name doesn't resolve or resoves to more than one IP address it will
-give an error.
+host name doesn't resolve or resolves to more than one IP address it
+will give an error.
--bwlimit=BANDWIDTH_SPEC
@@ -1666,6 +2277,11 @@ This can be useful as an additional layer of protection for immutable or
append-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
+
+--leave-root
+
+During rmdirs it will not remove the root directory, even if it's empty.
+
--log-file=FILE
Log all of rclone's output to FILE. This is not active by default. This
@@ -1674,7 +2290,7 @@ the -v flag. See the Logging section for more info.
--log-level LEVEL
-This sets the log level for rclone. The default log level is INFO.
+This sets the log level for rclone. The default log level is NOTICE.
DEBUG is equivalent to -vv. It outputs lots of debug info - useful for
bug reports and really finding out what rclone is doing.
@@ -2081,13 +2697,19 @@ which are used for testing. These start with remote name eg
Write CPU profile to file. This can be analysed with go tool pprof.
---dump-auth
+--dump flag,flag,flag
-Dump HTTP headers - will contain sensitive info such as Authorization:
-headers - use --dump-headers to dump without Authorization: headers. Can
-be very verbose. Useful for debugging only.
+The --dump flag takes a comma separated list of flags to dump info
+about. These are:
---dump-bodies
+--dump headers
+
+Dump HTTP headers with Authorization: lines removed. May still contain
+sensitive info. Can be very verbose. Useful for debugging only.
+
+Use --dump auth if you do want the Authorization: headers.
+
+--dump bodies
Dump HTTP headers and bodies - may contain sensitive info. Can be very
verbose. Useful for debugging only.
@@ -2095,18 +2717,27 @@ verbose. Useful for debugging only.
Note that the bodies are buffered in memory so don't use this for
enormous files.
---dump-filters
+--dump requests
+
+Like --dump bodies but dumps the request bodies and the response
+headers. Useful for debugging download problems.
+
+--dump responses
+
+Like --dump bodies but dumps the response bodies and the request
+headers. Useful for debugging upload problems.
+
+--dump auth
+
+Dump HTTP headers - will contain sensitive info such as Authorization:
+headers - use --dump headers to dump without Authorization: headers. Can
+be very verbose. Useful for debugging only.
+
+--dump filters
Dump the filters to the output. Useful to see exactly what include and
exclude options are filtering on.
---dump-headers
-
-Dump HTTP headers with Authorization: lines removed. May still contain
-sensitive info. Can be very verbose. Useful for debugging only.
-
-Use --dump-auth if you do want the Authorization: headers.
-
--memprofile=FILE
Write memory profile to file. This can be analysed with go tool pprof.
@@ -2159,7 +2790,7 @@ For the filtering options
- --max-size
- --min-age
- --max-age
-- --dump-filters
+- --dump filters
See the filtering section.
@@ -2214,6 +2845,19 @@ so the user can see that any previous error messages may not be valid
after the retry. If rclone has done a retry it will log a high priority
message if the retry was successful.
+List of exit codes
+
+- 0 - success
+- 1 - Syntax or usage error
+- 2 - Error not otherwise categorised
+- 3 - Directory not found
+- 4 - File not found
+- 5 - Temporary error (one that more retries might fix) (Retry errors)
+- 6 - Less serious errors (like 461 errors from dropbox)
+ (NoRetry errors)
+- 7 - Fatal error (one that more retries won't fix, like
+ account suspended) (Fatal errors)
+
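+In a shell script you might branch on these codes, for example (a
+sketch):
+
+    rclone copy /local/path remote:path
+    case $? in
+        0) echo "success" ;;
+        3) echo "directory not found" ;;
+        *) echo "other error" ;;
+    esac
+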
Environment Variables
@@ -2248,8 +2892,8 @@ the values are (the config file can be found by looking at the help for
--config in rclone help).
To find the name of the environment variable, you need to set, take
-RCLONE_ + name of remote + _ + name of config file option and make it
-all uppercase.
+RCLONE_CONFIG_ + name of remote + _ + name of config file option and
+make it all uppercase.
For example, to configure an S3 remote named mys3: without a config file
(using unix ways of setting environment variables):
@@ -2423,7 +3067,7 @@ A [ and ] together make a a character class, such as [a-z] or [aeiou] or
- doesn't match "hullo"
A { and } define a choice between elements. It should contain a comma
-seperated list of patterns, any of which might match. These patterns can
+separated list of patterns, any of which might match. These patterns can
contain wildcards.
{one,two}_potato - matches "one_potato"
@@ -2520,6 +3164,10 @@ type.
- --filter
- --filter-from
+IMPORTANT You should not use --include* together with --exclude*. It
+may produce different results than you expect. In that case try to use
+--filter* instead.
+
Note that all the options of the same type are processed together in the
order above, regardless of what order they were placed on the command
line.
@@ -2751,7 +3399,7 @@ are now excluded from the sync.
Always test first with --dry-run and -v before using this flag.
---dump-filters - dump the filters to the output
+--dump filters - dump the filters to the output
This dumps the defined filters to the output as regular expressions.
@@ -2775,11 +3423,33 @@ should work fine
- --include *.jpg
+Exclude directory based on a file
+
+It is possible to exclude a directory based on a file present in that
+directory. The filename should be specified using the
+--exclude-if-present flag. This flag has priority over the other
+filtering flags.
+
+Imagine you have the following directory structure:
+
+ dir1/file1
+ dir1/dir2/file2
+ dir1/dir2/dir3/file3
+ dir1/dir2/dir3/.ignore
+
+You can exclude dir3 from sync by running the following command:
+
+ rclone sync --exclude-if-present .ignore dir1 remote:backup
+
+Currently only one filename is supported, i.e. --exclude-if-present
+should not be used multiple times.
+
+
OVERVIEW OF CLOUD STORAGE SYSTEMS
-Each cloud storage system is slighly different. Rclone attempts to
+Each cloud storage system is slightly different. Rclone attempts to
provide a unified interface to them, but some underlying differences
show through.
@@ -2803,8 +3473,10 @@ Here is an overview of the major features of each cloud storage system.
Microsoft Azure Blob Storage MD5 Yes No No R/W
Microsoft OneDrive SHA1 Yes Yes No R
Openstack Swift MD5 Yes No No R/W
+ pCloud MD5, SHA1 Yes No No W
QingStor MD5 No No No R/W
SFTP MD5, SHA1 ‡ Yes Depends No -
+ WebDAV - Yes †† Depends No -
Yandex Disk MD5 Yes No No R/W
The local filesystem All Yes Depends No -
@@ -2824,6 +3496,8 @@ of all the 4MB block SHA256s.
‡ SFTP supports checksums if the same login has shell access and md5sum
or sha1sum as well as echo are in the remote's PATH.
+†† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
+
ModTime
The cloud storage system supports setting modification times on objects.
@@ -2902,8 +3576,10 @@ more efficient.
Microsoft Azure Blob Storage Yes Yes No No No Yes No
Microsoft OneDrive Yes Yes Yes No #197 No #575 No No
Openstack Swift Yes † Yes No No No Yes Yes
+ pCloud Yes Yes Yes Yes Yes No No
QingStor No Yes No No No Yes No
SFTP No No Yes Yes No No Yes
+ WebDAV Yes Yes Yes Yes No No Yes ‡
Yandex Disk Yes No No No Yes Yes Yes
The local filesystem Yes No Yes Yes No No Yes
@@ -2916,6 +3592,8 @@ directory.
markers but they don't actually have a quicker way of deleting files
other than deleting them individually.
+‡ StreamUpload is not supported with Nextcloud
+
Copy
Used when copying an object to and from the same remote. This known as a
@@ -3215,7 +3893,7 @@ This will guide you through an interactive setup process.
13 / Yandex Disk
\ "yandex"
Storage> 2
- Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
@@ -3412,6 +4090,7 @@ order of precedence:
- Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
- Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
- Session Token: AWS_SESSION_TOKEN
+- Running rclone in an ECS task with an IAM role
- Running rclone on an EC2 instance with an IAM role
If none of these option actually end up providing rclone with AWS
@@ -3519,7 +4198,7 @@ blank access_key_id and secret_access_key. Eg
10) s3
11) yandex
type> 10
- Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
* Enter AWS credentials in the next step
1) false
@@ -3573,6 +4252,54 @@ removed).
Because this is a json dump, it is encoding the / as \/, so if you use
the secret key as xxxxxx/xxxx it will work fine.
+DigitalOcean Spaces
+
+Spaces is an S3-interoperable object storage service from cloud provider
+DigitalOcean.
+
+To connect to DigitalOcean Spaces you will need an access key and secret
+key. These can be retrieved on the "Applications & API" page of the
+DigitalOcean control panel. They will be needed when prompted by
+rclone config for your access_key_id and secret_access_key.
+
+When prompted for a region or location_constraint, press enter to use
+the default value. The region must be included in the endpoint setting
+(e.g. nyc3.digitaloceanspaces.com). The default values can be used for
+other settings.
+
+Going through the whole process of creating a new remote by running
+rclone config, each prompt should be answered as shown below:
+
+ Storage> 2
+ env_auth> 1
+ access_key_id> YOUR_ACCESS_KEY
+ secret_access_key> YOUR_SECRET_KEY
+ region>
+ endpoint> nyc3.digitaloceanspaces.com
+ location_constraint>
+ acl>
+ storage_class>
+
+The resulting configuration file should look like:
+
+ [spaces]
+ type = s3
+ env_auth = false
+ access_key_id = YOUR_ACCESS_KEY
+ secret_access_key = YOUR_SECRET_KEY
+ region =
+ endpoint = nyc3.digitaloceanspaces.com
+ location_constraint =
+ acl =
+ server_side_encryption =
+ storage_class =
+
+Once configured, you can create a new Space and begin copying files. For
+example:
+
+ rclone mkdir spaces:my-new-space
+ rclone copy /path/to/files spaces:my-new-space
+
Minio
Minio is an object storage server built for cloud application developers
@@ -3655,7 +4382,7 @@ rclone like this.
\ "s3"
[snip]
Storage> s3
- Get AWS credentials from runtime (environment variables or EC2 meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
+ Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars). Only applies if access_key_id and secret_access_key is blank.
Choose a number from below, or type in your own value
1 / Enter AWS credentials in the next step
\ "false"
@@ -3850,13 +4577,26 @@ SHA1 checksums
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
-Large files which are uploaded in chunks will store their SHA1 on the
-object as X-Bz-Info-large_file_sha1 as recommended by Backblaze.
+Large files (bigger than the limit in --b2-upload-cutoff) which are
+uploaded in chunks will store their SHA1 on the object as
+X-Bz-Info-large_file_sha1 as recommended by Backblaze.
+
+For a large file to be uploaded with an SHA1 checksum, the source needs
+to support SHA1 checksums. The local disk supports SHA1 checksums so
+large file transfers from local disk will have an SHA1. See the overview
+for exactly which remotes support SHA1.
+
+Sources which don't support SHA1, in particular crypt, will upload
+large files without SHA1 checksums. This may be fixed in the future
+(see #1767).
+
+Files below --b2-upload-cutoff will always have an SHA1 regardless
+of the source.
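+
+If you want to check the SHA1s rclone sees on the remote, the sha1sum
+command can be used (bucket and path here are illustrative):
+
+ rclone sha1sum b2:mybucket/path
+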
Transfers
Backblaze recommends that you do lots of transfers simultaneously for
-maximum speed. In tests from my SSD equiped laptop the optimum setting
+maximum speed. In tests from my SSD equipped laptop the optimum setting
is about --transfers 32 though higher numbers may be used for a slight
speed improvement. The optimum number for you may vary depending on your
hardware, how big the files are, how much you want to load your
@@ -3890,8 +4630,8 @@ deleted then the bucket will be deleted.
However delete will cause the current versions of the files to become
hidden old versions.
-Here is a session showing the listing and and retreival of an old
-version followed by a cleanup of the old versions.
+Here is a session showing the listing and retrieval of an old version
+followed by a cleanup of the old versions.
Show current version and all the versions with --b2-versions flag.
@@ -3904,7 +4644,7 @@ Show current version and all the versions with --b2-versions flag.
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
-Retreive an old verson
+Retrieve an old version
$ rclone -q --b2-versions copy b2:cleanup-test/one-v2016-07-04-141003-000.txt /tmp
@@ -3953,15 +4693,6 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_upload_part/
/b2api/v1/b2_finish_large_file
-B2 with crypt
-
-When using B2 with crypt files are encrypted into a temporary location
-and streamed from there. This is required to calculate the encrypted
-file's checksum before beginning the upload. On Windows the %TMPDIR%
-environment variable is used as the temporary location. If the file
-requires chunking, both the chunking and encryption will take place in
-memory.
-
Specific options
Here are the command line options specific to this cloud storage system.
@@ -4247,6 +4978,345 @@ from an identical looking unicode equivalent \.
Box only supports filenames up to 255 characters in length.
+Cache (BETA)
+
+The cache remote wraps another existing remote and stores file structure
+and its data for long running tasks like rclone mount.
+
+To get started you just need to have an existing remote which can be
+configured with cache.
+
+Here is an example of how to make a remote called test-cache. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found - make a new one
+ n) New remote
+ r) Rename remote
+ c) Copy remote
+ s) Set configuration password
+ q) Quit config
+ n/r/c/s/q> n
+ name> test-cache
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ ...
+ 5 / Cache a remote
+ \ "cache"
+ ...
+ Storage> 5
+ Remote to cache.
+ Normally should contain a ':' and a path, eg "myremote:path/to/dir",
+ "myremote:bucket" or maybe "myremote:" (not recommended).
+ remote> local:/test
+ Optional: The URL of the Plex server
+ plex_url> http://127.0.0.1:32400
+ Optional: The username of the Plex user
+ plex_username> dummyusername
+ Optional: The password of the Plex user
+ y) Yes type in my own password
+ g) Generate random password
+ n) No leave this optional password blank
+ y/g/n> y
+ Enter the password:
+ password:
+ Confirm the password:
+ password:
+ The size of a chunk. Lower value good for slow connections but can affect seamless reading.
+ Default: 5M
+ Choose a number from below, or type in your own value
+ 1 / 1MB
+ \ "1m"
+ 2 / 5 MB
+ \ "5M"
+ 3 / 10 MB
+ \ "10M"
+ chunk_size> 2
+ How much time should object info (file size, file hashes etc) be stored in cache. Use a very high value if you don't plan on changing the source FS from outside the cache.
+ Accepted units are: "s", "m", "h".
+ Default: 5m
+ Choose a number from below, or type in your own value
+ 1 / 1 hour
+ \ "1h"
+ 2 / 24 hours
+ \ "24h"
+ 3 / 48 hours
+ \ "48h"
+ info_age> 2
+ The maximum size of stored chunks. When the storage grows beyond this size, the oldest chunks will be deleted.
+ Default: 10G
+ Choose a number from below, or type in your own value
+ 1 / 500 MB
+ \ "500M"
+ 2 / 1 GB
+ \ "1G"
+ 3 / 10 GB
+ \ "10G"
+ chunk_total_size> 3
+ Remote config
+ --------------------
+ [test-cache]
+ remote = local:/test
+ plex_url = http://127.0.0.1:32400
+ plex_username = dummyusername
+ plex_password = *** ENCRYPTED ***
+ chunk_size = 5M
+ info_age = 48h
+ chunk_total_size = 10G
+
+You can then use it like this,
+
+List directories in top level of your drive
+
+ rclone lsd test-cache:
+
+List all the files in your drive
+
+ rclone ls test-cache:
+
+To start a cached mount
+
+ rclone mount --allow-other test-cache: /var/tmp/test-cache
+
+Write Support
+
+Writes are supported through cache. One caveat is that a mounted cache
+remote does not add any retry or fallback mechanism to the upload
+operation. This will depend on the implementation of the wrapped remote.
+
+One special case is covered with cache-writes which, when enabled,
+will cache the file data at the same time as the upload, making it
+available from the cache store immediately once the upload finishes.
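+
+For example, to have uploads cached as they happen, using the remote
+from the example above:
+
+ rclone copy --cache-writes /local/path test-cache:path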
+
+Read Features
+
+Multiple connections
+
+To counter the high latency between a local PC where rclone is running
+and cloud providers, the cache remote can split requests to the cloud
+provider into smaller file chunks and combine them locally, making the
+data available almost immediately, before the reader usually needs it.
+
+This is similar to buffering when media files are played online. Rclone
+will stay around the current read position but always try its best to
+stay ahead and prepare the data before it is needed.
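+
+A minimal sketch of tuning this behaviour on a mount, using flags
+documented below (values are illustrative):
+
+ rclone mount --cache-workers 8 --cache-chunk-size 10M test-cache: /var/tmp/test-cache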
+
+Plex Integration
+
+There is a direct integration with Plex which allows cache to detect
+during reading if the file is in playback or not. This helps cache to
+adapt how it queries the cloud provider depending on what the data is
+needed for.
+
+Scans will use a minimum number of workers (1) while during confirmed
+playback cache will deploy the configured number of workers.
+
+This integration opens the doorway to additional performance
+improvements which will be explored in the near future.
+
+NOTE: If Plex options are not configured, cache will function with its
+configured options without adapting any of its settings.
+
+How to enable? Run rclone config and add all the Plex options
+(endpoint, username and password) to your remote and it will be
+automatically enabled.
+
+Affected settings:
+
+- cache-workers: _Configured value_ during confirmed playback or _1_
+  all the other times
+
+Known issues
+
+Windows support - Experimental
+
+There are a couple of issues with Windows mount functionality that
+still require some investigation. It should be considered experimental
+for now while fixes for this OS come in.
+
+Most of the issues seem to be related to the difference between
+filesystems on Linux flavors and Windows as cache is heavily dependent
+on them.
+
+Any reports or feedback on how cache behaves on this OS are greatly
+appreciated.
+
+- https://github.com/ncw/rclone/issues/1935
+- https://github.com/ncw/rclone/issues/1907
+- https://github.com/ncw/rclone/issues/1834
+
+Risk of throttling
+
+Future iterations of the cache backend will make use of the pooling
+functionality of the cloud provider to synchronize and at the same time
+make writing through it more tolerant to failures.
+
+There are a couple of enhancements in the works to add these but in the
+meantime there is a valid concern that the expiring cache listings can
+lead to cloud provider throttles or bans due to repeated queries on it
+for very large mounts.
+
+Some recommendations:
+
+- don't use a very small interval for entry information
+  (--cache-info-age)
+- while writes aren't yet optimised, you can still write through cache
+  which gives you the advantage of adding the file in the cache at the
+  same time if configured to do so.
+
+Future enhancements:
+
+- https://github.com/ncw/rclone/issues/1937
+- https://github.com/ncw/rclone/issues/1936
+
+cache and crypt
+
+One common scenario is to keep your data encrypted in the cloud provider
+using the crypt remote. crypt uses a similar technique to wrap around an
+existing remote and handles this translation in a seamless way.
+
+There is an issue with wrapping the remotes in this order: CLOUD REMOTE
+-> CRYPT -> CACHE
+
+During testing, I experienced a lot of bans with the remotes in this
+order. I suspect it might be related to how crypt opens files on the
+cloud provider which makes it think we're downloading the full file
+instead of small chunks. Organizing the remotes in this order yields
+better results: CLOUD REMOTE -> CACHE -> CRYPT
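+
+As a sketch, with hypothetical remote names (other settings omitted),
+the recommended ordering looks like this in the config file:
+
+ [clouddrive]
+ type = drive
+
+ [cloudcache]
+ type = cache
+ remote = clouddrive:data
+
+ [cloudcrypt]
+ type = crypt
+ remote = cloudcache: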
+
+Specific options
+
+Here are the command line options specific to this cloud storage system.
+
+--cache-chunk-path=PATH
+
+Path to where partial file data (chunks) is stored locally. The remote
+name is appended to the final path.
+
+This config follows the --cache-db-path. If you specify a custom
+location for --cache-db-path and don't specify one for
+--cache-chunk-path then --cache-chunk-path will use the same path as
+--cache-db-path.
+
+DEFAULT: /cache-backend/
+EXAMPLE: /.cache/cache-backend/test-cache
+
+--cache-db-path=PATH
+
+Path to where the file structure metadata (DB) is stored locally. The
+remote name is used as the DB file name.
+
+DEFAULT: /cache-backend/
+EXAMPLE: /.cache/cache-backend/test-cache
+
+--cache-db-purge
+
+Flag to clear all the cached data for this remote before starting.
+
+DEFAULT: not set
+
+--cache-chunk-size=SIZE
+
+The size of a chunk (partial file data). Use lower numbers for slower
+connections.
+
+DEFAULT: 5M
+
+--cache-total-chunk-size=SIZE
+
+The total size that the chunks can take up on the local disk. If cache
+exceeds this value then it will start to delete the oldest chunks
+until it goes under this value.
+
+DEFAULT: 10G
+
+--cache-chunk-clean-interval=DURATION
+
+How often should cache perform cleanups of the chunk storage. The
+default value should be ok for most people. If you find that cache goes
+over cache-total-chunk-size too often then try to lower this value to
+force it to perform cleanups more often.
+
+DEFAULT: 1m
+
+--cache-info-age=DURATION
+
+How long to keep file structure information (directory listings, file
+size, mod times etc) locally.
+
+If all write operations are done through cache then you can safely make
+this value very large as the cache store will also be updated in real
+time.
+
+DEFAULT: 6h
+
+--cache-read-retries=RETRIES
+
+How many times to retry a read from a cache storage.
+
+Since reading from a cache stream is independent from downloading file
+data, readers can get to a point where there's no more data in the
+cache. Most of the time this can indicate a connectivity issue if cache
+isn't able to provide file data anymore.
+
+For really slow connections, increase this to a point where the stream
+is able to provide data but expect a lot of stuttering.
+
+DEFAULT: 10
+
+--cache-workers=WORKERS
+
+How many workers should run in parallel to download chunks.
+
+Higher values will mean more parallel processing (more CPU needed) and
+more concurrent requests on the cloud provider. This impacts several
+aspects like the cloud provider API limits and the stress on the
+hardware that rclone runs on, but it also means that streams will be
+more fluid and data will be available to readers much faster.
+
+NOTE: If the optional Plex integration is enabled then this setting
+will adapt to the type of reading performed and the value specified
+here will be used as a maximum number of workers to use.
+
+DEFAULT: 4
+
+--cache-chunk-no-memory
+
+By default, cache will keep file data during streaming in RAM as well to
+provide it to readers as fast as possible.
+
+This transient data is evicted as soon as it is read and the number of
+chunks stored doesn't exceed the number of workers. However, depending
+on other settings like cache-chunk-size and cache-workers this footprint
+can increase if there are parallel streams too (multiple files being
+read at the same time).
+
+If the hardware permits it, leave this flag unset for better overall
+streaming performance; set it to disable the in-memory cache if RAM is
+scarce on the local machine.
+
+DEFAULT: not set
+
+--cache-rps=NUMBER
+
+This setting places a hard limit on the number of requests per second
+that cache will make to the cloud provider remote and tries to respect
+that value by adding waits between reads.
+
+If you find that you're getting banned or limited by the cloud
+provider when going through cache, and you know that a smaller number
+of requests per second will allow you to work with it, then you can
+use this setting for that.
+
+A good balance of all the other settings should make this setting
+unnecessary but it is available for more special cases.
+
+NOTE: This will limit the number of requests during streams but other
+API calls to the cloud provider like directory listings will still pass.
+
+DEFAULT: disabled
+
+--cache-writes
+
+If you need to read files immediately after you upload them through
+cache you can enable this flag to have their data stored in the cache
+store at the same time during upload.
+
+DEFAULT: not set
+
+
Crypt
The crypt remote encrypts and decrypts another remote.
@@ -4315,6 +5385,13 @@ differentiate it from the remote.
3 / Very simple filename obfuscation.
\ "obfuscate"
filename_encryption> 2
+ Option to either encrypt directory names or leave them intact.
+ Choose a number from below, or type in your own value
+ 1 / Encrypt directory names.
+ \ "true"
+ 2 / Don't encrypt directory names, leave them intact.
+ \ "false"
+ directory_name_encryption> 1
Password or pass phrase for encryption.
y) Yes type in my own password
g) Generate random password
@@ -4455,7 +5532,7 @@ Standard
- file names encrypted
- file names can't be as long (~156 characters)
- can use sub paths and copy single files
-- directory structure visibile
+- directory structure visible
- identical file names will have identical uploaded names
- can use shortcuts to shorten the directory recursion
@@ -4477,7 +5554,7 @@ equivalents. You can not rely on this for strong protection.
- file names very lightly obfuscated
- file names can be longer than standard encryption
- can use sub paths and copy single files
-- directory structure visibile
+- directory structure visible
- identical file names will have identical uploaded names
Cloud storage systems have various limits on file name length and total
@@ -4488,6 +5565,22 @@ length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future
which will address the long file name problem.
+Directory name encryption
+
+Crypt offers the option of encrypting directory names or leaving them
+intact. There are two options:
+
+True
+
+Encrypts the whole file path including directory names. Example:
+1/12/123.txt is encrypted to
+p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0
+
+False
+
+Only encrypts file names, skips directory names. Example: 1/12/123.txt
+is encrypted to 1/12/qgm4avr35m5loi1th53ato71v0
+
Modified time and hashes
Crypt stores modification times using the underlying remote so support
@@ -4525,7 +5618,7 @@ This will have the following advantages
- rclone sync will check the checksums while copying
- you can use rclone check between the encrypted remotes
-- you don't decrypt and encrypt unecessarily
+- you don't decrypt and encrypt unnecessarily
For example, let's say you have your original remote at remote: with the
encrypted version at eremote: with path remote:crypt. You would then set
@@ -4554,9 +5647,9 @@ Header
- 24 bytes Nonce (IV)
The initial nonce is generated from the operating system's crypto strong
-random number genrator. The nonce is incremented for each chunk read
+random number generator. The nonce is incremented for each chunk read
making sure each nonce is unique for each block written. The chance of a
-nonce being re-used is miniscule. If you wrote an exabyte of data (10¹⁸
+nonce being re-used is minuscule. If you wrote an exabyte of data (10¹⁸
bytes) you would have a probability of approximately 2×10⁻³² of re-using
a nonce.
@@ -4608,7 +5701,7 @@ They are then encrypted with EME using AES with 256 bit key. EME
(ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003
paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
-This makes for determinstic encryption which is what we want - the same
+This makes for deterministic encryption which is what we want - the same
filename must encrypt to the same thing otherwise we can't find it on
the cloud storage system.
@@ -4632,13 +5725,13 @@ used on case insensitive remotes (eg Windows, Amazon Drive).
Key derivation
-Rclone uses scrypt with parameters N=16384, r=8, p=1 with a an optional
+Rclone uses scrypt with parameters N=16384, r=8, p=1 with an optional
user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key
material required. If the user doesn't supply a salt then rclone uses an
internal one.
scrypt makes it impractical to mount a dictionary attack on rclone
-encrypted data. For full protection agains this you should always use a
+encrypted data. For full protection against this you should always use a
salt.
@@ -4744,8 +5837,13 @@ Here are the command line options specific to this cloud storage system.
--dropbox-chunk-size=SIZE
-Upload chunk size. Max 150M. The default is 128MB. Note that this isn't
-buffered into memory.
+Any files larger than this will be uploaded in chunks of this size. The
+default is 48MB. The maximum is 150MB.
+
+Note that chunks are buffered in memory (one at a time) so rclone can
+deal with retries. Setting this larger will increase the speed slightly
+(at most 10% for 128MB in tests) at the cost of using more memory. It
+can be set smaller if you are tight on memory.
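+
+For example, to use 100MB chunks on a copy (remote name is
+illustrative):
+
+ rclone copy --dropbox-chunk-size 100M /path/to/files dropbox:backup
+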
Limitations
@@ -5162,6 +6260,8 @@ This will guide you through an interactive setup process:
client_id>
Google Application Client Secret - leave blank normally.
client_secret>
+ Service Account Credentials JSON file path - needed only if you want to use SA instead of interactive login.
+ service_account_file>
Remote config
Use auto config?
* Say Y if not sure
@@ -5209,6 +6309,23 @@ To copy a local directory to a drive directory called backup
rclone copy /home/source remote:backup
+Service Account support
+
+You can set up rclone with Google Drive in an unattended mode, i.e. not
+tied to a specific end-user Google account. This is useful when you want
+to synchronise files onto machines that don't have actively logged-in
+users, for example build machines.
+
+To create a service account and obtain its credentials, go to the Google
+Developer Console and use the "Create Credentials" button. After
+creating an account, a JSON file containing the Service Account's
+credentials will be downloaded onto your machine. These credentials are
+what rclone will use for authentication.
+
+To use a Service Account instead of OAuth2 token flow, enter the path to
+your Service Account credentials at the service_account_file prompt and
+rclone won't use the browser based authentication flow.
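+
+A minimal sketch of the resulting remote (name and file path are
+illustrative, other settings omitted):
+
+ [svcdrive]
+ type = drive
+ service_account_file = /home/user/credentials.json
+
+Such a remote can then be used unattended, eg rclone lsd svcdrive: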
+
Team drives
If you want to configure the remote to point to a Google Team Drive then
@@ -5501,13 +6618,13 @@ Here is how to create your own Google Drive client ID for rclone:
2. Select a project or create a new project.
-3. Under Overview, Google APIs, Google Apps APIs, click "Drive API",
- then "Enable".
+3. Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the
+ then "Google Drive API".
-4. Click "Credentials" in the left-side panel (not "Go to credentials",
- which opens the wizard), then "Create credentials", then "OAuth
- client ID". It will prompt you to set the OAuth consent screen
- product name, if you haven't set one already.
+4. Click "Credentials" in the left-side panel (not "Create
+ credentials", which opens the wizard), then "Create credentials",
+ then "OAuth client ID". It will prompt you to set the OAuth consent
+ screen product name, if you haven't set one already.
5. Choose an application type of "other", and click "Create". (the
default name is fine)
@@ -6246,6 +7363,7 @@ that being:
- Memset Memstore
- OVH Object Storage
- Oracle Cloud Storage
+- IBM Bluemix Cloud ObjectStorage Swift
Paths are specified as remote:container (or remote: for the lsd
command.) You may put subdirectories in too, eg
@@ -6273,33 +7391,39 @@ This will guide you through an interactive setup process.
\ "b2"
4 / Box
\ "box"
- 5 / Dropbox
+ 5 / Cache a remote
+ \ "cache"
+ 6 / Dropbox
\ "dropbox"
- 6 / Encrypt/Decrypt a remote
+ 7 / Encrypt/Decrypt a remote
\ "crypt"
- 7 / FTP Connection
+ 8 / FTP Connection
\ "ftp"
- 8 / Google Cloud Storage (this is not Google Drive)
+ 9 / Google Cloud Storage (this is not Google Drive)
\ "google cloud storage"
- 9 / Google Drive
+ 10 / Google Drive
\ "drive"
- 10 / Hubic
+ 11 / Hubic
\ "hubic"
- 11 / Local Disk
+ 12 / Local Disk
\ "local"
- 12 / Microsoft Azure Blob Storage
+ 13 / Microsoft Azure Blob Storage
\ "azureblob"
- 13 / Microsoft OneDrive
+ 14 / Microsoft OneDrive
\ "onedrive"
- 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ 15 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
\ "swift"
- 15 / QingClound Object Storage
+ 16 / Pcloud
+ \ "pcloud"
+ 17 / QingCloud Object Storage
\ "qingstor"
- 16 / SSH/SFTP Connection
+ 18 / SSH/SFTP Connection
\ "sftp"
- 17 / Yandex Disk
+ 19 / Webdav
+ \ "webdav"
+ 20 / Yandex Disk
\ "yandex"
- 18 / http Connection
+ 21 / http Connection
\ "http"
Storage> swift
Get swift credentials from environment variables in standard OpenStack form.
@@ -6308,12 +7432,12 @@ This will guide you through an interactive setup process.
\ "false"
2 / Get swift credentials from environment vars. Leave other fields blank if using this.
\ "true"
- env_auth> 1
- User name to log in.
- user> user_name
- API key or password.
- key> password_or_api_key
- Authentication URL for server.
+ env_auth> true
+ User name to log in (OS_USERNAME).
+ user>
+ API key or password (OS_PASSWORD).
+ key>
+ Authentication URL for server (OS_AUTH_URL).
Choose a number from below, or type in your own value
1 / Rackspace US
\ "https://auth.api.rackspacecloud.com/v1.0"
@@ -6327,20 +7451,26 @@ This will guide you through an interactive setup process.
\ "https://auth.storage.memset.com/v2.0"
6 / OVH
\ "https://auth.cloud.ovh.net/v2.0"
- auth> 1
- User domain - optional (v3 auth)
- domain> Default
- Tenant name - optional for v1 auth, required otherwise
- tenant> tenant_name
- Tenant domain - optional (v3 auth)
- tenant_domain>
- Region name - optional
- region>
- Storage URL - optional
- storage_url>
- AuthVersion - optional - set to (1,2,3) if your auth URL has no version
- auth_version>
- Endpoint type to choose from the service catalogue
+ auth>
+ User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
+ user_id>
+ User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
+ domain>
+ Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
+ tenant>
+ Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
+ tenant_id>
+ Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
+ tenant_domain>
+ Region name - optional (OS_REGION_NAME)
+ region>
+ Storage URL - optional (OS_STORAGE_URL)
+ storage_url>
+ Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
+ auth_token>
+ AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
+ auth_version>
+ Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
Choose a number from below, or type in your own value
1 / Public (default, choose this if not sure)
\ "public"
@@ -6348,21 +7478,24 @@ This will guide you through an interactive setup process.
\ "internal"
3 / Admin
\ "admin"
- endpoint_type>
+ endpoint_type>
Remote config
--------------------
- [remote]
- env_auth = false
- user = user_name
- key = password_or_api_key
- auth = https://auth.api.rackspacecloud.com/v1.0
- domain = Default
- tenant =
- tenant_domain =
- region =
- storage_url =
- auth_version =
- endpoint_type =
+ [test]
+ env_auth = true
+ user =
+ key =
+ auth =
+ user_id =
+ domain =
+ tenant =
+ tenant_id =
+ tenant_domain =
+ region =
+ storage_url =
+ auth_token =
+ auth_version =
+ endpoint_type =
--------------------
y) Yes this is OK
e) Edit this remote
@@ -6425,10 +7558,21 @@ of OpenStack environment variables.
When you run through the config, make sure you choose true for env_auth
and leave everything else blank.
-rclone will then set any empty config parameters from the enviroment
+rclone will then set any empty config parameters from the environment
using standard OpenStack environment variables. There is a list of the
variables in the docs for the swift library.
+Using an alternate authentication method
+
+If your OpenStack installation uses a non-standard authentication method
+that might not be yet supported by rclone or the underlying swift
+library, you can authenticate externally (e.g. calling manually the
+openstack commands to get a token). Then, you just need to pass the two
+configuration variables auth_token and storage_url. If they are both
+provided, the other variables are ignored. rclone will not try to
+authenticate but instead assume it is already authenticated and use
+these two variables to access the OpenStack installation.
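+
+A minimal sketch of such a config (token and URL are illustrative):
+
+ [swift]
+ type = swift
+ env_auth = false
+ auth_token = gAAAAABexampletoken
+ storage_url = https://storage.example.com/v1/AUTH_tenant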
+
Using rclone without a config file
You can use rclone with swift without a config file, if desired, like
@@ -6488,6 +7632,134 @@ This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
+pCloud
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, eg remote:directory/subdirectory.
+
+The initial setup for pCloud involves getting a token from pCloud which
+you need to do in your browser. rclone config walks you through it.
+
+Here is an example of how to make a remote called remote. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+ 10 / Hubic
+ \ "hubic"
+ 11 / Local Disk
+ \ "local"
+ 12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+ 13 / Microsoft OneDrive
+ \ "onedrive"
+ 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 15 / Pcloud
+ \ "pcloud"
+ 16 / QingCloud Object Storage
+ \ "qingstor"
+ 17 / SSH/SFTP Connection
+ \ "sftp"
+ 18 / Yandex Disk
+ \ "yandex"
+ 19 / http Connection
+ \ "http"
+ Storage> pcloud
+ Pcloud App Client Id - leave blank normally.
+ client_id>
+ Pcloud App Client Secret - leave blank normally.
+ client_secret>
+ Remote config
+ Use auto config?
+ * Say Y if not sure
+ * Say N if you are working on a remote or headless machine
+ y) Yes
+ n) No
+ y/n> y
+ If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth
+ Log in and authorize rclone for access
+ Waiting for code...
+ Got code
+ --------------------
+ [remote]
+ client_id =
+ client_secret =
+ token = {"access_token":"XXX","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+See the remote setup docs for how to set it up on a machine with no
+Internet browser available.
+
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from pCloud. This only runs from the moment it opens
+your browser to the moment you get back the verification code. This is
+on http://127.0.0.1:53682/ and it may require you to unblock it
+temporarily if you are running a host firewall.
+
+Once configured you can then use rclone like this,
+
+List directories in top level of your pCloud
+
+ rclone lsd remote:
+
+List all the files in your pCloud
+
+ rclone ls remote:
+
+To copy a local directory to a pCloud directory called backup
+
+ rclone copy /home/source remote:backup
+
+Modified time and hashes
+
+pCloud allows modification times to be set on objects accurate to 1
+second. These will be used to detect whether objects need syncing or
+not. In order to set a modification time pCloud requires the object to
+be re-uploaded.
+
+pCloud supports MD5 and SHA1 type hashes, so you can use the --checksum
+flag.
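+
+For example:
+
+ rclone sync --checksum /home/source remote:backup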
+
+Deleting files
+
+Deleted files will be moved to the trash. Your subscription level will
+determine how long items stay in the trash. rclone cleanup can be used
+to empty the trash.
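+
+For instance:
+
+ rclone cleanup remote: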
+
+
SFTP
SFTP is the Secure (or SSH) File Transfer Protocol.
@@ -6496,9 +7768,9 @@ It runs over SSH v2 and is standard with most modern SSH installations.
Paths are specified as remote:path. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
-refers to the users home directory.
+refers to the user's home directory.
-Here is an example of making a SFTP configuration. First run
+Here is an example of making an SFTP configuration. First run
rclone config
@@ -6573,7 +7845,7 @@ This will guide you through an interactive setup process.
d) Delete this remote
y/e/d> y
-This remote is called remote and can now be used like this
+This remote is called remote and can now be used like this:
See all directories in the home directory
@@ -6594,7 +7866,7 @@ files in the directory.
SSH Authentication
-The SFTP remote supports 3 authentication methods
+The SFTP remote supports three authentication methods:
- Password
- Key file
@@ -6603,8 +7875,8 @@ The SFTP remote supports 3 authentication methods
Key files should be unencrypted PEM-encoded private key files. For
instance /home/$USER/.ssh/id_rsa.
-If you don't specify pass or key_file then it will attempt to contact an
-ssh-agent.
+If you don't specify pass or key_file then rclone will attempt to
+contact an ssh-agent.
ssh-agent on macOS
@@ -6631,7 +7903,13 @@ Limitations
SFTP supports checksums if the same login has shell access and md5sum or
sha1sum as well as echo are in the remote's PATH.
-The only ssh agent supported under Windows is Putty's pagent.
+The only ssh agent supported under Windows is Putty's pageant.
+
+The Go SSH library disables the use of the aes128-cbc cipher by default,
+due to security concerns. This can be re-enabled on a per-connection
+basis by setting the use_insecure_cipher setting in the configuration
+file to true. Further details on the insecurity of this cipher can be
+found in this paper: http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf
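+
+For example, a config enabling it might look like this (host and user
+are illustrative):
+
+ [sftpremote]
+ type = sftp
+ host = example.com
+ user = user
+ use_insecure_cipher = true
+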
SFTP isn't supported under plan9 until this issue is fixed.
@@ -6641,6 +7919,167 @@ with it: --dump-headers, --dump-bodies, --dump-auth
Note that --timeout isn't supported (but --contimeout is).
+WebDAV
+
+Paths are specified as remote:path
+
+Paths may be as deep as required, eg remote:directory/subdirectory.
+
+To configure the WebDAV remote you will need to have a URL for it, and a
+username and password. If you know what kind of system you are
+connecting to then rclone can enable extra features.
+
+Here is an example of how to make a remote called remote. First run:
+
+ rclone config
+
+This will guide you through an interactive setup process:
+
+ No remotes found - make a new one
+ n) New remote
+ s) Set configuration password
+ q) Quit config
+ n/s/q> n
+ name> remote
+ Type of storage to configure.
+ Choose a number from below, or type in your own value
+ 1 / Amazon Drive
+ \ "amazon cloud drive"
+ 2 / Amazon S3 (also Dreamhost, Ceph, Minio)
+ \ "s3"
+ 3 / Backblaze B2
+ \ "b2"
+ 4 / Box
+ \ "box"
+ 5 / Dropbox
+ \ "dropbox"
+ 6 / Encrypt/Decrypt a remote
+ \ "crypt"
+ 7 / FTP Connection
+ \ "ftp"
+ 8 / Google Cloud Storage (this is not Google Drive)
+ \ "google cloud storage"
+ 9 / Google Drive
+ \ "drive"
+ 10 / Hubic
+ \ "hubic"
+ 11 / Local Disk
+ \ "local"
+ 12 / Microsoft Azure Blob Storage
+ \ "azureblob"
+ 13 / Microsoft OneDrive
+ \ "onedrive"
+ 14 / Openstack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
+ \ "swift"
+ 15 / Pcloud
+ \ "pcloud"
+ 16 / QingCloud Object Storage
+ \ "qingstor"
+ 17 / SSH/SFTP Connection
+ \ "sftp"
+ 18 / WebDAV
+ \ "webdav"
+ 19 / Yandex Disk
+ \ "yandex"
+ 20 / http Connection
+ \ "http"
+ Storage> webdav
+ URL of http host to connect to
+ Choose a number from below, or type in your own value
+ 1 / Connect to example.com
+ \ "https://example.com"
+ url> https://example.com/remote.php/webdav/
+ Name of the WebDAV site/service/software you are using
+ Choose a number from below, or type in your own value
+ 1 / Nextcloud
+ \ "nextcloud"
+ 2 / Owncloud
+ \ "owncloud"
+ 3 / Other site/service or software
+ \ "other"
+ vendor> 1
+ User name
+ user> user
+ Password.
+ y) Yes type in my own password
+ g) Generate random password
+ n) No leave this optional password blank
+ y/g/n> y
+ Enter the password:
+ password:
+ Confirm the password:
+ password:
+ Remote config
+ --------------------
+ [remote]
+ url = https://example.com/remote.php/webdav/
+ vendor = nextcloud
+ user = user
+ pass = *** ENCRYPTED ***
+ --------------------
+ y) Yes this is OK
+ e) Edit this remote
+ d) Delete this remote
+ y/e/d> y
+
+Once configured you can then use rclone like this,
+
+List directories in top level of your WebDAV
+
+ rclone lsd remote:
+
+List all the files in your WebDAV
+
+ rclone ls remote:
+
+To copy a local directory to a WebDAV directory called backup
+
+ rclone copy /home/source remote:backup
+
+Modified time and hashes
+
+Plain WebDAV does not support modified times. However when used with
+Owncloud or Nextcloud rclone will support modified times.
+
+Hashes are not supported.
+
+Owncloud
+
+Click on the settings cog in the bottom right of the page and this will
+show the WebDAV URL that rclone needs in the config step. It will look
+something like https://example.com/remote.php/webdav/.
+
+Owncloud supports modified times using the X-OC-Mtime header.
+
+Nextcloud
+
+This is configured in an identical way to Owncloud. Note that Nextcloud
+does not support streaming of files (rcat) whereas Owncloud does. This
+may be fixed in the future.
+
+
+Put.io
+
+put.io can be accessed in a read-only way using webdav.
+
+Configure the url as https://webdav.put.io and use your normal account
+username and password for user and pass. Set the vendor to other.
+
+Your config file should end up looking like this:
+
+ [putio]
+ type = webdav
+ url = https://webdav.put.io
+ vendor = other
+ user = YourUserName
+ pass = encryptedpassword
+
+If you are using put.io with rclone mount then use the --read-only flag
+to signal to the OS that it can't write to the mount.
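+
+For example (mount point is illustrative):
+
+ rclone mount --read-only putio: /mnt/putio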
+
+For more help see the put.io webdav docs.
+
+
Yandex Disk
Yandex Disk is a cloud storage solution created by Yandex.
@@ -6793,9 +8232,9 @@ old Linux filesystem with non UTF-8 file names (eg latin1) then you can
use the convmv tool to convert the filesystem to UTF-8. This tool is
available in most distributions' package managers.
-If an invalid (non-UTF8) filename is read, the invalid caracters will be
-replaced with the unicode replacement character, '�'. rclone will emit a
-debug message in this case (use -v to see), eg
+If an invalid (non-UTF8) filename is read, the invalid characters will
+be replaced with the unicode replacement character, '�'. rclone will
+emit a debug message in this case (use -v to see), eg
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
@@ -6880,7 +8319,7 @@ routine instead.
This tells rclone to stay in the filesystem specified by the root and
not to recurse into different file systems.
-For example if you have a directory heirachy like this
+For example if you have a directory hierarchy like this
root
├── disk1 - disk1 mounted on the root
@@ -6917,6 +8356,102 @@ points, as you explicitly acknowledge that they should be skipped.
Changelog
+- v1.39 - 2017-12-23
+ - New backends
+ - WebDAV
+ - tested with nextcloud, owncloud, put.io and others!
+ - Pcloud
+ - cache - wraps a cache around other backends (Remus Bunduc)
+ - useful in combination with mount
+ - NB this feature is in beta so use with care
+ - New commands
+ - serve command with subcommands:
+ - serve webdav: this implements a webdav server for any
+ rclone remote.
+ - serve http: command to serve a remote over HTTP
+ - config: add sub commands for full config file management
+ - create/delete/dump/edit/file/password/providers/show/update
+ - touch: to create or update the timestamp of a file
+ (Jakub Tasiemski)
+ - New Features
+ - curl install for rclone (Filip Bartodziej)
+ - --stats now shows percentage, size, rate and ETA in condensed
+ form (Ishuah Kariuki)
+ - --exclude-if-present to exclude a directory if a file is present
+ (Iakov Davydov)
+ - rmdirs: add --leave-root flag (lewpam)
+ - move: add --delete-empty-src-dirs flag to remove dirs after move
+ (Ishuah Kariuki)
+ - Add --dump flag, introduce --dump requests, responses and remove
+ --dump-auth, --dump-filters
+ - Obscure X-Auth-Token: from headers when dumping too
+ - Document and implement exit codes for different failure modes
+ (Ishuah Kariuki)
+ - Compile
+ - Bug Fixes
+ - Retry lots more different types of errors to make multipart
+ transfers more reliable
+ - Save the config before asking for a token, fixes disappearing
+ oauth config
+ - Warn the user if --include and --exclude are used together
+ (Ernest Borowski)
+ - Fix duplicate files (eg on Google drive) causing spurious copies
+ - Allow trailing and leading whitespace for passwords (Jason Rose)
+ - ncdu: fix crashes on empty directories
+ - rcat: fix goroutine leak
+ - moveto/copyto: Fix to allow copying to the same name
+ - Mount
+ - --vfs-cache mode to make writes into mounts more reliable.
+ - this requires caching files on the disk (see --cache-dir)
+ - As this is a new feature, use with care
+ - Use sdnotify to signal systemd the mount is ready
+ (Fabian Möller)
+ - Check if directory is not empty before mounting
+ (Ernest Borowski)
+ - Local
+ - Add error message for cross file system moves
+ - Fix equality check for times
+ - Dropbox
+ - Rework multipart upload
+ - buffer the chunks when uploading large files so they can be
+ retried
+ - change default chunk size to 48MB now we are buffering them
+ in memory
+ - retry every error after the first chunk is done successfully
+ - Fix error when renaming directories
+ - Swift
+ - Fix crash on bad authentication
+ - Google Drive
+ - Add service account support (Tim Cooijmans)
+ - S3
+ - Make it work properly with Digital Ocean Spaces
+ (Andrew Starr-Bochicchio)
+ - Fix crash if a bad listing is received
+ - Add support for ECS task IAM roles (David Minor)
+ - Backblaze B2
+ - Fix multipart upload retries
+ - Fix --hard-delete to make it work 100% of the time
+ - Swift
+ - Allow authentication with storage URL and auth key
+ (Giovanni Pizzi)
+ - Add new fields for swift configuration to support IBM Bluemix
+ Swift (Pierre Carlson)
+ - Add OS_TENANT_ID and OS_USER_ID to config
+ - Allow configs with user id instead of user name
+ - Check if swift segments container exists before creating
+ (John Leach)
+ - Fix memory leak in swift transfers (upstream fix)
+ - SFTP
+ - Add option to enable the use of aes128-cbc cipher (Jon Fautley)
+ - Amazon cloud drive
+ - Fix download of large files failing with "Only one auth
+ mechanism allowed"
+ - crypt
+ - Option to encrypt directory names or leave them intact
+ - Implement DirChangeNotify (Fabian Möller)
+ - onedrive
+ - Add option to choose resourceURL during setup of OneDrive
+ Business account if more than one is available for user
- v1.38 - 2017-09-30
- New backends
- Azure Blob Storage (thanks Andrei Dragomir)
@@ -8102,6 +9637,28 @@ Contributors
- Girish Ramakrishnan girish@cloudron.io
- LingMan LingMan@users.noreply.github.com
- Jacob McNamee jacobmcnamee@gmail.com
+- jersou jertux@gmail.com
+- thierry thierry@substantiel.fr
+- Simon Leinen simon.leinen@gmail.com ubuntu@s3-test.novalocal
+- Dan Dascalescu ddascalescu+github@gmail.com
+- Jason Rose jason@jro.io
+- Andrew Starr-Bochicchio a.starr.b@gmail.com
+- John Leach john@johnleach.co.uk
+- Corban Raun craun@instructure.com
+- Pierre Carlson mpcarl@us.ibm.com
+- Ernest Borowski er.borowski@gmail.com
+- Remus Bunduc remus.bunduc@gmail.com
+- Iakov Davydov iakov.davydov@unil.ch
+- Fabian Möller f.moeller@nynex.de
+- Jakub Tasiemski tasiemski@gmail.com
+- David Minor dminor@saymedia.com
+- Tim Cooijmans cooijmans.tim@gmail.com
+- Laurence liuxy6@gmail.com
+- Giovanni Pizzi gio.piz@gmail.com
+- Filip Bartodziej filipbartodziej@gmail.com
+- Jon Fautley jon@dead.li
+- lewapm 32110057+lewapm@users.noreply.github.com
+- Yassine Imounachen yassine256@gmail.com
@@ -8134,7 +9691,7 @@ Rclone has a Google+ page which announcements are posted to
Twitter
-You can also follow me on twitter for rclone announcments
+You can also follow me on twitter for rclone announcements
- [@njcw](https://twitter.com/njcw)
diff --git a/RELEASE.md b/RELEASE.md
index 3975e125c..8388da3ad 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -11,7 +11,6 @@ Making a release
* edit docs/content/changelog.md
* make doc
* git status - to check for new man pages - git add them
- * # Update version number in snapcraft.yml
* git commit -a -v -m "Version v1.XX"
* make retag
* # Set the GOPATH for a current stable go compiler
diff --git a/docs/content/changelog.md b/docs/content/changelog.md
index b0e6d9b48..551fbc4db 100644
--- a/docs/content/changelog.md
+++ b/docs/content/changelog.md
@@ -1,12 +1,88 @@
---
title: "Documentation"
description: "Rclone Changelog"
-date: "2017-09-30"
+date: "2017-12-23"
---
Changelog
---------
+ * v1.39 - 2017-12-23
+ * New backends
+ * WebDAV
+ * tested with nextcloud, owncloud, put.io and others!
+ * Pcloud
+ * cache - wraps a cache around other backends (Remus Bunduc)
+ * useful in combination with mount
+ * NB this feature is in beta so use with care
+ * New commands
+ * serve command with subcommands:
+ * serve webdav: this implements a webdav server for any rclone remote.
+ * serve http: command to serve a remote over HTTP
+ * config: add sub commands for full config file management
+ * create/delete/dump/edit/file/password/providers/show/update
+ * touch: to create or update the timestamp of a file (Jakub Tasiemski)
+ * New Features
+ * curl install for rclone (Filip Bartodziej)
+ * --stats now shows percentage, size, rate and ETA in condensed form (Ishuah Kariuki)
+ * --exclude-if-present to exclude a directory if a file is present (Iakov Davydov)
+ * rmdirs: add --leave-root flag (lewpam)
+ * move: add --delete-empty-src-dirs flag to remove dirs after move (Ishuah Kariuki)
+ * Add --dump flag, introduce --dump requests, responses and remove --dump-auth, --dump-filters
+ * Obscure X-Auth-Token: from headers when dumping too
+ * Document and implement exit codes for different failure modes (Ishuah Kariuki)
+ * Compile
+ * Bug Fixes
+ * Retry lots more different types of errors to make multipart transfers more reliable
+ * Save the config before asking for a token, fixes disappearing oauth config
+ * Warn the user if --include and --exclude are used together (Ernest Borowski)
+ * Fix duplicate files (eg on Google drive) causing spurious copies
+ * Allow trailing and leading whitespace for passwords (Jason Rose)
+ * ncdu: fix crashes on empty directories
+ * rcat: fix goroutine leak
+ * moveto/copyto: Fix to allow copying to the same name
+ * Mount
+ * --vfs-cache mode to make writes into mounts more reliable.
+ * this requires caching files on the disk (see --cache-dir)
+ * As this is a new feature, use with care
+ * Use sdnotify to signal systemd the mount is ready (Fabian Möller)
+ * Check if directory is not empty before mounting (Ernest Borowski)
+ * Local
+ * Add error message for cross file system moves
+ * Fix equality check for times
+ * Dropbox
+ * Rework multipart upload
+ * buffer the chunks when uploading large files so they can be retried
+ * change default chunk size to 48MB now we are buffering them in memory
+ * retry every error after the first chunk is done successfully
+ * Fix error when renaming directories
+ * Swift
+ * Fix crash on bad authentication
+ * Google Drive
+ * Add service account support (Tim Cooijmans)
+ * S3
+ * Make it work properly with Digital Ocean Spaces (Andrew Starr-Bochicchio)
+ * Fix crash if a bad listing is received
+ * Add support for ECS task IAM roles (David Minor)
+ * Backblaze B2
+ * Fix multipart upload retries
+ * Fix --hard-delete to make it work 100% of the time
+ * Swift
+ * Allow authentication with storage URL and auth key (Giovanni Pizzi)
+ * Add new fields for swift configuration to support IBM Bluemix Swift (Pierre Carlson)
+ * Add OS_TENANT_ID and OS_USER_ID to config
+ * Allow configs with user id instead of user name
+ * Check if swift segments container exists before creating (John Leach)
+ * Fix memory leak in swift transfers (upstream fix)
+ * SFTP
+ * Add option to enable the use of aes128-cbc cipher (Jon Fautley)
+ * Amazon cloud drive
+ * Fix download of large files failing with "Only one auth mechanism allowed"
+ * crypt
+ * Option to encrypt directory names or leave them intact
+ * Implement DirChangeNotify (Fabian Möller)
+ * onedrive
+ * Add option to choose resourceURL during setup of OneDrive Business account if more than one is available for user
* v1.38 - 2017-09-30
* New backends
* Azure Blob Storage (thanks Andrei Dragomir)
diff --git a/docs/content/commands/rclone.md b/docs/content/commands/rclone.md
index 5cce604c6..70088867a 100644
--- a/docs/content/commands/rclone.md
+++ b/docs/content/commands/rclone.md
@@ -1,12 +1,12 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone"
slug: rclone
url: /commands/rclone/
---
## rclone
-Sync files and directories to and from local and remote object stores - v1.38
+Sync files and directories to and from local and remote object stores - v1.39
### Synopsis
@@ -28,8 +28,10 @@ from various cloud storage systems and using file transfer services, such as:
* Microsoft Azure Blob Storage
* Microsoft OneDrive
* Openstack Swift / Rackspace cloud files / Memset Memstore
+ * pCloud
* QingStor
* SFTP
+ * Webdav / Owncloud / Nextcloud
* Yandex Disk
* The local filesystem
@@ -56,110 +58,126 @@ rclone [flags]
### Options
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- -h, --help help for rclone
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
- -V, --version Print the version number
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ -h, --help help for rclone
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ -V, --version Print the version number
```
### SEE ALSO
* [rclone authorize](/commands/rclone_authorize/) - Remote authorization.
+* [rclone cachestats](/commands/rclone_cachestats/) - Print cache stats for a remote
* [rclone cat](/commands/rclone_cat/) - Concatenates any files and sends them to stdout.
* [rclone check](/commands/rclone_check/) - Checks the files in the source and destination match.
* [rclone cleanup](/commands/rclone_cleanup/) - Clean up the remote if possible
@@ -168,8 +186,8 @@ rclone [flags]
* [rclone copyto](/commands/rclone_copyto/) - Copy files from source to dest, skipping already copied
* [rclone cryptcheck](/commands/rclone_cryptcheck/) - Cryptcheck checks the integrity of a crypted remote.
* [rclone cryptdecode](/commands/rclone_cryptdecode/) - Cryptdecode returns unencrypted file names.
-* [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbbox hash file for all the objects in the path.
-* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files delete/rename them.
+* [rclone dbhashsum](/commands/rclone_dbhashsum/) - Produces a Dropbox hash file for all the objects in the path.
+* [rclone dedupe](/commands/rclone_dedupe/) - Interactively find duplicate files and delete/rename them.
* [rclone delete](/commands/rclone_delete/) - Remove the contents of path.
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
* [rclone gendocs](/commands/rclone_gendocs/) - Output markdown docs for rclone to the directory supplied.
@@ -189,10 +207,12 @@ rclone [flags]
* [rclone rcat](/commands/rclone_rcat/) - Copies standard input to file on remote.
* [rclone rmdir](/commands/rclone_rmdir/) - Remove the path if empty.
* [rclone rmdirs](/commands/rclone_rmdirs/) - Remove empty directories under the path.
+* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
* [rclone sha1sum](/commands/rclone_sha1sum/) - Produces an sha1sum file for all the objects in the path.
* [rclone size](/commands/rclone_size/) - Prints the total size and number of objects in remote:path.
* [rclone sync](/commands/rclone_sync/) - Make source and dest identical, modifying destination only.
+* [rclone touch](/commands/rclone_touch/) - Create new file or change file modification time.
* [rclone tree](/commands/rclone_tree/) - List the contents of the remote in a tree like fashion.
* [rclone version](/commands/rclone_version/) - Show the version number.
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_authorize.md b/docs/content/commands/rclone_authorize.md
index 6cda64df1..768bf7041 100644
--- a/docs/content/commands/rclone_authorize.md
+++ b/docs/content/commands/rclone_authorize.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone authorize"
slug: rclone_authorize
url: /commands/rclone_authorize/
@@ -29,107 +29,122 @@ rclone authorize [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_cachestats.md b/docs/content/commands/rclone_cachestats.md
new file mode 100644
index 000000000..3d13b9a87
--- /dev/null
+++ b/docs/content/commands/rclone_cachestats.md
@@ -0,0 +1,149 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone cachestats"
+slug: rclone_cachestats
+url: /commands/rclone_cachestats/
+---
+## rclone cachestats
+
+Print cache stats for a remote
+
+### Synopsis
+
+
+
+Print cache stats for a remote in JSON format
+
+
+```
+rclone cachestats source: [flags]
+```
+
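+For example, to print the stats of a cache remote configured as `mycache:` (a hypothetical remote name, used here only for illustration):
+
+```
+rclone cachestats mycache:
+```
+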
+### Options
+
+```
+ -h, --help help for cachestats
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_cat.md b/docs/content/commands/rclone_cat.md
index b6c90a5fd..06ac6fbc6 100644
--- a/docs/content/commands/rclone_cat.md
+++ b/docs/content/commands/rclone_cat.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone cat"
slug: rclone_cat
url: /commands/rclone_cat/
@@ -50,107 +50,122 @@ rclone cat remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_check.md b/docs/content/commands/rclone_check.md
index d56d0c4fb..c1ce2d68b 100644
--- a/docs/content/commands/rclone_check.md
+++ b/docs/content/commands/rclone_check.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone check"
slug: rclone_check
url: /commands/rclone_check/
@@ -39,107 +39,122 @@ rclone check source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_cleanup.md b/docs/content/commands/rclone_cleanup.md
index d56ae415c..c179b759c 100644
--- a/docs/content/commands/rclone_cleanup.md
+++ b/docs/content/commands/rclone_cleanup.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone cleanup"
slug: rclone_cleanup
url: /commands/rclone_cleanup/
@@ -29,107 +29,122 @@ rclone cleanup remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config.md b/docs/content/commands/rclone_config.md
index c992f68be..bfe834845 100644
--- a/docs/content/commands/rclone_config.md
+++ b/docs/content/commands/rclone_config.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone config"
slug: rclone_config
url: /commands/rclone_config/
@@ -11,20 +11,13 @@ Enter an interactive configuration session.
### Synopsis
-`rclone config`
- enters an interactive configuration sessions where you can setup
-new remotes and manage existing ones. You may also set or remove a password to
-protect your configuration.
-
-Additional functions:
-
- * `rclone config edit` – same as above
- * `rclone config file` – show path of configuration file in use
- * `rclone config show` – print (decrypted) config file
+Enter an interactive configuration session where you can set up new
+remotes and manage existing ones. You may also set or remove a
+password to protect your configuration.
```
-rclone config [function] [flags]
+rclone config [flags]
```
### Options
@@ -36,107 +29,131 @@ rclone config [function] [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+* [rclone config create](/commands/rclone_config_create/) - Create a new remote with name, type and options.
+* [rclone config delete](/commands/rclone_config_delete/) - Delete an existing remote.
+* [rclone config dump](/commands/rclone_config_dump/) - Dump the config file as JSON.
+* [rclone config edit](/commands/rclone_config_edit/) - Enter an interactive configuration session.
+* [rclone config file](/commands/rclone_config_file/) - Show path of configuration file in use.
+* [rclone config password](/commands/rclone_config_password/) - Update password in an existing remote.
+* [rclone config providers](/commands/rclone_config_providers/) - List in JSON format all the providers and options.
+* [rclone config show](/commands/rclone_config_show/) - Print (decrypted) config file, or the config for a single remote.
+* [rclone config update](/commands/rclone_config_update/) - Update options in an existing remote.
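+
+These subcommands replace the old `rclone config [function]` form. A
+minimal sketch of non-interactive use (the remote name `mys3` and its
+options are illustrative, assuming an S3-type remote):
+
+```
+rclone config file                            # show path of the config file in use
+rclone config show                            # print the (decrypted) config file
+rclone config create mys3 s3 env_auth true    # create a remote without prompting
+rclone config delete mys3                     # delete the remote again
+```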
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_create.md b/docs/content/commands/rclone_config_create.md
new file mode 100644
index 000000000..33c1b6843
--- /dev/null
+++ b/docs/content/commands/rclone_config_create.md
@@ -0,0 +1,155 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config create"
+slug: rclone_config_create
+url: /commands/rclone_config_create/
+---
+## rclone config create
+
+Create a new remote with name, type and options.
+
+### Synopsis
+
+
+
+Create a new remote of <name> with <type> and options. The options
+should be passed in pairs of <key> <value>.
+
+For example, to make a swift remote named myremote using auto config
+you would do:
+
+ rclone config create myremote swift env_auth true
+
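+To pass more than one option, supply additional <key> <value> pairs
+(a sketch; the `region` key assumes an S3-type remote):
+
+    rclone config create mys3 s3 env_auth true region us-east-1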
+
+```
+rclone config create <name> <type> [<key> <value>]* [flags]
+```
+
+### Options
+
+```
+ -h, --help help for create
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_delete.md b/docs/content/commands/rclone_config_delete.md
new file mode 100644
index 000000000..18bc6cdd7
--- /dev/null
+++ b/docs/content/commands/rclone_config_delete.md
@@ -0,0 +1,147 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config delete"
+slug: rclone_config_delete
+url: /commands/rclone_config_delete/
+---
+## rclone config delete
+
+Delete an existing remote <name>.
+
+### Synopsis
+
+
+Delete an existing remote <name>.
+
+```
+rclone config delete [flags]
+```
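+
+For example, to remove a remote named myremote (a hypothetical name used purely for illustration):
+
+```
+rclone config delete myremote
+```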
+
+### Options
+
+```
+ -h, --help help for delete
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_dump.md b/docs/content/commands/rclone_config_dump.md
new file mode 100644
index 000000000..26ecb989c
--- /dev/null
+++ b/docs/content/commands/rclone_config_dump.md
@@ -0,0 +1,147 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config dump"
+slug: rclone_config_dump
+url: /commands/rclone_config_dump/
+---
+## rclone config dump
+
+Dump the config file as JSON.
+
+### Synopsis
+
+
+Dump the config file as JSON.
+
+```
+rclone config dump [flags]
+```
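+
+The dump is a single JSON object keyed by remote name, written to standard output, so it can be piped to other tools. A minimal sketch (assuming Python is available, used here only to pretty-print):
+
+```
+rclone config dump | python -m json.tool
+```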
+
+### Options
+
+```
+ -h, --help help for dump
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_edit.md b/docs/content/commands/rclone_config_edit.md
new file mode 100644
index 000000000..fa5bfc8cd
--- /dev/null
+++ b/docs/content/commands/rclone_config_edit.md
@@ -0,0 +1,150 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config edit"
+slug: rclone_config_edit
+url: /commands/rclone_config_edit/
+---
+## rclone config edit
+
+Enter an interactive configuration session.
+
+### Synopsis
+
+
+Enter an interactive configuration session where you can setup new
+remotes and manage existing ones. You may also set or remove a
+password to protect your configuration.
+
+
+```
+rclone config edit [flags]
+```
+
+### Options
+
+```
+ -h, --help help for edit
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_file.md b/docs/content/commands/rclone_config_file.md
new file mode 100644
index 000000000..0e06df4d8
--- /dev/null
+++ b/docs/content/commands/rclone_config_file.md
@@ -0,0 +1,147 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config file"
+slug: rclone_config_file
+url: /commands/rclone_config_file/
+---
+## rclone config file
+
+Show path of configuration file in use.
+
+### Synopsis
+
+
+Show path of configuration file in use.
+
+```
+rclone config file [flags]
+```
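+
+This is handy in scripts, e.g. for backing up the configuration file. Illustrative output (the exact wording and path depend on your installation):
+
+```
+$ rclone config file
+Configuration file is stored at:
+/home/ncw/.rclone.conf
+```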
+
+### Options
+
+```
+ -h, --help help for file
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_password.md b/docs/content/commands/rclone_config_password.md
new file mode 100644
index 000000000..04f7b1313
--- /dev/null
+++ b/docs/content/commands/rclone_config_password.md
@@ -0,0 +1,154 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config password"
+slug: rclone_config_password
+url: /commands/rclone_config_password/
+---
+## rclone config password
+
+Update password in an existing remote.
+
+### Synopsis
+
+
+
+Update an existing remote's password. The password
+should be passed in pairs of <key> <value>.
+
+For example, to set the password of a remote named myremote, you would do:
+
+ rclone config password myremote fieldname mypassword
+
+
+```
+rclone config password <name> [<key> <value>]+ [flags]
+```
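+
+The trailing + in the usage above suggests several key/value pairs may be supplied in one invocation, for example (fieldname and fieldname2 are hypothetical field names):
+
+```
+rclone config password myremote fieldname mypassword fieldname2 mypassword2
+```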
+
+### Options
+
+```
+ -h, --help help for password
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_providers.md b/docs/content/commands/rclone_config_providers.md
new file mode 100644
index 000000000..ea9c96fd6
--- /dev/null
+++ b/docs/content/commands/rclone_config_providers.md
@@ -0,0 +1,147 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config providers"
+slug: rclone_config_providers
+url: /commands/rclone_config_providers/
+---
+## rclone config providers
+
+List in JSON format all the providers and options.
+
+### Synopsis
+
+
+List in JSON format all the providers and options.
+
+```
+rclone config providers [flags]
+```
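+
+The output is JSON on standard output, so it can be filtered with other tools. For example, to list only the provider names with jq (assuming jq is installed and that the top level is an array of provider objects each carrying a Name field):
+
+```
+rclone config providers | jq -r '.[].Name'
+```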
+
+### Options
+
+```
+ -h, --help help for providers
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_show.md b/docs/content/commands/rclone_config_show.md
new file mode 100644
index 000000000..76f8d0db2
--- /dev/null
+++ b/docs/content/commands/rclone_config_show.md
@@ -0,0 +1,147 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config show"
+slug: rclone_config_show
+url: /commands/rclone_config_show/
+---
+## rclone config show
+
+Print (decrypted) config file, or the config for a single remote.
+
+### Synopsis
+
+
+Print (decrypted) config file, or the config for a single remote.
+
+```
+rclone config show [<remote>] [flags]
+```
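+
+For example, to print only the configuration of a single remote (here
+`myremote` is an illustrative name):
+
+```
+rclone config show myremote
+```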
+
+### Options
+
+```
+ -h, --help help for show
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_config_update.md b/docs/content/commands/rclone_config_update.md
new file mode 100644
index 000000000..1c5a90bb6
--- /dev/null
+++ b/docs/content/commands/rclone_config_update.md
@@ -0,0 +1,154 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone config update"
+slug: rclone_config_update
+url: /commands/rclone_config_update/
+---
+## rclone config update
+
+Update options in an existing remote.
+
+### Synopsis
+
+
+
+Update an existing remote's options. The options should be passed in
+pairs of <key> <value>.
+
+For example, to update the env_auth field of a remote named myremote you would do:
+
+    rclone config update myremote env_auth true
+
+
+```
+rclone config update <name> [<key> <value>]+ [flags]
+```
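+
+As the usage suggests, several key/value pairs may be given in one
+call. For example (the remote and option names are illustrative):
+
+```
+rclone config update myremote env_auth false user myuser
+```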
+
+### Options
+
+```
+ -h, --help help for update
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone config](/commands/rclone_config/) - Enter an interactive configuration session.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_copy.md b/docs/content/commands/rclone_copy.md
index 3d1e5b523..5f84793bb 100644
--- a/docs/content/commands/rclone_copy.md
+++ b/docs/content/commands/rclone_copy.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone copy"
slug: rclone_copy
url: /commands/rclone_copy/
@@ -65,107 +65,122 @@ rclone copy source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_copyto.md b/docs/content/commands/rclone_copyto.md
index 6a1f95984..ecaa58567 100644
--- a/docs/content/commands/rclone_copyto.md
+++ b/docs/content/commands/rclone_copyto.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone copyto"
slug: rclone_copyto
url: /commands/rclone_copyto/
@@ -52,107 +52,122 @@ rclone copyto source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
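The cache flags added in this release configure the new cache remote. As a rough illustration of how the flags above combine in practice, the following sketch assumes a cache remote named `cached:` has already been set up with `rclone config`; the remote name and mount point are placeholders, not part of the generated docs:

```
# Mount a cache-wrapped remote, tuning the chunk flags documented above.
# "cached:" and /mnt/media are hypothetical names for this sketch.
rclone mount cached: /mnt/media \
    --cache-chunk-size 10M \
    --cache-total-chunk-size 10G \
    --cache-workers 8 \
    --cache-info-age 24h
```

Larger --cache-workers values fetch chunks more aggressively at the cost of more requests to the source; --cache-rps can cap those requests if the source rate limits.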
diff --git a/docs/content/commands/rclone_cryptcheck.md b/docs/content/commands/rclone_cryptcheck.md
index ddd3fa378..08d82d0d4 100644
--- a/docs/content/commands/rclone_cryptcheck.md
+++ b/docs/content/commands/rclone_cryptcheck.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone cryptcheck"
slug: rclone_cryptcheck
url: /commands/rclone_cryptcheck/
@@ -49,107 +49,122 @@ rclone cryptcheck remote:path cryptedremote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+       --cache-chunk-path string                  Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+       --cache-chunk-size string                  The size of a chunk (default "5M")
+       --cache-db-path string                     Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+       --cache-db-purge                           Purge the cache DB before starting
+       --cache-dir string                         Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+       --cache-info-age string                    How long object info should be stored in cache (default "6h")
+       --cache-read-retries int                   How many times to retry a read from cache storage (default 10)
+       --cache-rps int                            Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+       --cache-total-chunk-size string            The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+       --delete-after                             When synchronizing, delete files on destination after transferring
+       --delete-before                            When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+       --drive-chunk-size int                     Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+       --dump string                              List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+       --gcs-location string                      Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+       --ignore-size                              Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
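As a usage sketch for the cryptcheck options documented above (the paths and remote names are placeholders; `secret:` is assumed to be a crypt remote wrapping the checked path):

```
# Verify the plain source against its encrypted counterpart without
# downloading and decrypting the data. Names are illustrative only.
rclone cryptcheck /home/user/photos secret:photos

# The inherited filter flags apply as usual, eg:
rclone cryptcheck --max-size 100M /home/user/photos secret:photos
```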
diff --git a/docs/content/commands/rclone_cryptdecode.md b/docs/content/commands/rclone_cryptdecode.md
index 0df24a4f4..d6e23a41a 100644
--- a/docs/content/commands/rclone_cryptdecode.md
+++ b/docs/content/commands/rclone_cryptdecode.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone cryptdecode"
slug: rclone_cryptdecode
url: /commands/rclone_cryptdecode/
@@ -33,107 +33,122 @@ rclone cryptdecode encryptedremote: encryptedfilename [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+       --cache-chunk-path string                  Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+       --cache-chunk-size string                  The size of a chunk (default "5M")
+       --cache-db-path string                     Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+       --cache-db-purge                           Purge the cache DB before starting
+       --cache-dir string                         Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+       --cache-info-age string                    How long object info should be stored in cache (default "6h")
+       --cache-read-retries int                   How many times to retry a read from cache storage (default 10)
+       --cache-rps int                            Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+       --cache-total-chunk-size string            The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+       --delete-after                             When synchronizing, delete files on destination after transferring
+       --delete-before                            When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+       --drive-chunk-size int                     Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+       --dump string                              List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+       --gcs-location string                      Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+       --ignore-size                              Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
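A minimal cryptdecode sketch, assuming `secret:` is a configured crypt remote and the encrypted name comes from a listing of the underlying remote (both names are placeholders):

```
# Print the decrypted name corresponding to an encrypted filename.
rclone cryptdecode secret: p0e52nreeaj0a5ea7s64m4j72s
```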
diff --git a/docs/content/commands/rclone_dbhashsum.md b/docs/content/commands/rclone_dbhashsum.md
index bd6937b95..d6606b5d6 100644
--- a/docs/content/commands/rclone_dbhashsum.md
+++ b/docs/content/commands/rclone_dbhashsum.md
@@ -1,12 +1,12 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone dbhashsum"
slug: rclone_dbhashsum
url: /commands/rclone_dbhashsum/
---
## rclone dbhashsum
-Produces a Dropbbox hash file for all the objects in the path.
+Produces a Dropbox hash file for all the objects in the path.
### Synopsis
@@ -31,107 +31,122 @@ rclone dbhashsum remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+       --cache-chunk-path string                  Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+       --cache-chunk-size string                  The size of a chunk (default "5M")
+       --cache-db-path string                     Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+       --cache-db-purge                           Purge the cache DB before starting
+       --cache-dir string                         Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+       --cache-info-age string                    How long object info should be stored in cache (default "6h")
+       --cache-read-retries int                   How many times to retry a read from cache storage (default 10)
+       --cache-rps int                            Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+       --cache-total-chunk-size string            The total size that the chunks can take up on the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+       --delete-after                             When synchronizing, delete files on destination after transferring
+       --delete-before                            When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+       --drive-chunk-size int                     Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+       --dump string                              List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+       --gcs-location string                      Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+       --ignore-size                              Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
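For reference, a dbhashsum invocation mirrors the synopsis above; `remote:path` is a placeholder:

```
# Print the Dropbox hash of every object under the path, one per line.
rclone dbhashsum remote:path
```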
diff --git a/docs/content/commands/rclone_dedupe.md b/docs/content/commands/rclone_dedupe.md
index 02f8b9006..742f023e8 100644
--- a/docs/content/commands/rclone_dedupe.md
+++ b/docs/content/commands/rclone_dedupe.md
@@ -1,12 +1,12 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone dedupe"
slug: rclone_dedupe
url: /commands/rclone_dedupe/
---
## rclone dedupe
-Interactively find duplicate files delete/rename them.
+Interactively find duplicate files and delete/rename them.
### Synopsis
@@ -106,107 +106,122 @@ rclone dedupe [mode] remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to store cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to store the cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
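+
+As a hand-written sketch (not auto-generated): passing a mode skips the interactive prompts, and --dry-run previews the outcome. `newest` is one of the dedupe modes; `remote:path` is a placeholder:
+
+```
+# Illustrative only: keep the newest copy of each duplicate, previewing first
+rclone dedupe --dry-run newest remote:path
+rclone dedupe newest remote:path
+```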
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_delete.md b/docs/content/commands/rclone_delete.md
index c3ea12dba..da127a738 100644
--- a/docs/content/commands/rclone_delete.md
+++ b/docs/content/commands/rclone_delete.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone delete"
slug: rclone_delete
url: /commands/rclone_delete/
@@ -43,107 +43,122 @@ rclone delete remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to store cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to store the cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
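+
+A hand-written sketch (not auto-generated): delete obeys the filter flags listed above, so age-based cleanup can be rehearsed with --dry-run first. `remote:backups` is a placeholder:
+
+```
+# Illustrative only: remove files older than 30 days (--min-age excludes younger files)
+rclone --dry-run --min-age 30d delete remote:backups
+rclone --min-age 30d delete remote:backups
+```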
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_genautocomplete.md b/docs/content/commands/rclone_genautocomplete.md
index e4d49fe03..148c1ca59 100644
--- a/docs/content/commands/rclone_genautocomplete.md
+++ b/docs/content/commands/rclone_genautocomplete.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone genautocomplete"
slug: rclone_genautocomplete
url: /commands/rclone_genautocomplete/
@@ -25,109 +25,124 @@ Run with --help to list the supported shells.
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to store cached chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to store the cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
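+
+A hand-written sketch (not auto-generated), assuming the bash subcommand's default output location of /etc/bash_completion.d/rclone (which normally requires root):
+
+```
+# Illustrative only: write the bash completion script, then load it into this shell
+sudo rclone genautocomplete bash
+. /etc/bash_completion
+```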
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
* [rclone genautocomplete bash](/commands/rclone_genautocomplete_bash/) - Output bash completion script for rclone.
* [rclone genautocomplete zsh](/commands/rclone_genautocomplete_zsh/) - Output zsh completion script for rclone.
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_genautocomplete_bash.md b/docs/content/commands/rclone_genautocomplete_bash.md
index 832fd7a00..c15d02b7c 100644
--- a/docs/content/commands/rclone_genautocomplete_bash.md
+++ b/docs/content/commands/rclone_genautocomplete_bash.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone genautocomplete bash"
slug: rclone_genautocomplete_bash
url: /commands/rclone_genautocomplete_bash/
@@ -41,107 +41,122 @@ rclone genautocomplete bash [output_file] [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before start
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/<version> (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
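The list above shows v1.39 folding the old `--dump-auth` and `--dump-filters` booleans into the new `--dump` string flag, while `--dump-headers` and `--dump-bodies` remain. A minimal sketch of the migration (the remote name and paths are hypothetical):
```
# v1.38 and earlier: one boolean flag per dump category
rclone copy remote:src /tmp/dst --dump-headers --dump-bodies

# v1.39: the comma-separated --dump flag covers the same items,
# including the categories whose boolean flags were removed
rclone copy remote:src /tmp/dst --dump headers,bodies
rclone copy remote:src /tmp/dst --dump auth,filters
```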
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_genautocomplete_zsh.md b/docs/content/commands/rclone_genautocomplete_zsh.md
index 0a5dec40a..95d531a4e 100644
--- a/docs/content/commands/rclone_genautocomplete_zsh.md
+++ b/docs/content/commands/rclone_genautocomplete_zsh.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone genautocomplete zsh"
slug: rclone_genautocomplete_zsh
url: /commands/rclone_genautocomplete_zsh/
@@ -41,107 +41,122 @@ rclone genautocomplete zsh [output_file] [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before start
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/<version> (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
* [rclone genautocomplete](/commands/rclone_genautocomplete/) - Output completion script for a given shell.
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
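The `--cache-*` flags in the lists above belong to the cache backend that is new in v1.39. A minimal sketch of overriding its defaults at mount time, assuming a remote of type `cache` named `cached:` has already been created with `rclone config` (the remote name and mount point are hypothetical):
```
# serve a cache-wrapped remote over FUSE, tuning the chunk handling
# defaults shown in the flag list above
rclone mount cached: /mnt/media \
  --cache-chunk-size 10M \
  --cache-total-chunk-size 20G \
  --cache-workers 8 \
  --cache-info-age 12h
```
Larger chunks and more workers trade memory and request volume for smoother streaming; the flag descriptions above give the defaults.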
diff --git a/docs/content/commands/rclone_gendocs.md b/docs/content/commands/rclone_gendocs.md
index a1058f748..fe8242124 100644
--- a/docs/content/commands/rclone_gendocs.md
+++ b/docs/content/commands/rclone_gendocs.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone gendocs"
slug: rclone_gendocs
url: /commands/rclone_gendocs/
@@ -29,107 +29,122 @@ rclone gendocs output_directory [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before start
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/<version> (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
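`--bwlimit` takes either a single rate or, as its `BwTimetable` type in the lists above suggests, a clock-based timetable. A sketch of both forms (paths and remote name are hypothetical):
```
# flat limit: never exceed 512 kBytes/s
rclone sync /data remote:backup --bwlimit 512k

# timetable: 8 MBytes/s during the working day, unlimited from 19:00
rclone sync /data remote:backup --bwlimit "08:00,8M 19:00,off"
```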
diff --git a/docs/content/commands/rclone_listremotes.md b/docs/content/commands/rclone_listremotes.md
index c06a2b80a..596b77a91 100644
--- a/docs/content/commands/rclone_listremotes.md
+++ b/docs/content/commands/rclone_listremotes.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone listremotes"
slug: rclone_listremotes
url: /commands/rclone_listremotes/
@@ -31,107 +31,122 @@ rclone listremotes [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from the cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
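+
+As an illustrative sketch (the remote names and values here are placeholders, not part of the generated reference), the inherited flags above can be combined with any subcommand, for example to parallelize and rate-limit a transfer:
+
+```
+rclone --transfers 8 --bwlimit 1M copy source:path dest:path
+```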
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_ls.md b/docs/content/commands/rclone_ls.md
index 0a2ff5208..c7dcd8492 100644
--- a/docs/content/commands/rclone_ls.md
+++ b/docs/content/commands/rclone_ls.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone ls"
slug: rclone_ls
url: /commands/rclone_ls/
@@ -26,107 +26,122 @@ rclone ls remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from the cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
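+
+For example (an illustrative sketch; `.nobackup` is an assumed marker filename and `remote:path` a placeholder), the filter flags above can narrow a listing to large files outside excluded directories:
+
+```
+rclone ls --min-size 10M --exclude-if-present .nobackup remote:path
+```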
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_lsd.md b/docs/content/commands/rclone_lsd.md
index 2e32d7132..f093992b4 100644
--- a/docs/content/commands/rclone_lsd.md
+++ b/docs/content/commands/rclone_lsd.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone lsd"
slug: rclone_lsd
url: /commands/rclone_lsd/
@@ -26,107 +26,122 @@ rclone lsd remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from the cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be a multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
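+
+For example (illustrative; `remote:` is a placeholder), `--max-depth` from the flags above can control how deep the directory listing goes:
+
+```
+rclone lsd --max-depth 2 remote:
+```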
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_lsjson.md b/docs/content/commands/rclone_lsjson.md
index 84b9c85bc..2aaf1a02c 100644
--- a/docs/content/commands/rclone_lsjson.md
+++ b/docs/content/commands/rclone_lsjson.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone lsjson"
slug: rclone_lsjson
url: /commands/rclone_lsjson/
@@ -54,107 +54,122 @@ rclone lsjson remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before use
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from the cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size the chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_lsl.md b/docs/content/commands/rclone_lsl.md
index 844ca06f0..c84d5cc9d 100644
--- a/docs/content/commands/rclone_lsl.md
+++ b/docs/content/commands/rclone_lsl.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone lsl"
slug: rclone_lsl
url: /commands/rclone_lsl/
@@ -26,107 +26,122 @@ rclone lsl remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string            Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                       When synchronizing, delete files on destination after transferring
+      --delete-before                      When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                        Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_md5sum.md b/docs/content/commands/rclone_md5sum.md
index 3c83a9ee3..007b22dea 100644
--- a/docs/content/commands/rclone_md5sum.md
+++ b/docs/content/commands/rclone_md5sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone md5sum"
slug: rclone_md5sum
url: /commands/rclone_md5sum/
@@ -29,107 +29,122 @@ rclone md5sum remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string            Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                       When synchronizing, delete files on destination after transferring
+      --delete-before                      When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                        Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_mkdir.md b/docs/content/commands/rclone_mkdir.md
index e882a5e3e..a66810f6a 100644
--- a/docs/content/commands/rclone_mkdir.md
+++ b/docs/content/commands/rclone_mkdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone mkdir"
slug: rclone_mkdir
url: /commands/rclone_mkdir/
@@ -26,107 +26,122 @@ rclone mkdir remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string            Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                       When synchronizing, delete files on destination after transferring
+      --delete-before                      When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int               Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                        Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_mount.md b/docs/content/commands/rclone_mount.md
index fdbd8bc8e..a5c221553 100644
--- a/docs/content/commands/rclone_mount.md
+++ b/docs/content/commands/rclone_mount.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone mount"
slug: rclone_mount
url: /commands/rclone_mount/
@@ -90,14 +90,21 @@ systems are a long way from 100% reliable. The rclone sync/copy
commands cope with this with lots of retries. However rclone mount
can't use retries in the same way without making local copies of the
uploads. This might happen in the future, but for the moment rclone
-mount won't do that, so will be less reliable than the rclone sync
-command.
+mount won't do that, so will be less reliable than the rclone sync command.
### Filters ###
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
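+
+For example, to make only mp3 files visible in the mount (the remote
+and mountpoint here are illustrative):
+
+    rclone mount remote:path /path/to/mountpoint --include "*.mp3"
+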
+### systemd ###
+
+When running rclone mount as a systemd service, it is possible
+to use Type=notify. In this case the service will enter the started state
+after the mountpoint has been successfully set up.
+Units having the rclone mount service specified as a requirement
+will see all files and folders immediately in this mode.
+
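+As a minimal sketch (the unit description, remote and mountpoint are
+placeholders to adapt), such a service might look like:
+
+    [Unit]
+    Description=rclone mount of remote:path
+    After=network-online.target
+
+    [Service]
+    Type=notify
+    ExecStart=/usr/bin/rclone mount remote:path /path/to/mountpoint
+    ExecStop=/bin/fusermount -u /path/to/mountpoint
+
+    [Install]
+    WantedBy=default.target
+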
### Directory Cache ###
Using the `--dir-cache-time` flag, you can set how long a
@@ -113,6 +120,95 @@ like this:
kill -SIGHUP $(pidof rclone)
+### File Caching ###
+
+**NB** File caching is **EXPERIMENTAL** - use with care!
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area, which is OS dependent but
+can be controlled with `--cache-dir` or by setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode, the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed, so if rclone is quit or dies with open files then these won't
+get written back to the remote. However, they will still be in the
+on-disk cache.
+
+#### --vfs-cache-mode off ####
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible:
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal ####
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but this uses minimal disk space.
+
+These operations are not possible:
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes ####
+
+In this mode files opened for read only are still read directly from
+the remote, while write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
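+
+For example, to get a mount that supports normal read/write semantics
+with on-disk write buffering (paths are the usual placeholders):
+
+    # buffer writes to disk so open-for-write files behave normally
+    rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes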
+
+#### --vfs-cache-mode full ####
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend, which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
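+
+For example, a full-cache mount that purges cached files after half an
+hour (the values here are illustrative, not recommendations):
+
+    # cache whole files on disk and expire them after 30 minutes
+    rclone mount remote:path /path/to/mountpoint \
+        --vfs-cache-mode full --vfs-cache-max-age 30m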
+
```
rclone mount remote:path /path/to/mountpoint [flags]
@@ -121,131 +217,149 @@ rclone mount remote:path /path/to/mountpoint [flags]
### Options
```
- --allow-non-empty Allow mounting over a non-empty directory.
- --allow-other Allow access to other users.
- --allow-root Allow access to root user.
- --debug-fuse Debug the FUSE internals - needs -v.
- --default-permissions Makes kernel enforce access control based on the file mode.
- --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
- --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
- --gid uint32 Override the gid field set by the filesystem. (default 502)
- -h, --help help for mount
- --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
- --no-checksum Don't compare checksums on up/download.
- --no-modtime Don't read/write the modification time (can speed things up).
- --no-seek Don't allow seeking in files.
- -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
- --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
- --read-only Mount read-only.
- --uid uint32 Override the uid field set by the filesystem. (default 502)
- --umask int Override the permission bits set by the filesystem.
- --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
+ --allow-non-empty Allow mounting over a non-empty directory.
+ --allow-other Allow access to other users.
+ --allow-root Allow access to root user.
+ --debug-fuse Debug the FUSE internals - needs -v.
+ --default-permissions Makes kernel enforce access control based on the file mode.
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --fuse-flag stringArray Flags or arguments to be passed direct to libfuse/WinFsp. Repeat if required.
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for mount
+ --max-read-ahead int The number of bytes that can be prefetched for sequential reads. (default 128k)
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ -o, --option stringArray Option for libfuse/WinFsp. Repeat if required.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+ --write-back-cache Makes kernel buffer writes before sending them to rclone. Without this, writethrough caching is used.
```
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string                   Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-db-purge                            Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+      --cache-info-age string                     How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+      --cache-total-chunk-size string             The total size that chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+      --cache-writes                              Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                              When synchronizing, delete files on destination after transferring
+      --delete-before                             When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                      Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                       Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                               Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_move.md b/docs/content/commands/rclone_move.md
index 7f6a26d5b..14c3b0d83 100644
--- a/docs/content/commands/rclone_move.md
+++ b/docs/content/commands/rclone_move.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone move"
slug: rclone_move
url: /commands/rclone_move/
@@ -26,7 +26,7 @@ move will be used, otherwise it will copy it (server side if possible)
into `dest:path` then delete the original (if no errors on copy) in
`source:path`.
-If you want to delete empty source directories after move, use the --delete-empty-source-dirs flag.
+If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
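+
+For example (`source:path` and `dest:path` are the usual placeholders):
+
+    # trial run: move files and remove now-empty source directories
+    rclone move source:path dest:path --delete-empty-src-dirs --dry-run
+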
**Important**: Since this can cause data loss, test first with the
--dry-run flag.
@@ -39,114 +39,129 @@ rclone move source:path dest:path [flags]
### Options
```
- --delete-empty-src-dirs Delete empty dirs after move
- -h, --help help for move
+ --delete-empty-src-dirs Delete empty source dirs after move
+ -h, --help help for move
```
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string                   Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-db-purge                            Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+      --cache-info-age string                     How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+      --cache-total-chunk-size string             The total size that chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+      --cache-writes                              Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                              When synchronizing, delete files on destination after transferring
+      --delete-before                             When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                      Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                       Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                               Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_moveto.md b/docs/content/commands/rclone_moveto.md
index 1bfc86c09..ba1d0ebfa 100644
--- a/docs/content/commands/rclone_moveto.md
+++ b/docs/content/commands/rclone_moveto.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone moveto"
slug: rclone_moveto
url: /commands/rclone_moveto/
@@ -55,107 +55,122 @@ rclone moveto source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string                   Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+      --cache-db-purge                            Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+      --cache-info-age string                     How long object info should be stored in the cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+      --cache-total-chunk-size string             The total size that chunks can take up on disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+      --cache-writes                              Cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                              When synchronizing, delete files on destination after transferring
+      --delete-before                             When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                      Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                       Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                               Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
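A note on the listing above: v1.39 introduces a consolidated `--dump string` flag (the per-item `--dump-auth` and `--dump-filters` flags disappear from the old listings in the hunks below, while `--dump-headers` and `--dump-bodies` remain). A minimal sketch of the new form, assuming `remote:` is any configured remote and that `headers` is an accepted dump item, since the generated help above does not enumerate the values:
```
rclone lsd remote: -vv --dump headers
```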
diff --git a/docs/content/commands/rclone_ncdu.md b/docs/content/commands/rclone_ncdu.md
index 8a9e78b09..53caf6e02 100644
--- a/docs/content/commands/rclone_ncdu.md
+++ b/docs/content/commands/rclone_ncdu.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone ncdu"
slug: rclone_ncdu
url: /commands/rclone_ncdu/
@@ -50,107 +50,122 @@ rclone ncdu remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
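The block of new `--cache-*` flags above configures the cache backend introduced in this release. A minimal sketch of tuning it from the command line, assuming a remote of type `cache` named `cachedmedia:` has already been created with `rclone config`:
```
rclone ncdu cachedmedia: \
    --cache-chunk-size 10M \
    --cache-workers 8 \
    --cache-info-age 12h
```
Per the flag descriptions, chunks land under `--cache-chunk-path` and are trimmed back to `--cache-total-chunk-size` by the cleanup run scheduled with `--cache-chunk-clean-interval`.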
diff --git a/docs/content/commands/rclone_obscure.md b/docs/content/commands/rclone_obscure.md
index 5997585cd..d9399ad46 100644
--- a/docs/content/commands/rclone_obscure.md
+++ b/docs/content/commands/rclone_obscure.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone obscure"
slug: rclone_obscure
url: /commands/rclone_obscure/
@@ -26,107 +26,122 @@ rclone obscure password [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
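Among the new filtering flags above, `--exclude-if-present` skips a directory entirely when it contains a file with the given name. A short example; the marker name `.rcloneignore` is an arbitrary choice for illustration, not a built-in convention:
```
# Any directory containing a file named .rcloneignore is skipped
rclone copy /data remote:data --exclude-if-present .rcloneignore
```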
diff --git a/docs/content/commands/rclone_purge.md b/docs/content/commands/rclone_purge.md
index 1c0c69497..4ede2ea8c 100644
--- a/docs/content/commands/rclone_purge.md
+++ b/docs/content/commands/rclone_purge.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone purge"
slug: rclone_purge
url: /commands/rclone_purge/
@@ -30,107 +30,122 @@ rclone purge remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
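The `--pcloud-upload-cutoff` flag above arrives with the new pCloud backend: files larger than the cutoff are sent as multipart uploads. A minimal sketch, assuming `pcloud:` is a configured pCloud remote:
```
# Raise the multipart threshold from its 50M default
rclone copy /backups pcloud:backups --pcloud-upload-cutoff 100M
```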
diff --git a/docs/content/commands/rclone_rcat.md b/docs/content/commands/rclone_rcat.md
index d2fdd6356..aa6bfe17c 100644
--- a/docs/content/commands/rclone_rcat.md
+++ b/docs/content/commands/rclone_rcat.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone rcat"
slug: rclone_rcat
url: /commands/rclone_rcat/
@@ -48,107 +48,122 @@ rclone rcat remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string             Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                        When synchronizing, delete files on destination after transferring
+      --delete-before                       When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                         Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_rmdir.md b/docs/content/commands/rclone_rmdir.md
index c13390bd9..9a90a9856 100644
--- a/docs/content/commands/rclone_rmdir.md
+++ b/docs/content/commands/rclone_rmdir.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone rmdir"
slug: rclone_rmdir
url: /commands/rclone_rmdir/
@@ -28,107 +28,122 @@ rclone rmdir remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string             Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                        When synchronizing, delete files on destination after transferring
+      --delete-before                       When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                         Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_rmdirs.md b/docs/content/commands/rclone_rmdirs.md
index 963d24959..79e2ce767 100644
--- a/docs/content/commands/rclone_rmdirs.md
+++ b/docs/content/commands/rclone_rmdirs.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone rmdirs"
slug: rclone_rmdirs
url: /commands/rclone_rmdirs/
@@ -15,6 +15,8 @@ This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
+If you supply the --leave-root flag, it will not remove the root directory.
+
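+For example, to remove every empty directory under remote:path while
+keeping remote:path itself (a usage sketch based on the flag above):
+
+    rclone rmdirs --leave-root remote:path
+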
This is useful for tidying up remotes that rclone has left a lot of
empty directories in.
@@ -27,113 +29,129 @@ rclone rmdirs remote:path [flags]
### Options
```
- -h, --help help for rmdirs
+ -h, --help help for rmdirs
+ --leave-root Do not remove root directory if empty
```
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string             Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                        When synchronizing, delete files on destination after transferring
+      --delete-before                       When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                         Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_serve.md b/docs/content/commands/rclone_serve.md
new file mode 100644
index 000000000..4e86fa98d
--- /dev/null
+++ b/docs/content/commands/rclone_serve.md
@@ -0,0 +1,155 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone serve"
+slug: rclone_serve
+url: /commands/rclone_serve/
+---
+## rclone serve
+
+Serve a remote over a protocol.
+
+### Synopsis
+
+
+rclone serve is used to serve a remote over a given protocol. This
+command requires the use of a subcommand to specify the protocol, eg
+
+ rclone serve http remote:
+
+Each subcommand has its own options which you can see in their help.
+
+
+```
+rclone serve [opts] [flags]
+```
+
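+For instance, to make remote:path available over WebDAV (a usage
+sketch; rclone serve webdav is listed under SEE ALSO below and
+documents its own options):
+
+    rclone serve webdav remote:path
+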
+### Options
+
+```
+ -h, --help help for serve
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string             Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+      --delete-after                        When synchronizing, delete files on destination after transferring
+      --delete-before                       When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+      --drive-chunk-size int                Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                 Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+      --ignore-size                         Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+* [rclone serve http](/commands/rclone_serve_http/) - Serve the remote over HTTP.
+* [rclone serve webdav](/commands/rclone_serve_webdav/) - Serve remote:path over webdav.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_serve_http.md b/docs/content/commands/rclone_serve_http.md
new file mode 100644
index 000000000..432918632
--- /dev/null
+++ b/docs/content/commands/rclone_serve_http.md
@@ -0,0 +1,279 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone serve http"
+slug: rclone_serve_http
+url: /commands/rclone_serve_http/
+---
+## rclone serve http
+
+Serve the remote over HTTP.
+
+### Synopsis
+
+
+rclone serve http implements a basic web server to serve the remote
+over HTTP. This can be viewed in a web browser or you can make a
+remote of type http read from it.
+
+Use --addr to specify which IP address and port the server should
+listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all
+IPs. By default it only listens on localhost.
+
+You can use the filter flags (eg --include, --exclude) to control what
+is served.
+
+The server will log errors. Use -v to see access logs.
+
+--bwlimit will be respected for file transfers. Use --stats to
+control the stats printing.
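+
+For example, to serve the path `vacation` of a remote called `remote:`
+(both names here are placeholders for your own configuration) on port
+8080 of all interfaces, you might run something like:
+
+    # "remote:vacation" is a hypothetical remote and path
+    rclone serve http remote:vacation --addr :8080
+
+The listing can then be browsed from any machine that can reach the
+server.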
+
+### Directory Cache ###
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+### File Caching ###
+
+**NB** File caching is **EXPERIMENTAL** - use with care!
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed, so if rclone is quit or dies with open files then these won't
+get written back to the remote. However, they will still be in the
+on-disk cache.
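+
+As a minimal sketch (the remote name and path are hypothetical), serving
+with the write cache enabled and a shorter object lifetime might look
+like this:
+
+    # remote:path stands in for your own remote
+    rclone serve http remote:path --vfs-cache-mode writes --vfs-cache-max-age 30m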
+
+#### --vfs-cache-mode off ####
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible:
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal ####
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible:
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes ####
+
+In this mode files opened for read only are still read directly from
+the remote; write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full ####
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+
+```
+rclone serve http remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IP address:port to bind server to. (default "localhost:8080")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for http
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_serve_webdav.md b/docs/content/commands/rclone_serve_webdav.md
new file mode 100644
index 000000000..883c1c1e7
--- /dev/null
+++ b/docs/content/commands/rclone_serve_webdav.md
@@ -0,0 +1,273 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone serve webdav"
+slug: rclone_serve_webdav
+url: /commands/rclone_serve_webdav/
+---
+## rclone serve webdav
+
+Serve remote:path over webdav.
+
+### Synopsis
+
+
+
+rclone serve webdav implements a basic webdav server to serve the
+remote over HTTP via the webdav protocol. This can be viewed with a
+webdav client or you can make a remote of type webdav to read and
+write it.
+
+NB at the moment each directory listing reads the start of each file
+which is undesirable: see https://github.com/golang/go/issues/22577
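+
+For example, to publish a (hypothetical) remote read-only to webdav
+clients on the default localhost:8081 address, one could run:
+
+    # remote:path is a placeholder for your own remote
+    rclone serve webdav remote:path --read-only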
+
+
+### Directory Cache ###
+
+Using the `--dir-cache-time` flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend. Changes made locally in the mount may appear immediately or
+invalidate the cache. However, changes done on the remote will only
+be picked up once the cache expires.
+
+Alternatively, you can send a `SIGHUP` signal to rclone for
+it to flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+
+ kill -SIGHUP $(pidof rclone)
+
+### File Caching ###
+
+**NB** File caching is **EXPERIMENTAL** - use with care!
+
+These flags control the VFS file caching options. The VFS layer is
+used by rclone mount to make a cloud storage system work more like a
+normal file system.
+
+You'll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file. See below for more details.
+
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+
+ --vfs-cache-dir string Directory rclone will use for caching.
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+
+If run with `-vv` rclone will print the location of the file cache. The
+files are stored in the user cache file area which is OS dependent but
+can be controlled with `--cache-dir` or setting the appropriate
+environment variable.
+
+The cache has 4 different modes selected by `--vfs-cache-mode`.
+The higher the cache mode the more compatible rclone becomes at the
+cost of using disk space.
+
+Note that files are written back to the remote only when they are
+closed, so if rclone is quit or dies with open files then these won't
+get written back to the remote. However, they will still be in the
+on-disk cache.
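+
+As an illustrative sketch (the remote name, path and cache directory are
+hypothetical), the cache mode can be combined with a custom on-disk
+location:
+
+    # remote:path and /tmp/rclone-cache stand in for your own values
+    rclone serve webdav remote:path --vfs-cache-mode writes --cache-dir /tmp/rclone-cache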
+
+#### --vfs-cache-mode off ####
+
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+
+This will mean some operations are not possible:
+
+ * Files can't be opened for both read AND write
+ * Files opened for write can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files open for read with O_TRUNC will be opened write only
+ * Files open for write only will behave as if O_TRUNC was supplied
+ * Open modes O_APPEND, O_TRUNC are ignored
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode minimal ####
+
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk. This means that files opened for
+write will be a lot more compatible, but uses minimal disk space.
+
+These operations are not possible:
+
+ * Files opened for write only can't be seeked
+ * Existing files opened for write must have O_TRUNC set
+ * Files opened for write only will ignore O_APPEND, O_TRUNC
+ * If an upload fails it can't be retried
+
+#### --vfs-cache-mode writes ####
+
+In this mode files opened for read only are still read directly from
+the remote; write only and read/write files are buffered to disk
+first.
+
+This mode should support all normal file system operations.
+
+If an upload fails it will be retried up to --low-level-retries times.
+
+#### --vfs-cache-mode full ####
+
+In this mode all reads and writes are buffered to and from disk. When
+a file is opened for read it will be downloaded in its entirety first.
+
+This may be appropriate for your needs, or you may prefer to look at
+the cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+
+In this mode, unlike the others, when a file is written to the disk,
+it will be kept on the disk after it is written to the remote. It
+will be purged on a schedule according to `--vfs-cache-max-age`.
+
+This mode should support all normal file system operations.
+
+If an upload or download fails it will be retried up to
+--low-level-retries times.
+
+
+```
+rclone serve webdav remote:path [flags]
+```
+
+### Options
+
+```
+ --addr string IP address:port to bind server to. (default "localhost:8081")
+ --dir-cache-time duration Time to cache directory entries for. (default 5m0s)
+ --gid uint32 Override the gid field set by the filesystem. (default 502)
+ -h, --help help for webdav
+ --no-checksum Don't compare checksums on up/download.
+ --no-modtime Don't read/write the modification time (can speed things up).
+ --no-seek Don't allow seeking in files.
+ --poll-interval duration Time to wait between polling for changes. Must be smaller than dir-cache-time. Only on supported remotes. Set to 0 to disable. (default 1m0s)
+ --read-only Mount read-only.
+ --uid uint32 Override the uid field set by the filesystem. (default 502)
+ --umask int Override the permission bits set by the filesystem. (default 2)
+ --vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
+ --vfs-cache-mode string Cache mode off|minimal|writes|full (default "off")
+ --vfs-cache-poll-interval duration Interval to poll the cache for stale objects. (default 1m0s)
+```
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone serve](/commands/rclone_serve/) - Serve a remote over a protocol.
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_sha1sum.md b/docs/content/commands/rclone_sha1sum.md
index aae39e0ee..2d22a3383 100644
--- a/docs/content/commands/rclone_sha1sum.md
+++ b/docs/content/commands/rclone_sha1sum.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone sha1sum"
slug: rclone_sha1sum
url: /commands/rclone_sha1sum/
@@ -29,107 +29,122 @@ rclone sha1sum remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from:
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_size.md b/docs/content/commands/rclone_size.md
index abc249fe8..15d442052 100644
--- a/docs/content/commands/rclone_size.md
+++ b/docs/content/commands/rclone_size.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone size"
slug: rclone_size
url: /commands/rclone_size/
@@ -26,107 +26,122 @@ rclone size remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_sync.md b/docs/content/commands/rclone_sync.md
index 9dadd192d..f83e0f832 100644
--- a/docs/content/commands/rclone_sync.md
+++ b/docs/content/commands/rclone_sync.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone sync"
slug: rclone_sync
url: /commands/rclone_sync/
@@ -45,107 +45,122 @@ rclone sync source:path dest:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_touch.md b/docs/content/commands/rclone_touch.md
new file mode 100644
index 000000000..b9675f8de
--- /dev/null
+++ b/docs/content/commands/rclone_touch.md
@@ -0,0 +1,159 @@
+---
+date: 2017-12-23T13:05:26Z
+title: "rclone touch"
+slug: rclone_touch
+url: /commands/rclone_touch/
+---
+## rclone touch
+
+Create new file or change file modification time.
+
+### Synopsis
+
+
+Create new file or change file modification time.
+
+```
+rclone touch remote:path [flags]
+```
+
+### Options
+
+```
+ -h, --help help for touch
+ -C, --no-create Do not create the file if it does not exist.
+ -t, --timestamp string Change the modification times to the specified time instead of the current time of day. The argument is of the form 'YYMMDD' (ex. 171030) or 'YYYY-MM-DDTHH:MM:SS' (ex. 2006-01-02T15:04:05)
+```
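+
+As a minimal usage sketch (the remote name and path below are placeholders):
+
+```
+# Create the file if it doesn't exist, or update its modification time to now
+rclone touch remote:path/file.txt
+
+# Set an explicit modification time using the -t format described above
+rclone touch -t 2006-01-02T15:04:05 remote:path/file.txt
+```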
+
+### Options inherited from parent commands
+
+```
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+```
+
+### SEE ALSO
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
+
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_tree.md b/docs/content/commands/rclone_tree.md
index ba7722afb..e08cb05c3 100644
--- a/docs/content/commands/rclone_tree.md
+++ b/docs/content/commands/rclone_tree.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone tree"
slug: rclone_tree
url: /commands/rclone_tree/
@@ -69,107 +69,122 @@ rclone tree remote:path [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+ --cache-chunk-path string Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before starting
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transferring
+ --delete-before When synchronizing, delete files on destination before transferring
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must be a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+ --dump string List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+ --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping; use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g. 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/content/commands/rclone_version.md b/docs/content/commands/rclone_version.md
index 76c22a3eb..9350c0349 100644
--- a/docs/content/commands/rclone_version.md
+++ b/docs/content/commands/rclone_version.md
@@ -1,5 +1,5 @@
---
-date: 2017-09-30T14:21:35+01:00
+date: 2017-12-23T13:05:26Z
title: "rclone version"
slug: rclone_version
url: /commands/rclone_version/
@@ -26,107 +26,122 @@ rclone version [flags]
### Options inherited from parent commands
```
- --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
- --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
- --ask-password Allow prompt for password for encrypted configuration. (default true)
- --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
- --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
- --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
- --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
- --b2-test-mode string A flag string for X-Bz-Test-Mode header.
- --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
- --b2-versions Include old versions in directory listings.
- --backup-dir string Make backups into hierarchy based in DIR.
- --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
- --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
- --buffer-size int Buffer size when copying files. (default 16M)
- --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
- --checkers int Number of checkers to run in parallel. (default 8)
- -c, --checksum Skip based on checksum & size, not mod-time & size
- --config string Config file. (default "/home/ncw/.rclone.conf")
- --contimeout duration Connect timeout (default 1m0s)
- -L, --copy-links Follow symlinks and copy the pointed to item.
- --cpuprofile string Write cpu profile to file
- --crypt-show-mapping For all files listed show how the names encrypt.
- --delete-after When synchronizing, delete files on destination after transfering
- --delete-before When synchronizing, delete files on destination before transfering
- --delete-during When synchronizing, delete files during transfer (default)
- --delete-excluded Delete files on dest excluded from sync
- --disable string Disable a comma separated list of features. Use help to see a list.
- --drive-auth-owner-only Only consider files owned by the authenticated user.
- --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
- --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
- --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
- --drive-shared-with-me Only show files that are shared with me
- --drive-skip-gdocs Skip google documents in all listings.
- --drive-trashed-only Only show files that are in the trash
- --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
- --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
- --dropbox-chunk-size int Upload chunk size. Max 150M. (default 128M)
- -n, --dry-run Do a trial run with no permanent changes
- --dump-auth Dump HTTP headers with auth info
- --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
- --dump-filters Dump the filters to the output
- --dump-headers Dump HTTP headers - may contain sensitive info
- --exclude stringArray Exclude files matching pattern
- --exclude-from stringArray Read exclude patterns from file
- --fast-list Use recursive list if available. Uses more memory but fewer transactions.
- --files-from stringArray Read list of source-file names from file
- -f, --filter stringArray Add a file-filtering rule
- --filter-from stringArray Read filtering patterns from a file
- --gcs-location string Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-noetheast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
- --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
- --ignore-checksum Skip post copy check of checksums.
- --ignore-existing Skip all files that exist on destination
- --ignore-size Ignore size when skipping use mod-time or checksum.
- -I, --ignore-times Don't skip files that match size and time - transfer all files
- --immutable Do not modify files. Fail if existing files have been modified.
- --include stringArray Include files matching pattern
- --include-from stringArray Read include patterns from file
- --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
- --log-file string Log everything to this file
- --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
- --low-level-retries int Number of low level retries to do. (default 10)
- --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
- --max-depth int If set limits the recursion depth to this. (default -1)
- --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
- --memprofile string Write memory profile to file
- --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
- --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
- --modify-window duration Max time diff to be considered the same (default 1ns)
- --no-check-certificate Do not verify the server SSL certificate. Insecure.
- --no-gzip-encoding Don't set Accept-Encoding: gzip.
- --no-traverse Don't traverse destination file system on copy.
- --no-update-modtime Don't update destination mod-time if files identical.
- --old-sync-method Deprecated - use --fast-list instead
- -x, --one-file-system Don't cross filesystem boundaries.
- --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
- --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
- -q, --quiet Print as little stuff as possible
- --retries int Retry operations this many times if they fail (default 3)
- --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
- --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
- --size-only Skip based on size only, not mod-time or checksum
- --skip-links Don't warn about skipped symlinks.
- --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
- --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
- --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
- --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
- --suffix string Suffix for use with --backup-dir.
- --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
- --syslog Use Syslog for logging
- --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
- --timeout duration IO idle timeout (default 5m0s)
- --tpslimit float Limit HTTP transactions per second to this.
- --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
- --track-renames When synchronizing, track file renames and do a server side move if possible
- --transfers int Number of file transfers to run in parallel. (default 4)
- -u, --update Skip files that are newer on the destination.
- --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.38")
- -v, --verbose count[=-1] Print lots more stuff (repeat for more)
+ --acd-templink-threshold int Files >= this size will be downloaded via their tempLink. (default 9G)
+ --acd-upload-wait-per-gb duration Additional time per GB to wait after a failed complete upload to see if it appears. (default 3m0s)
+ --ask-password Allow prompt for password for encrypted configuration. (default true)
+ --auto-confirm If enabled, do not request console confirmation.
+ --azureblob-chunk-size int Upload chunk size. Must fit in memory. (default 4M)
+ --azureblob-upload-cutoff int Cutoff for switching to chunked upload (default 256M)
+ --b2-chunk-size int Upload chunk size. Must fit in memory. (default 96M)
+ --b2-hard-delete Permanently delete files on remote removal, otherwise hide files.
+ --b2-test-mode string A flag string for X-Bz-Test-Mode header.
+ --b2-upload-cutoff int Cutoff for switching to chunked upload (default 190.735M)
+ --b2-versions Include old versions in directory listings.
+ --backup-dir string Make backups into hierarchy based in DIR.
+ --bind string Local address to bind to for outgoing connections, IPv4, IPv6 or name.
+ --box-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ --buffer-size int Buffer size when copying files. (default 16M)
+ --bwlimit BwTimetable Bandwidth limit in kBytes/s, or use suffix b|k|M|G or a full timetable.
+ --cache-chunk-clean-interval string Interval at which chunk cleanup runs (default "1m")
+ --cache-chunk-no-memory Disable the in-memory cache for storing chunks during streaming
+      --cache-chunk-path string            Directory to cache chunk files (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-chunk-size string The size of a chunk (default "5M")
+ --cache-db-path string Directory to cache DB (default "/home/ncw/.cache/rclone/cache-backend")
+ --cache-db-purge Purge the cache DB before
+ --cache-dir string Directory rclone will use for caching. (default "/home/ncw/.cache/rclone")
+ --cache-info-age string How much time should object info be stored in cache (default "6h")
+ --cache-read-retries int How many times to retry a read from a cache storage (default 10)
+ --cache-rps int Limits the number of requests per second to the source FS. -1 disables the rate limiter (default -1)
+ --cache-total-chunk-size string The total size which the chunks can take up from the disk (default "10G")
+ --cache-workers int How many workers should run in parallel to download chunks (default 4)
+ --cache-writes Will cache file data on writes through the FS
+ --checkers int Number of checkers to run in parallel. (default 8)
+ -c, --checksum Skip based on checksum & size, not mod-time & size
+ --config string Config file. (default "/home/ncw/.rclone.conf")
+ --contimeout duration Connect timeout (default 1m0s)
+ -L, --copy-links Follow symlinks and copy the pointed to item.
+ --cpuprofile string Write cpu profile to file
+ --crypt-show-mapping For all files listed show how the names encrypt.
+ --delete-after When synchronizing, delete files on destination after transfering
+ --delete-before When synchronizing, delete files on destination before transfering
+ --delete-during When synchronizing, delete files during transfer (default)
+ --delete-excluded Delete files on dest excluded from sync
+ --disable string Disable a comma separated list of features. Use help to see a list.
+ --drive-auth-owner-only Only consider files owned by the authenticated user.
+ --drive-chunk-size int Upload chunk size. Must a power of 2 >= 256k. (default 8M)
+ --drive-formats string Comma separated list of preferred formats for downloading Google docs. (default "docx,xlsx,pptx,svg")
+ --drive-list-chunk int Size of listing chunk 100-1000. 0 to disable. (default 1000)
+ --drive-shared-with-me Only show files that are shared with me
+ --drive-skip-gdocs Skip google documents in all listings.
+ --drive-trashed-only Only show files that are in the trash
+ --drive-upload-cutoff int Cutoff for switching to chunked upload (default 8M)
+ --drive-use-trash Send files to the trash instead of deleting permanently. (default true)
+ --dropbox-chunk-size int Upload chunk size. Max 150M. (default 48M)
+ -n, --dry-run Do a trial run with no permanent changes
+      --dump string                        List of items to dump from: headers,bodies,requests,responses,auth,filters
+ --dump-bodies Dump HTTP headers and bodies - may contain sensitive info
+ --dump-headers Dump HTTP headers - may contain sensitive info
+ --exclude stringArray Exclude files matching pattern
+ --exclude-from stringArray Read exclude patterns from file
+ --exclude-if-present string Exclude directories if filename is present
+ --fast-list Use recursive list if available. Uses more memory but fewer transactions.
+ --files-from stringArray Read list of source-file names from file
+ -f, --filter stringArray Add a file-filtering rule
+ --filter-from stringArray Read filtering patterns from a file
+      --gcs-location string                Default location for buckets (us|eu|asia|us-central1|us-east1|us-east4|us-west1|asia-east1|asia-northeast1|asia-southeast1|australia-southeast1|europe-west1|europe-west2).
+ --gcs-storage-class string Default storage class for buckets (MULTI_REGIONAL|REGIONAL|STANDARD|NEARLINE|COLDLINE|DURABLE_REDUCED_AVAILABILITY).
+ --ignore-checksum Skip post copy check of checksums.
+ --ignore-existing Skip all files that exist on destination
+ --ignore-size Ignore size when skipping use mod-time or checksum.
+ -I, --ignore-times Don't skip files that match size and time - transfer all files
+ --immutable Do not modify files. Fail if existing files have been modified.
+ --include stringArray Include files matching pattern
+ --include-from stringArray Read include patterns from file
+ --local-no-unicode-normalization Don't apply unicode normalization to paths and filenames
+ --log-file string Log everything to this file
+ --log-level string Log level DEBUG|INFO|NOTICE|ERROR (default "NOTICE")
+ --low-level-retries int Number of low level retries to do. (default 10)
+ --max-age string Don't transfer any file older than this in s or suffix ms|s|m|h|d|w|M|y
+ --max-depth int If set limits the recursion depth to this. (default -1)
+ --max-size int Don't transfer any file larger than this in k or suffix b|k|M|G (default off)
+ --memprofile string Write memory profile to file
+ --min-age string Don't transfer any file younger than this in s or suffix ms|s|m|h|d|w|M|y
+ --min-size int Don't transfer any file smaller than this in k or suffix b|k|M|G (default off)
+ --modify-window duration Max time diff to be considered the same (default 1ns)
+ --no-check-certificate Do not verify the server SSL certificate. Insecure.
+ --no-gzip-encoding Don't set Accept-Encoding: gzip.
+ --no-traverse Don't traverse destination file system on copy.
+ --no-update-modtime Don't update destination mod-time if files identical.
+ --old-sync-method Deprecated - use --fast-list instead
+ -x, --one-file-system Don't cross filesystem boundaries.
+ --onedrive-chunk-size int Above this size files will be chunked - must be multiple of 320k. (default 10M)
+ --onedrive-upload-cutoff int Cutoff for switching to chunked upload - must be <= 100MB (default 10M)
+ --pcloud-upload-cutoff int Cutoff for switching to multipart upload (default 50M)
+ -q, --quiet Print as little stuff as possible
+ --retries int Retry operations this many times if they fail (default 3)
+ --s3-acl string Canned ACL used when creating buckets and/or storing objects in S3
+ --s3-storage-class string Storage class to use when uploading S3 objects (STANDARD|REDUCED_REDUNDANCY|STANDARD_IA)
+ --size-only Skip based on size only, not mod-time or checksum
+ --skip-links Don't warn about skipped symlinks.
+ --stats duration Interval between printing stats, e.g 500ms, 60s, 5m. (0 to disable) (default 1m0s)
+ --stats-log-level string Log level to show --stats output DEBUG|INFO|NOTICE|ERROR (default "INFO")
+ --stats-unit string Show data rate in stats as either 'bits' or 'bytes'/s (default "bytes")
+ --streaming-upload-cutoff int Cutoff for switching to chunked upload if file size is unknown. Upload starts after reaching cutoff or when file ends. (default 100k)
+ --suffix string Suffix for use with --backup-dir.
+ --swift-chunk-size int Above this size files will be chunked into a _segments container. (default 5G)
+ --syslog Use Syslog for logging
+ --syslog-facility string Facility for syslog, eg KERN,USER,... (default "DAEMON")
+ --timeout duration IO idle timeout (default 5m0s)
+ --tpslimit float Limit HTTP transactions per second to this.
+ --tpslimit-burst int Max burst of transactions for --tpslimit. (default 1)
+ --track-renames When synchronizing, track file renames and do a server side move if possible
+ --transfers int Number of file transfers to run in parallel. (default 4)
+ -u, --update Skip files that are newer on the destination.
+ --user-agent string Set the user-agent to a specified string. The default is rclone/ version (default "rclone/v1.39")
+ -v, --verbose count[=-1] Print lots more stuff (repeat for more)
```
### SEE ALSO
-* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.38
+* [rclone](/commands/rclone/) - Sync files and directories to and from local and remote object stores - v1.39
-###### Auto generated by spf13/cobra on 30-Sep-2017
+###### Auto generated by spf13/cobra on 23-Dec-2017
diff --git a/docs/layouts/shortcodes/version.html b/docs/layouts/shortcodes/version.html
index f3e194118..fe40bd64f 100644
--- a/docs/layouts/shortcodes/version.html
+++ b/docs/layouts/shortcodes/version.html
@@ -1 +1 @@
-v1.38
\ No newline at end of file
+v1.39
\ No newline at end of file
diff --git a/fs/version.go b/fs/version.go
index e274b6a18..f845fe091 100644
--- a/fs/version.go
+++ b/fs/version.go
@@ -1,4 +1,4 @@
package fs
// Version of rclone
-var Version = "v1.38-DEV"
+var Version = "v1.39"
diff --git a/rclone.1 b/rclone.1
index d04013702..5472e4a4c 100644
--- a/rclone.1
+++ b/rclone.1
@@ -1,7 +1,7 @@
.\"t
.\" Automatically generated by Pandoc 1.17.2
.\"
-.TH "rclone" "1" "Sep 30, 2017" "User Manual" ""
+.TH "rclone" "1" "Dec 23, 2017" "User Manual" ""
.hy
.SH Rclone
.PP
@@ -20,6 +20,8 @@ Box
.IP \[bu] 2
Ceph
.IP \[bu] 2
+DigitalOcean Spaces
+.IP \[bu] 2
Dreamhost
.IP \[bu] 2
Dropbox
@@ -42,12 +44,20 @@ Microsoft OneDrive
.IP \[bu] 2
Minio
.IP \[bu] 2
+Nextcloud
+.IP \[bu] 2
OVH
.IP \[bu] 2
Openstack Swift
.IP \[bu] 2
Oracle Cloud Storage
.IP \[bu] 2
+Owncloud
+.IP \[bu] 2
+pCloud
+.IP \[bu] 2
+put.io
+.IP \[bu] 2
QingStor
.IP \[bu] 2
Rackspace Cloud Files
@@ -56,6 +66,8 @@ SFTP
.IP \[bu] 2
Wasabi
.IP \[bu] 2
+WebDAV
+.IP \[bu] 2
Yandex Disk
.IP \[bu] 2
The local filesystem
@@ -81,6 +93,8 @@ Can sync to and from network, eg two different cloud accounts
.IP \[bu] 2
Optional encryption (Crypt (https://rclone.org/crypt/))
.IP \[bu] 2
+Optional cache (Cache (https://rclone.org/cache/))
+.IP \[bu] 2
Optional FUSE mount (rclone
mount (https://rclone.org/commands/rclone_mount/))
.PP
@@ -112,6 +126,26 @@ See below for some expanded Linux / macOS instructions.
.PP
See the Usage section (https://rclone.org/docs/) of the docs for how to
use rclone, or run \f[C]rclone\ \-h\f[].
+.SS Script installation
+.PP
+To install rclone on Linux/macOS/BSD systems, run:
+.IP
+.nf
+\f[C]
+curl\ https://rclone.org/install.sh\ |\ sudo\ bash
+\f[]
+.fi
+.PP
+For beta installation, run:
+.IP
+.nf
+\f[C]
+curl\ https://rclone.org/install.sh\ |\ sudo\ bash\ \-s\ beta
+\f[]
+.fi
+.PP
+Note that this script checks the version of rclone installed first and
+won\[aq]t re\-download if not needed.
.SS Linux installation from precompiled binary
.PP
Fetch and unpack
@@ -265,8 +299,12 @@ Backblaze B2 (https://rclone.org/b2/)
.IP \[bu] 2
Box (https://rclone.org/box/)
.IP \[bu] 2
+Cache (https://rclone.org/cache/)
+.IP \[bu] 2
Crypt (https://rclone.org/crypt/) \- to encrypt other remotes
.IP \[bu] 2
+DigitalOcean Spaces (/s3/#digitalocean-spaces)
+.IP \[bu] 2
Dropbox (https://rclone.org/dropbox/)
.IP \[bu] 2
FTP (https://rclone.org/ftp/)
@@ -286,10 +324,14 @@ Microsoft OneDrive (https://rclone.org/onedrive/)
Openstack Swift / Rackspace Cloudfiles / Memset
Memstore (https://rclone.org/swift/)
.IP \[bu] 2
+Pcloud (https://rclone.org/pcloud/)
+.IP \[bu] 2
QingStor (https://rclone.org/qingstor/)
.IP \[bu] 2
SFTP (https://rclone.org/sftp/)
.IP \[bu] 2
+WebDAV (https://rclone.org/webdav/)
+.IP \[bu] 2
Yandex Disk (https://rclone.org/yandex/)
.IP \[bu] 2
The local filesystem (https://rclone.org/local/)
@@ -327,22 +369,13 @@ rclone\ sync\ /local/path\ remote:path\ #\ syncs\ /local/path\ to\ the\ remote
Enter an interactive configuration session.
.SS Synopsis
.PP
-\f[C]rclone\ config\f[] enters an interactive configuration sessions
-where you can setup new remotes and manage existing ones.
+Enter an interactive configuration session where you can set up new
+remotes and manage existing ones.
You may also set or remove a password to protect your configuration.
-.PP
-Additional functions:
-.IP \[bu] 2
-\f[C]rclone\ config\ edit\f[] \[en] same as above
-.IP \[bu] 2
-\f[C]rclone\ config\ file\f[] \[en] show path of configuration file in
-use
-.IP \[bu] 2
-\f[C]rclone\ config\ show\f[] \[en] print (decrypted) config file
.IP
.nf
\f[C]
-rclone\ config\ [function]\ [flags]
+rclone\ config\ [flags]
\f[]
.fi
.SS Options
@@ -482,6 +515,9 @@ If possible a server side move will be used, otherwise it will copy it
(server side if possible) into \f[C]dest:path\f[] then delete the
original (if no errors on copy) in \f[C]source:path\f[].
.PP
+If you want to delete empty source directories after move, use the
+\-\-delete\-empty\-src\-dirs flag.
+.PP
\f[B]Important\f[]: Since this can cause data loss, test first with the
\-\-dry\-run flag.
.IP
@@ -494,7 +530,8 @@ rclone\ move\ source:path\ dest:path\ [flags]
.IP
.nf
\f[C]
-\ \ \-h,\ \-\-help\ \ \ help\ for\ move
+\ \ \ \ \ \ \-\-delete\-empty\-src\-dirs\ \ \ Delete\ empty\ source\ dirs\ after\ move
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ move
\f[]
.fi
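+.PP
+For example, to move everything and remove the emptied source
+directories in one pass (\f[C]source:path\f[] and \f[C]dest:path\f[]
+are placeholders), you could run:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ substitute\ your\ own\ paths
+rclone\ move\ \-\-delete\-empty\-src\-dirs\ source:path\ dest:path
+\f[]
+.fi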
.SS rclone delete
@@ -792,7 +829,7 @@ rclone\ cleanup\ remote:path\ [flags]
.fi
.SS rclone dedupe
.PP
-Interactively find duplicate files delete/rename them.
+Interactively find duplicate files and delete/rename them.
.SS Synopsis
.PP
By default \f[C]dedupe\f[] interactively finds duplicate files and
@@ -942,6 +979,25 @@ rclone\ authorize\ [flags]
\ \ \-h,\ \-\-help\ \ \ help\ for\ authorize
\f[]
.fi
+.SS rclone cachestats
+.PP
+Print cache stats for a remote
+.SS Synopsis
+.PP
+Print cache stats for a remote in JSON format
+.IP
+.nf
+\f[C]
+rclone\ cachestats\ source:\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ cachestats
+\f[]
+.fi
.SS rclone cat
.PP
Concatenates any files and sends them to stdout.
@@ -996,6 +1052,208 @@ rclone\ cat\ remote:path\ [flags]
\ \ \ \ \ \ \-\-tail\ int\ \ \ \ \ Only\ print\ the\ last\ N\ characters.
\f[]
.fi
+.SS rclone config create
+.PP
+Create a new remote with name, type and options.
+.SS Synopsis
+.PP
+Create a new remote of <name> with <type> and options.
+The options should be passed in pairs of <key> <value>.
+.PP
+For example to make a swift remote of name myremote using auto config
+you would do:
+.IP
+.nf
+\f[C]
+rclone\ config\ create\ myremote\ swift\ env_auth\ true
+\f[]
+.fi
+.IP
+.nf
+\f[C]
+rclone\ config\ create\ <name>\ <type>\ [<key>\ <value>]*\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ create
+\f[]
+.fi
+.SS rclone config delete
+.PP
+Delete an existing remote <name>.
+.SS Synopsis
+.PP
+Delete an existing remote <name>.
+.IP
+.nf
+\f[C]
+rclone\ config\ delete\ <name>\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ delete
+\f[]
+.fi
+.SS rclone config dump
+.PP
+Dump the config file as JSON.
+.SS Synopsis
+.PP
+Dump the config file as JSON.
+.IP
+.nf
+\f[C]
+rclone\ config\ dump\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ dump
+\f[]
+.fi
+.SS rclone config edit
+.PP
+Enter an interactive configuration session.
+.SS Synopsis
+.PP
+Enter an interactive configuration session where you can set up new
+remotes and manage existing ones.
+You may also set or remove a password to protect your configuration.
+.IP
+.nf
+\f[C]
+rclone\ config\ edit\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ edit
+\f[]
+.fi
+.SS rclone config file
+.PP
+Show path of configuration file in use.
+.SS Synopsis
+.PP
+Show path of configuration file in use.
+.IP
+.nf
+\f[C]
+rclone\ config\ file\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ file
+\f[]
+.fi
+.SS rclone config password
+.PP
+Update password in an existing remote.
+.SS Synopsis
+.PP
+Update an existing remote\[aq]s password.
+The password should be passed in pairs of <key> <password>.
+.PP
+For example to set password of a remote of name myremote you would do:
+.IP
+.nf
+\f[C]
+rclone\ config\ password\ myremote\ fieldname\ mypassword
+\f[]
+.fi
+.IP
+.nf
+\f[C]
+rclone\ config\ password\ <name>\ [<key>\ <password>]+\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ password
+\f[]
+.fi
+.SS rclone config providers
+.PP
+List in JSON format all the providers and options.
+.SS Synopsis
+.PP
+List in JSON format all the providers and options.
+.IP
+.nf
+\f[C]
+rclone\ config\ providers\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ providers
+\f[]
+.fi
+.SS rclone config show
+.PP
+Print (decrypted) config file, or the config for a single remote.
+.SS Synopsis
+.PP
+Print (decrypted) config file, or the config for a single remote.
+.IP
+.nf
+\f[C]
+rclone\ config\ show\ [<remote>]\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ show
+\f[]
+.fi
+.SS rclone config update
+.PP
+Update options in an existing remote.
+.SS Synopsis
+.PP
+Update an existing remote\[aq]s options.
+The options should be passed in pairs of <key> <value>.
+.PP
+For example to update the env_auth field of a remote of name myremote
+you would do:
+.IP
+.nf
+\f[C]
+rclone\ config\ update\ myremote\ swift\ env_auth\ true
+\f[]
+.fi
+.IP
+.nf
+\f[C]
+rclone\ config\ update\ <name>\ [<key>\ <value>]+\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ update
+\f[]
+.fi
.SS rclone copyto
.PP
Copy files from source to dest, skipping already copied
@@ -1126,7 +1384,7 @@ rclone\ cryptdecode\ encryptedremote:\ encryptedfilename\ [flags]
.fi
.SS rclone dbhashsum
.PP
-Produces a Dropbbox hash file for all the objects in the path.
+Produces a Dropbox hash file for all the objects in the path.
.SS Synopsis
.PP
Produces a Dropbox hash file for all the objects in the path.
@@ -1423,6 +1681,14 @@ won\[aq]t do that, so will be less reliable than the rclone command.
.PP
Note that all the rclone filters can be used to select a subset of the
files to be visible in the mount.
+.SS systemd
+.PP
+When running rclone mount as a systemd service, it is possible to use
+Type=notify.
+In this case the service will enter the started state after the
+mountpoint has been successfully set up.
+Units having the rclone mount service specified as a requirement will
+see all files and folders immediately in this mode.
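+.PP
+As an illustrative sketch only (the binary path, remote and mountpoint
+are placeholders, not something rclone prescribes), such a unit might
+contain:
+.IP
+.nf
+\f[C]
+#\ illustrative\ unit\ fragment\ \-\ adjust\ paths\ and\ remote\ to\ taste
+[Service]
+Type=notify
+ExecStart=/usr/bin/rclone\ mount\ remote:path\ /mnt/remote
+ExecStop=/bin/fusermount\ \-u\ /mnt/remote
+\f[]
+.fi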
.SS Directory Cache
.PP
Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a
@@ -1443,6 +1709,109 @@ like this:
kill\ \-SIGHUP\ $(pidof\ rclone)
\f[]
.fi
+.SS File Caching
+.PP
+\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
+.PP
+These flags control the VFS file caching options.
+The VFS layer is used by rclone mount to make a cloud storage system work
+more like a normal file system.
+.PP
+You\[aq]ll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+\-\-vfs\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\f[]
+.fi
+.PP
+If run with \f[C]\-vv\f[] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]\-\-cache\-dir\f[] or setting the
+appropriate environment variable.
+.PP
+The cache has 4 different modes selected by
+\f[C]\-\-vfs\-cache\-mode\f[].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won\[aq]t get
+written back to the remote.
+However they will still be in the on disk cache.
+.SS \-\-vfs\-cache\-mode off
+.PP
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode minimal
+.PP
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+while using minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried up to \-\-low\-level\-retries
+times.
+.SS \-\-vfs\-cache\-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When a file is opened for read it will be downloaded in its entirety
+first.
+.PP
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+.PP
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote.
+It will be purged on a schedule according to
+\f[C]\-\-vfs\-cache\-max\-age\f[].
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload or download fails it will be retried up to
+\-\-low\-level\-retries times.
.IP
.nf
\f[C]
@@ -1453,25 +1822,28 @@ rclone\ mount\ remote:path\ /path/to/mountpoint\ [flags]
.IP
.nf
\f[C]
-\ \ \ \ \ \ \-\-allow\-non\-empty\ \ \ \ \ \ \ \ \ \ \ Allow\ mounting\ over\ a\ non\-empty\ directory.
-\ \ \ \ \ \ \-\-allow\-other\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ other\ users.
-\ \ \ \ \ \ \-\-allow\-root\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ root\ user.
-\ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v.
-\ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode.
-\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
-\ \ \ \ \ \ \-\-fuse\-flag\ stringArray\ \ \ \ \ Flags\ or\ arguments\ to\ be\ passed\ direct\ to\ libfuse/WinFsp.\ Repeat\ if\ required.
-\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
-\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ mount
-\ \ \ \ \ \ \-\-max\-read\-ahead\ int\ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k)
-\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
-\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
-\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
-\ \ \-o,\ \-\-option\ stringArray\ \ \ \ \ \ \ \ Option\ for\ libfuse/WinFsp.\ Repeat\ if\ required.
-\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
-\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
-\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
-\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.
-\ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used.
+\ \ \ \ \ \ \-\-allow\-non\-empty\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ mounting\ over\ a\ non\-empty\ directory.
+\ \ \ \ \ \ \-\-allow\-other\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ other\ users.
+\ \ \ \ \ \ \-\-allow\-root\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Allow\ access\ to\ root\ user.
+\ \ \ \ \ \ \-\-debug\-fuse\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Debug\ the\ FUSE\ internals\ \-\ needs\ \-v.
+\ \ \ \ \ \ \-\-default\-permissions\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ enforce\ access\ control\ based\ on\ the\ file\ mode.
+\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
+\ \ \ \ \ \ \-\-fuse\-flag\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ Flags\ or\ arguments\ to\ be\ passed\ direct\ to\ libfuse/WinFsp.\ Repeat\ if\ required.
+\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ mount
+\ \ \ \ \ \ \-\-max\-read\-ahead\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ The\ number\ of\ bytes\ that\ can\ be\ prefetched\ for\ sequential\ reads.\ (default\ 128k)
+\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
+\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
+\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \-o,\ \-\-option\ stringArray\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Option\ for\ libfuse/WinFsp.\ Repeat\ if\ required.
+\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
+\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
+\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.
+\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\ \ \ \ \ \ \-\-write\-back\-cache\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Makes\ kernel\ buffer\ writes\ before\ sending\ them\ to\ rclone.\ Without\ this,\ writethrough\ caching\ is\ used.
\f[]
.fi
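+.PP
+For example, to mount with the VFS file cache in writes mode so files
+can be opened for both read and write (\f[C]remote:path\f[] and the
+mountpoint are placeholders), you could run:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ substitute\ your\ own\ remote\ and\ mountpoint
+rclone\ mount\ remote:path\ /path/to/mountpoint\ \-\-vfs\-cache\-mode\ writes
+\f[]
+.fi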
.SS rclone moveto
@@ -1651,6 +2023,9 @@ This removes any empty directories (or directories that only contain
empty directories) under the path that it finds, including the path if
it has nothing in.
.PP
+If you supply the \-\-leave\-root flag, it will not remove the root
+directory.
+.PP
This is useful for tidying up remotes that rclone has left a lot of
empty directories in.
.IP
@@ -1663,7 +2038,391 @@ rclone\ rmdirs\ remote:path\ [flags]
.IP
.nf
\f[C]
-\ \ \-h,\ \-\-help\ \ \ help\ for\ rmdirs
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ help\ for\ rmdirs
+\ \ \ \ \ \ \-\-leave\-root\ \ \ Do\ not\ remove\ root\ directory\ if\ empty
+\f[]
+.fi
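+.PP
+For example, to prune empty directories under \f[C]remote:path\f[]
+while keeping \f[C]remote:path\f[] itself, you could run:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ remote:path\ is\ a\ placeholder
+rclone\ rmdirs\ \-\-leave\-root\ remote:path
+\f[]
+.fi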
+.SS rclone serve
+.PP
+Serve a remote over a protocol.
+.SS Synopsis
+.PP
+rclone serve is used to serve a remote over a given protocol.
+This command requires the use of a subcommand to specify the protocol,
+eg
+.IP
+.nf
+\f[C]
+rclone\ serve\ http\ remote:
+\f[]
+.fi
+.PP
+Each subcommand has its own options which you can see in their help.
+.IP
+.nf
+\f[C]
+rclone\ serve\ <protocol>\ [opts]\ <remote>\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ help\ for\ serve
+\f[]
+.fi
+.SS rclone serve http
+.PP
+Serve the remote over HTTP.
+.SS Synopsis
+.PP
+rclone serve http implements a basic web server to serve the remote over
+HTTP.
+This can be viewed in a web browser or you can make a remote of type
+http read from it.
+.PP
+Use \-\-addr to specify which IP address and port the server should
+listen on, eg \-\-addr 1.2.3.4:8000 or \-\-addr :8080 to listen to all
+IPs.
+By default it only listens on localhost.
+.PP
+You can use the filter flags (eg \-\-include, \-\-exclude) to control
+what is served.
+.PP
+The server will log errors.
+Use \-v to see access logs.
+.PP
+\-\-bwlimit will be respected for file transfers.
+Use \-\-stats to control the stats printing.
+.SS Directory Cache
+.PP
+Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend.
+Changes made locally in the mount may appear immediately or invalidate
+the cache.
+However, changes done on the remote will only be picked up once the
+cache expires.
+.PP
+Alternatively, you can send a \f[C]SIGHUP\f[] signal to rclone for it to
+flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+.IP
+.nf
+\f[C]
+kill\ \-SIGHUP\ $(pidof\ rclone)
+\f[]
+.fi
+.SS File Caching
+.PP
+\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
+.PP
+These flags control the VFS file caching options.
+The VFS layer is used by rclone mount to make a cloud storage system work
+more like a normal file system.
+.PP
+You\[aq]ll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+\-\-vfs\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\f[]
+.fi
+.PP
+If run with \f[C]\-vv\f[] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]\-\-cache\-dir\f[] or setting the
+appropriate environment variable.
+.PP
+The cache has 4 different modes selected by
+\f[C]\-\-vfs\-cache\-mode\f[].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won\[aq]t get
+written back to the remote.
+However they will still be in the on disk cache.
+.SS \-\-vfs\-cache\-mode off
+.PP
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode minimal
+.PP
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+while using minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried up to \-\-low\-level\-retries
+times.
+.SS \-\-vfs\-cache\-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When a file is opened for read it will be downloaded in its entirety
+first.
+.PP
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+.PP
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote.
+It will be purged on a schedule according to
+\f[C]\-\-vfs\-cache\-max\-age\f[].
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload or download fails it will be retried up to
+\-\-low\-level\-retries times.
+.IP
+.nf
+\f[C]
+rclone\ serve\ http\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ to\ bind\ server\ to.\ (default\ "localhost:8080")
+\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
+\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ http
+\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
+\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
+\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
+\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
+\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2)
+\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\f[]
+.fi
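+.PP
+For example, to serve \f[C]remote:path\f[] read\-only on port 8080 on
+all interfaces (\f[C]remote:path\f[] is a placeholder), you could run:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ substitute\ your\ own\ remote
+rclone\ serve\ http\ \-\-addr\ :8080\ \-\-read\-only\ remote:path
+\f[]
+.fi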
+.SS rclone serve webdav
+.PP
+Serve remote:path over webdav.
+.SS Synopsis
+.PP
+rclone serve webdav implements a basic webdav server to serve the remote
+over HTTP via the webdav protocol.
+This can be viewed with a webdav client or you can make a remote of type
+webdav to read and write it.
+.PP
+NB at the moment each directory listing reads the start of each file
+which is undesirable: see https://github.com/golang/go/issues/22577
+.SS Directory Cache
+.PP
+Using the \f[C]\-\-dir\-cache\-time\f[] flag, you can set how long a
+directory should be considered up to date and not refreshed from the
+backend.
+Changes made locally in the mount may appear immediately or invalidate
+the cache.
+However, changes done on the remote will only be picked up once the
+cache expires.
+.PP
+Alternatively, you can send a \f[C]SIGHUP\f[] signal to rclone for it to
+flush all directory caches, regardless of how old they are.
+Assuming only one rclone instance is running, you can reset the cache
+like this:
+.IP
+.nf
+\f[C]
+kill\ \-SIGHUP\ $(pidof\ rclone)
+\f[]
+.fi
+.SS File Caching
+.PP
+\f[B]NB\f[] File caching is \f[B]EXPERIMENTAL\f[] \- use with care!
+.PP
+These flags control the VFS file caching options.
+The VFS layer is used by rclone mount to make a cloud storage system work
+more like a normal file system.
+.PP
+You\[aq]ll need to enable VFS caching if you want, for example, to read
+and write simultaneously to a file.
+See below for more details.
+.PP
+Note that the VFS cache works in addition to the cache backend and you
+may find that you need one or the other or both.
+.IP
+.nf
+\f[C]
+\-\-vfs\-cache\-dir\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Directory\ rclone\ will\ use\ for\ caching.
+\-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\f[]
+.fi
+.PP
+If run with \f[C]\-vv\f[] rclone will print the location of the file
+cache.
+The files are stored in the user cache file area which is OS dependent
+but can be controlled with \f[C]\-\-cache\-dir\f[] or setting the
+appropriate environment variable.
+.PP
+The cache has 4 different modes selected by
+\f[C]\-\-vfs\-cache\-mode\f[].
+The higher the cache mode the more compatible rclone becomes at the cost
+of using disk space.
+.PP
+Note that files are written back to the remote only when they are closed
+so if rclone is quit or dies with open files then these won\[aq]t get
+written back to the remote.
+However they will still be in the on disk cache.
+.SS \-\-vfs\-cache\-mode off
+.PP
+In this mode the cache will read directly from the remote and write
+directly to the remote without caching anything on disk.
+.PP
+This will mean some operations are not possible
+.IP \[bu] 2
+Files can\[aq]t be opened for both read AND write
+.IP \[bu] 2
+Files opened for write can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files open for read with O_TRUNC will be opened write only
+.IP \[bu] 2
+Files open for write only will behave as if O_TRUNC was supplied
+.IP \[bu] 2
+Open modes O_APPEND, O_TRUNC are ignored
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode minimal
+.PP
+This is very similar to "off" except that files opened for read AND
+write will be buffered to disk.
+This means that files opened for write will be a lot more compatible,
+while using minimal disk space.
+.PP
+These operations are not possible
+.IP \[bu] 2
+Files opened for write only can\[aq]t be seeked
+.IP \[bu] 2
+Existing files opened for write must have O_TRUNC set
+.IP \[bu] 2
+Files opened for write only will ignore O_APPEND, O_TRUNC
+.IP \[bu] 2
+If an upload fails it can\[aq]t be retried
+.SS \-\-vfs\-cache\-mode writes
+.PP
+In this mode files opened for read only are still read directly from the
+remote, write only and read/write files are buffered to disk first.
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload fails it will be retried up to \-\-low\-level\-retries
+times.
+.SS \-\-vfs\-cache\-mode full
+.PP
+In this mode all reads and writes are buffered to and from disk.
+When a file is opened for read it will be downloaded in its entirety
+first.
+.PP
+This may be appropriate for your needs, or you may prefer to look at the
+cache backend which does a much more sophisticated job of caching,
+including caching directory hierarchies and chunks of files.
+.PP
+In this mode, unlike the others, when a file is written to the disk, it
+will be kept on the disk after it is written to the remote.
+It will be purged on a schedule according to
+\f[C]\-\-vfs\-cache\-max\-age\f[].
+.PP
+This mode should support all normal file system operations.
+.PP
+If an upload or download fails it will be retried up to
+\-\-low\-level\-retries times.
+.IP
+.nf
+\f[C]
+rclone\ serve\ webdav\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \ \ \ \ \-\-addr\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ IPaddress:Port\ to\ bind\ server\ to.\ (default\ "localhost:8081")
+\ \ \ \ \ \ \-\-dir\-cache\-time\ duration\ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ cache\ directory\ entries\ for.\ (default\ 5m0s)
+\ \ \ \ \ \ \-\-gid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ gid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ webdav
+\ \ \ \ \ \ \-\-no\-checksum\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ compare\ checksums\ on\ up/download.
+\ \ \ \ \ \ \-\-no\-modtime\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ read/write\ the\ modification\ time\ (can\ speed\ things\ up).
+\ \ \ \ \ \ \-\-no\-seek\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Don\[aq]t\ allow\ seeking\ in\ files.
+\ \ \ \ \ \ \-\-poll\-interval\ duration\ \ \ \ \ \ \ \ \ \ \ \ \ Time\ to\ wait\ between\ polling\ for\ changes.\ Must\ be\ smaller\ than\ dir\-cache\-time.\ Only\ on\ supported\ remotes.\ Set\ to\ 0\ to\ disable.\ (default\ 1m0s)
+\ \ \ \ \ \ \-\-read\-only\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Mount\ read\-only.
+\ \ \ \ \ \ \-\-uid\ uint32\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ uid\ field\ set\ by\ the\ filesystem.\ (default\ 502)
+\ \ \ \ \ \ \-\-umask\ int\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Override\ the\ permission\ bits\ set\ by\ the\ filesystem.\ (default\ 2)
+\ \ \ \ \ \ \-\-vfs\-cache\-max\-age\ duration\ \ \ \ \ \ \ \ \ Max\ age\ of\ objects\ in\ the\ cache.\ (default\ 1h0m0s)
+\ \ \ \ \ \ \-\-vfs\-cache\-mode\ string\ \ \ \ \ \ \ \ \ \ \ \ \ \ Cache\ mode\ off|minimal|writes|full\ (default\ "off")
+\ \ \ \ \ \ \-\-vfs\-cache\-poll\-interval\ duration\ \ \ Interval\ to\ poll\ the\ cache\ for\ stale\ objects.\ (default\ 1m0s)
+\f[]
+.fi
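+.PP
+For example, to serve \f[C]remote:path\f[] over webdav on the default
+localhost address with write caching enabled (\f[C]remote:path\f[] is a
+placeholder), you could run:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ substitute\ your\ own\ remote
+rclone\ serve\ webdav\ \-\-vfs\-cache\-mode\ writes\ remote:path
+\f[]
+.fi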
+.SS rclone touch
+.PP
+Create new file or change file modification time.
+.SS Synopsis
+.PP
+Create new file or change file modification time.
+.IP
+.nf
+\f[C]
+rclone\ touch\ remote:path\ [flags]
+\f[]
+.fi
+.SS Options
+.IP
+.nf
+\f[C]
+\ \ \-h,\ \-\-help\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ help\ for\ touch
+\ \ \-C,\ \-\-no\-create\ \ \ \ \ \ \ \ \ \ Do\ not\ create\ the\ file\ if\ it\ does\ not\ exist.
+\ \ \-t,\ \-\-timestamp\ string\ \ \ Change\ the\ modification\ times\ to\ the\ specified\ time\ instead\ of\ the\ current\ time\ of\ day.\ The\ argument\ is\ of\ the\ form\ \[aq]YYMMDD\[aq]\ (ex.\ 17.10.30)\ or\ \[aq]YYYY\-MM\-DDTHH:MM:SS\[aq]\ (ex.\ 2006\-01\-02T15:04:05)
\f[]
.fi
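+.PP
+For example, to set the modification time of an existing file without
+creating it if missing (the remote and filename are placeholders), you
+could run:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ substitute\ your\ own\ file
+rclone\ touch\ \-C\ \-t\ 2006\-01\-02T15:04:05\ remote:path/file.txt
+\f[]
+.fi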
.SS rclone tree
@@ -1938,7 +2697,7 @@ today\[aq]s date.
Local address to bind to for outgoing connections.
This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or
host name.
-If the host name doesn\[aq]t resolve or resoves to more than one IP
+If the host name doesn\[aq]t resolve or resolves to more than one IP
address it will give an error.
.SS \-\-bwlimit=BANDWIDTH_SPEC
.PP
@@ -2158,6 +2917,9 @@ well as modification.
This can be useful as an additional layer of protection for immutable or
append\-only data sets (notably backup archives), where modification
implies corruption and should not be propagated.
+.SS \-\-leave\-root
+.PP
+During rmdirs it will not remove the root directory, even if it\[aq]s empty.
.SS \-\-log\-file=FILE
.PP
Log all of rclone\[aq]s output to FILE.
@@ -2168,7 +2930,7 @@ See the Logging section (#logging) for more info.
.SS \-\-log\-level LEVEL
.PP
This sets the log level for rclone.
-The default log level is \f[C]INFO\f[].
+The default log level is \f[C]NOTICE\f[].
.PP
\f[C]DEBUG\f[] is equivalent to \f[C]\-vv\f[].
It outputs lots of debug info \- useful for bug reports and really
@@ -2613,14 +3375,21 @@ the docs for the remote in question.
.PP
Write CPU profile to file.
This can be analysed with \f[C]go\ tool\ pprof\f[].
-.SS \-\-dump\-auth
+.SS \-\-dump flag,flag,flag
.PP
-Dump HTTP headers \- will contain sensitive info such as
-\f[C]Authorization:\f[] headers \- use \f[C]\-\-dump\-headers\f[] to
-dump without \f[C]Authorization:\f[] headers.
+The \f[C]\-\-dump\f[] flag takes a comma separated list of flags to dump
+info about.
+These are:
+.SS \-\-dump headers
+.PP
+Dump HTTP headers with \f[C]Authorization:\f[] lines removed.
+May still contain sensitive info.
Can be very verbose.
Useful for debugging only.
-.SS \-\-dump\-bodies
+.PP
+Use \f[C]\-\-dump\ auth\f[] if you do want the \f[C]Authorization:\f[]
+headers.
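+.PP
+For example, to watch the HTTP headers (with auth removed) while
+copying (\f[C]source:path\f[] and \f[C]dest:path\f[] are placeholders),
+you could run something like:
+.IP
+.nf
+\f[C]
+#\ illustrative\ \-\ substitute\ your\ own\ paths
+rclone\ copy\ \-\-dump\ headers\ source:path\ dest:path
+\f[]
+.fi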
+.SS \-\-dump bodies
.PP
Dump HTTP headers and bodies \- may contain sensitive info.
Can be very verbose.
@@ -2628,19 +3397,27 @@ Useful for debugging only.
.PP
Note that the bodies are buffered in memory so don\[aq]t use this for
enormous files.
-.SS \-\-dump\-filters
+.SS \-\-dump requests
+.PP
+Like \f[C]\-\-dump\ bodies\f[] but dumps the request bodies and the
+response headers.
+Useful for debugging download problems.
+.SS \-\-dump responses
+.PP
+Like \f[C]\-\-dump\ bodies\f[] but dumps the response bodies and the
+request headers.
+Useful for debugging upload problems.
+.SS \-\-dump auth
+.PP
+Dump HTTP headers \- will contain sensitive info such as
+\f[C]Authorization:\f[] headers \- use \f[C]\-\-dump\ headers\f[] to
+dump without \f[C]Authorization:\f[] headers.
+Can be very verbose.
+Useful for debugging only.
+.SS \-\-dump filters
.PP
Dump the filters to the output.
Useful to see exactly what include and exclude options are filtering on.
-.SS \-\-dump\-headers
-.PP
-Dump HTTP headers with \f[C]Authorization:\f[] lines removed.
-May still contain sensitive info.
-Can be very verbose.
-Useful for debugging only.
-.PP
-Use \f[C]\-\-dump\-auth\f[] if you do want the \f[C]Authorization:\f[]
-headers.
.SS \-\-memprofile=FILE
.PP
Write memory profile to file.
@@ -2706,7 +3483,7 @@ For the filtering options
.IP \[bu] 2
\f[C]\-\-max\-age\f[]
.IP \[bu] 2
-\f[C]\-\-dump\-filters\f[]
+\f[C]\-\-dump\ filters\f[]
.PP
See the filtering section (https://rclone.org/filtering/).
.SS Logging
@@ -2764,6 +3541,26 @@ can see that any previous error messages may not be valid after the
retry.
If rclone has done a retry it will log a high priority message if the
retry was successful.
+.SS List of exit codes
+.IP \[bu] 2
+\f[C]0\f[] \- success
+.IP \[bu] 2
+\f[C]1\f[] \- Syntax or usage error
+.IP \[bu] 2
+\f[C]2\f[] \- Error not otherwise categorised
+.IP \[bu] 2
+\f[C]3\f[] \- Directory not found
+.IP \[bu] 2
+\f[C]4\f[] \- File not found
+.IP \[bu] 2
+\f[C]5\f[] \- Temporary error (one that more retries might fix) (Retry
+errors)
+.IP \[bu] 2
+\f[C]6\f[] \- Less serious errors (like 461 errors from dropbox)
+(NoRetry errors)
+.IP \[bu] 2
+\f[C]7\f[] \- Fatal error (one that more retries won\[aq]t fix, like
+account suspended) (Fatal errors)
.SS Environment Variables
.PP
Rclone can be configured entirely using environment variables.
@@ -2798,8 +3595,8 @@ be found by looking at the help for \f[C]\-\-config\f[] in
\f[C]rclone\ help\f[]).
.PP
To find the name of the environment variable, you need to set, take
-\f[C]RCLONE_\f[] + name of remote + \f[C]_\f[] + name of config file
-option and make it all uppercase.
+\f[C]RCLONE_CONFIG_\f[] + name of remote + \f[C]_\f[] + name of config
+file option and make it all uppercase.
.PP
For example, to configure an S3 remote named \f[C]mys3:\f[] without a
config file (using unix ways of setting environment variables):
@@ -3015,7 +3812,7 @@ h[ae]llo\ \-\ matches\ "hello"
.fi
.PP
A \f[C]{\f[] and \f[C]}\f[] define a choice between elements.
-It should contain a comma seperated list of patterns, any of which might
+It should contain a comma separated list of patterns, any of which might
match.
These patterns can contain wildcards.
.IP
@@ -3141,6 +3938,11 @@ type.
.IP \[bu] 2
\f[C]\-\-filter\-from\f[]
.PP
+\f[B]Important\f[] You should not use \f[C]\-\-include*\f[] together
+with \f[C]\-\-exclude*\f[].
+It may produce different results than you expect.
+In that case try using \f[C]\-\-filter*\f[] instead, as in the example
+below.
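+.PP
+For example, rather than mixing include and exclude flags, both rules
+can be expressed as an ordered filter list (a sketch; the paths and
+patterns are illustrative):
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-filter\ "+\ *.jpg"\ \-\-filter\ "\-\ *"\ /src\ remote:dst
+\f[]
+.fi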
+.PP
Note that all the options of the same type are processed together in the
order above, regardless of what order they were placed on the command
line.
@@ -3435,7 +4237,7 @@ as these are now excluded from the sync.
.PP
Always test first with \f[C]\-\-dry\-run\f[] and \f[C]\-v\f[] before
using this flag.
-.SS \f[C]\-\-dump\-filters\f[] \- dump the filters to the output
+.SS \f[C]\-\-dump\ filters\f[] \- dump the filters to the output
.PP
This dumps the defined filters to the output as regular expressions.
.PP
@@ -3457,9 +4259,39 @@ In Windows the expansion is done by the command not the shell so this
should work fine
.IP \[bu] 2
\f[C]\-\-include\ *.jpg\f[]
+.SS Exclude directory based on a file
+.PP
+It is possible to exclude a directory based on a file present in that
+directory.
+The filename should be specified using the
+\f[C]\-\-exclude\-if\-present\f[] flag.
+This flag takes priority over the other filtering flags.
+.PP
+Imagine you have the following directory structure:
+.IP
+.nf
+\f[C]
+dir1/file1
+dir1/dir2/file2
+dir1/dir2/dir3/file3
+dir1/dir2/dir3/.ignore
+\f[]
+.fi
+.PP
+You can exclude \f[C]dir3\f[] from sync by running the following
+command:
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-exclude\-if\-present\ .ignore\ dir1\ remote:backup
+\f[]
+.fi
+.PP
+Currently only one filename is supported, i.e.
+\f[C]\-\-exclude\-if\-present\f[] should not be used multiple times.
.SH Overview of cloud storage systems
.PP
-Each cloud storage system is slighly different.
+Each cloud storage system is slightly different.
Rclone attempts to provide a unified interface to them, but some
underlying differences show through.
.SS Features
@@ -3653,6 +4485,19 @@ T}@T{
R/W
T}
T{
+pCloud
+T}@T{
+MD5, SHA1
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+W
+T}
+T{
QingStor
T}@T{
MD5
@@ -3679,6 +4524,19 @@ T}@T{
\-
T}
T{
+WebDAV
+T}@T{
+\-
+T}@T{
+Yes ††
+T}@T{
+Depends
+T}@T{
+No
+T}@T{
+\-
+T}
+T{
Yandex Disk
T}@T{
MD5
@@ -3722,6 +4580,8 @@ This is an SHA256 sum of all the 4MB block SHA256s.
‡ SFTP supports checksums if the same login has shell access and
\f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the
remote\[aq]s PATH.
+.PP
+†† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
.SS ModTime
.PP
The cloud storage system supports setting modification times on objects.
@@ -4032,6 +4892,23 @@ T}@T{
Yes
T}
T{
+pCloud
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}
+T{
QingStor
T}@T{
No
@@ -4066,6 +4943,23 @@ T}@T{
Yes
T}
T{
+WebDAV
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+Yes
+T}@T{
+No
+T}@T{
+No
+T}@T{
+Yes ‡
+T}
+T{
Yandex Disk
T}@T{
Yes
@@ -4108,6 +5002,8 @@ directory.
† Note Swift and Hubic implement this in order to delete directory
markers but they don\[aq]t actually have a quicker way of deleting files
other than deleting them individually.
+.PP
+‡ StreamUpload is not supported with Nextcloud
.SS Copy
.PP
Used when copying an object to and from the same remote.
@@ -4444,7 +5340,7 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
13\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
Storage>\ 2
-Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
+Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step
\ \ \ \\\ "false"
@@ -4670,6 +5566,8 @@ Secret Access Key: \f[C]AWS_SECRET_ACCESS_KEY\f[] or
Session Token: \f[C]AWS_SESSION_TOKEN\f[]
.RE
.IP \[bu] 2
+Running \f[C]rclone\f[] in an ECS task with an IAM role
+.IP \[bu] 2
Running \f[C]rclone\f[] on an EC2 instance with an IAM role
.PP
If none of these option actually end up providing \f[C]rclone\f[] with
@@ -4796,7 +5694,7 @@ Choose\ a\ number\ from\ below
10)\ s3
11)\ yandex
type>\ 10
-Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
+Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ *\ Enter\ AWS\ credentials\ in\ the\ next\ step
\ 1)\ false
@@ -4864,6 +5762,70 @@ removed).
Because this is a json dump, it is encoding the \f[C]/\f[] as
\f[C]\\/\f[], so if you use the secret key as \f[C]xxxxxx/xxxx\f[] it
will work fine.
+.SS DigitalOcean Spaces
+.PP
+Spaces (https://www.digitalocean.com/products/object-storage/) is an
+S3\-interoperable (https://developers.digitalocean.com/documentation/spaces/)
+object storage service from cloud provider DigitalOcean.
+.PP
+To connect to DigitalOcean Spaces you will need an access key and secret
+key.
+These can be retrieved on the "Applications &
+API (https://cloud.digitalocean.com/settings/api/tokens)" page of the
+DigitalOcean control panel.
+They will be needed when prompted by \f[C]rclone\ config\f[] for your
+\f[C]access_key_id\f[] and \f[C]secret_access_key\f[].
+.PP
+When prompted for a \f[C]region\f[] or \f[C]location_constraint\f[],
+press enter to use the default value.
+The region must be included in the \f[C]endpoint\f[] setting (e.g.
+\f[C]nyc3.digitaloceanspaces.com\f[]).
+The default values can be used for other settings.
+.PP
+Going through the whole process of creating a new remote by running
+\f[C]rclone\ config\f[], each prompt should be answered as shown below:
+.IP
+.nf
+\f[C]
+Storage>\ 2
+env_auth>\ 1
+access_key_id>\ YOUR_ACCESS_KEY
+secret_access_key>\ YOUR_SECRET_KEY
+region>\
+endpoint>\ nyc3.digitaloceanspaces.com
+location_constraint>\
+acl>\
+storage_class>\
+\f[]
+.fi
+.PP
+The resulting configuration file should look like:
+.IP
+.nf
+\f[C]
+[spaces]
+type\ =\ s3
+env_auth\ =\ false
+access_key_id\ =\ YOUR_ACCESS_KEY
+secret_access_key\ =\ YOUR_SECRET_KEY
+region\ =\
+endpoint\ =\ nyc3.digitaloceanspaces.com
+location_constraint\ =\
+acl\ =\
+server_side_encryption\ =\
+storage_class\ =\
+\f[]
+.fi
+.PP
+Once configured, you can create a new Space and begin copying files.
+For example:
+.IP
+.nf
+\f[C]
+rclone\ mkdir\ spaces:my\-new\-space
+rclone\ copy\ /path/to/files\ spaces:my\-new\-space
+\f[]
+.fi
.SS Minio
.PP
Minio (https://minio.io/) is an object storage server built for cloud
@@ -4965,7 +5927,7 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "s3"
[snip]
Storage>\ s3
-Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
+Get\ AWS\ credentials\ from\ runtime\ (environment\ variables\ or\ EC2/ECS\ meta\ data\ if\ no\ env\ vars).\ Only\ applies\ if\ access_key_id\ and\ secret_access_key\ is\ blank.
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Enter\ AWS\ credentials\ in\ the\ next\ step
\ \ \ \\\ "false"
@@ -5188,14 +6150,29 @@ API method to set the modification time independent of doing an upload.
The SHA1 checksums of the files are checked on upload and download and
will be used in the syncing process.
.PP
-Large files which are uploaded in chunks will store their SHA1 on the
-object as \f[C]X\-Bz\-Info\-large_file_sha1\f[] as recommended by
-Backblaze.
+Large files (bigger than the limit in \f[C]\-\-b2\-upload\-cutoff\f[])
+which are uploaded in chunks will store their SHA1 on the object as
+\f[C]X\-Bz\-Info\-large_file_sha1\f[] as recommended by Backblaze.
+.PP
+For a large file to be uploaded with an SHA1 checksum, the source needs
+to support SHA1 checksums.
+The local disk supports SHA1 checksums so large file transfers from
+local disk will have an SHA1.
+See the overview (/overview/#features) for exactly which remotes support
+SHA1.
+.PP
+Sources which don\[aq]t support SHA1, in particular \f[C]crypt\f[] will
+upload large files without SHA1 checksums.
+This may be fixed in the future (see
+#1767 (https://github.com/ncw/rclone/issues/1767)).
+.PP
+File sizes below \f[C]\-\-b2\-upload\-cutoff\f[] will always have an
+SHA1 regardless of the source.
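+.PP
+For example, to raise the cutoff so that more files are uploaded in a
+single request with a full SHA1 (the value is illustrative):
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-b2\-upload\-cutoff\ 400M\ /home/source\ b2:bucket
+\f[]
+.fi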
.SS Transfers
.PP
Backblaze recommends that you do lots of transfers simultaneously for
maximum speed.
-In tests from my SSD equiped laptop the optimum setting is about
+In tests from my SSD equipped laptop the optimum setting is about
\f[C]\-\-transfers\ 32\f[] though higher numbers may be used for a
slight speed improvement.
The optimum number for you may vary depending on your hardware, how big
@@ -5232,8 +6209,8 @@ be deleted then the bucket will be deleted.
However \f[C]delete\f[] will cause the current versions of the files to
become hidden old versions.
.PP
-Here is a session showing the listing and and retreival of an old
-version followed by a \f[C]cleanup\f[] of the old versions.
+Here is a session showing the listing and retrieval of an old version
+followed by a \f[C]cleanup\f[] of the old versions.
.PP
Show current version and all the versions with \f[C]\-\-b2\-versions\f[]
flag.
@@ -5251,7 +6228,7 @@ $\ rclone\ \-q\ \-\-b2\-versions\ ls\ b2:cleanup\-test
\f[]
.fi
.PP
-Retreive an old verson
+Retrieve an old version
.IP
.nf
\f[C]
@@ -5320,16 +6297,6 @@ start and finish the upload) and another 2 requests for each chunk:
/b2api/v1/b2_finish_large_file
\f[]
.fi
-.SS B2 with crypt
-.PP
-When using B2 with \f[C]crypt\f[] files are encrypted into a temporary
-location and streamed from there.
-This is required to calculate the encrypted file\[aq]s checksum before
-beginning the upload.
-On Windows the %TMPDIR% environment variable is used as the temporary
-location.
-If the file requires chunking, both the chunking and encryption will
-take place in memory.
.SS Specific options
.PP
Here are the command line options specific to this cloud storage system.
@@ -5651,6 +6618,366 @@ rclone maps this to and from an identical looking unicode equivalent
\f[C]\\f[].
.PP
Box only supports filenames up to 255 characters in length.
+.SS Cache (BETA)
+.PP
+The \f[C]cache\f[] remote wraps another existing remote and stores the
+file structure and its data for long\-running tasks like
+\f[C]rclone\ mount\f[].
+.PP
+To get started you just need to have an existing remote which can be
+configured with \f[C]cache\f[].
+.PP
+Here is an example of how to make a remote called \f[C]test\-cache\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+r)\ Rename\ remote
+c)\ Copy\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/r/c/s/q>\ n
+name>\ test\-cache
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\&...
+\ 5\ /\ Cache\ a\ remote
+\ \ \ \\\ "cache"
+\&...
+Storage>\ 5
+Remote\ to\ cache.
+Normally\ should\ contain\ a\ \[aq]:\[aq]\ and\ a\ path,\ eg\ "myremote:path/to/dir",
+"myremote:bucket"\ or\ maybe\ "myremote:"\ (not\ recommended).
+remote>\ local:/test
+Optional:\ The\ URL\ of\ the\ Plex\ server
+plex_url>\ http://127.0.0.1:32400
+Optional:\ The\ username\ of\ the\ Plex\ user
+plex_username>\ dummyusername
+Optional:\ The\ password\ of\ the\ Plex\ user
+y)\ Yes\ type\ in\ my\ own\ password
+g)\ Generate\ random\ password
+n)\ No\ leave\ this\ optional\ password\ blank
+y/g/n>\ y
+Enter\ the\ password:
+password:
+Confirm\ the\ password:
+password:
+The\ size\ of\ a\ chunk.\ Lower\ value\ good\ for\ slow\ connections\ but\ can\ affect\ seamless\ reading.
+Default:\ 5M
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ 1MB
+\ \ \ \\\ "1m"
+\ 2\ /\ 5\ MB
+\ \ \ \\\ "5M"
+\ 3\ /\ 10\ MB
+\ \ \ \\\ "10M"
+chunk_size>\ 2
+How\ much\ time\ should\ object\ info\ (file\ size,\ file\ hashes\ etc)\ be\ stored\ in\ cache.\ Use\ a\ very\ high\ value\ if\ you\ don\[aq]t\ plan\ on\ changing\ the\ source\ FS\ from\ outside\ the\ cache.
+Accepted\ units\ are:\ "s",\ "m",\ "h".
+Default:\ 5m
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ 1\ hour
+\ \ \ \\\ "1h"
+\ 2\ /\ 24\ hours
+\ \ \ \\\ "24h"
+\ 3\ /\ 48\ hours
+\ \ \ \\\ "48h"
+info_age>\ 2
+The\ maximum\ size\ of\ stored\ chunks.\ When\ the\ storage\ grows\ beyond\ this\ size,\ the\ oldest\ chunks\ will\ be\ deleted.
+Default:\ 10G
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ 500\ MB
+\ \ \ \\\ "500M"
+\ 2\ /\ 1\ GB
+\ \ \ \\\ "1G"
+\ 3\ /\ 10\ GB
+\ \ \ \\\ "10G"
+chunk_total_size>\ 3
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[test\-cache]
+remote\ =\ local:/test
+plex_url\ =\ http://127.0.0.1:32400
+plex_username\ =\ dummyusername
+plex_password\ =\ ***\ ENCRYPTED\ ***
+chunk_size\ =\ 5M
+info_age\ =\ 48h
+chunk_total_size\ =\ 10G
+\f[]
+.fi
+.PP
+You can then use it like this,
+.PP
+List directories in top level of your drive
+.IP
+.nf
+\f[C]
+rclone\ lsd\ test\-cache:
+\f[]
+.fi
+.PP
+List all the files in your drive
+.IP
+.nf
+\f[C]
+rclone\ ls\ test\-cache:
+\f[]
+.fi
+.PP
+To start a cached mount
+.IP
+.nf
+\f[C]
+rclone\ mount\ \-\-allow\-other\ test\-cache:\ /var/tmp/test\-cache
+\f[]
+.fi
+.SS Write Support
+.PP
+Writes are supported through \f[C]cache\f[].
+One caveat is that a mounted cache remote does not add any retry or
+fallback mechanism to the upload operation.
+This will depend on the implementation of the wrapped remote.
+.PP
+One special case is covered by \f[C]cache\-writes\f[]: when enabled, it
+caches the file data at the same time as the upload, making it
+available from the cache store as soon as the upload is finished.
+.SS Read Features
+.SS Multiple connections
+.PP
+To counter the high latency between a local PC running rclone and the
+cloud provider, the cache remote splits a read into multiple requests
+for smaller file chunks and combines them locally, so that the data is
+usually available just before the reader needs it.
+.PP
+This is similar to buffering when media files are played online.
+Rclone stays close to the current read position but always tries its
+best to stay ahead and prepare the data in advance.
+.SS Plex Integration
+.PP
+There is a direct integration with Plex which allows cache to detect
+during reading if the file is in playback or not.
+This helps cache adapt how it queries the cloud provider depending on
+what the data is needed for.
+.PP
+Scans will use a minimum number of workers (1) while during a confirmed
+playback cache will deploy the configured number of workers.
+.PP
+This integration opens the doorway to additional performance
+improvements which will be explored in the near future.
+.PP
+\f[B]Note:\f[] If Plex options are not configured, \f[C]cache\f[] will
+function with its configured options without adapting any of its
+settings.
+.PP
+To enable it, run \f[C]rclone\ config\f[] and add all the Plex options
+(endpoint, username and password) to your remote; it will then be
+enabled automatically.
+.PP
+Affected settings:
+.IP \[bu] 2
+\f[C]cache\-workers\f[]: \f[I]Configured value\f[] during confirmed
+playback, or \f[I]1\f[] all the other times
+.SS Known issues
+.SS Windows support \- Experimental
+.PP
+There are a couple of issues with Windows \f[C]mount\f[] functionality
+that still require investigation.
+It should be considered experimental for now, while fixes for this OS
+come in.
+.PP
+Most of the issues seem to be related to the difference between
+filesystems on Linux flavors and Windows, as cache is heavily dependent
+on them.
+.PP
+Any reports or feedback on how cache behaves on this OS is greatly
+appreciated.
+.IP \[bu] 2
+https://github.com/ncw/rclone/issues/1935
+.IP \[bu] 2
+https://github.com/ncw/rclone/issues/1907
+.IP \[bu] 2
+https://github.com/ncw/rclone/issues/1834
+.SS Risk of throttling
+.PP
+Future iterations of the cache backend will make use of the polling
+functionality of the cloud provider to synchronize and at the same time
+make writing through it more tolerant to failures.
+.PP
+There are a couple of enhancements in the works to add these but in the
+meantime there is a valid concern that expiring cache listings can lead
+to cloud provider throttling or bans due to the repeated queries this
+causes on very large mounts.
+.PP
+Some recommendations:
+.IP \[bu] 2
+don\[aq]t use a very small interval for entry information
+(\f[C]\-\-cache\-info\-age\f[])
+.IP \[bu] 2
+while writes aren\[aq]t yet optimised, you can still write through
+\f[C]cache\f[] which gives you the advantage of adding the file to the
+cache at the same time if configured to do so.
+.PP
+Future enhancements:
+.IP \[bu] 2
+https://github.com/ncw/rclone/issues/1937
+.IP \[bu] 2
+https://github.com/ncw/rclone/issues/1936
+.SS cache and crypt
+.PP
+One common scenario is to keep your data encrypted in the cloud provider
+using the \f[C]crypt\f[] remote.
+\f[C]crypt\f[] uses a similar technique to wrap around an existing
+remote and handles this translation in a seamless way.
+.PP
+There is an issue with wrapping the remotes in this order: \f[B]cloud
+remote\f[] \-> \f[B]crypt\f[] \-> \f[B]cache\f[]
+.PP
+During testing, I experienced a lot of bans with the remotes in this
+order.
+I suspect it might be related to how crypt opens files on the cloud
+provider which makes it think we\[aq]re downloading the full file
+instead of small chunks.
+Organizing the remotes in this order yields better results: \f[B]cloud
+remote\f[] \-> \f[B]cache\f[] \-> \f[B]crypt\f[]
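+.PP
+As a sketch, a config file wrapped in the recommended order might look
+like this (remote names and paths are illustrative, and unrelated
+options are elided):
+.IP
+.nf
+\f[C]
+[cloud]
+type\ =\ drive
+\&...
+
+[cloud\-cache]
+type\ =\ cache
+remote\ =\ cloud:encrypted
+
+[cloud\-crypt]
+type\ =\ crypt
+remote\ =\ cloud\-cache:
+\&...
+\f[]
+.fi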
+.SS Specific options
+.PP
+Here are the command line options specific to this cloud storage system.
+.SS \-\-cache\-chunk\-path=PATH
+.PP
+Path to where partial file data (chunks) is stored locally.
+The remote name is appended to the final path.
+.PP
+This setting follows \f[C]\-\-cache\-db\-path\f[]: if you specify a
+custom location for \f[C]\-\-cache\-db\-path\f[] and don\[aq]t specify
+one for \f[C]\-\-cache\-chunk\-path\f[], then
+\f[C]\-\-cache\-chunk\-path\f[] will use the same path as
+\f[C]\-\-cache\-db\-path\f[].
+.PP
+\f[B]Default\f[]: /cache\-backend/
+.PP
+\f[B]Example\f[]: /.cache/cache\-backend/test\-cache
+.SS \-\-cache\-db\-path=PATH
+.PP
+Path to where the file structure metadata (DB) is stored locally.
+The remote name is used as the DB file name.
+.PP
+\f[B]Default\f[]: /cache\-backend/
+.PP
+\f[B]Example\f[]: /.cache/cache\-backend/test\-cache
+.SS \-\-cache\-db\-purge
+.PP
+Flag to clear all the cached data for this remote before starting.
+.PP
+\f[B]Default\f[]: not set
+.SS \-\-cache\-chunk\-size=SIZE
+.PP
+The size of a chunk (partial file data).
+Use lower numbers for slower connections.
+.PP
+\f[B]Default\f[]: 5M
+.SS \-\-cache\-total\-chunk\-size=SIZE
+.PP
+The total size that the chunks can take up on the local disk.
+If \f[C]cache\f[] exceeds this value then it will start to delete the
+oldest chunks until it goes under this value.
+.PP
+\f[B]Default\f[]: 10G
+.SS \-\-cache\-chunk\-clean\-interval=DURATION
+.PP
+How often should \f[C]cache\f[] perform cleanups of the chunk storage.
+The default value should be ok for most people.
+If you find that \f[C]cache\f[] goes over
+\f[C]cache\-total\-chunk\-size\f[] too often then try to lower this
+value to force it to perform cleanups more often.
+.PP
+\f[B]Default\f[]: 1m
+.SS \-\-cache\-info\-age=DURATION
+.PP
+How long to keep file structure information (directory listings, file
+size, mod times etc) locally.
+.PP
+If all write operations are done through \f[C]cache\f[] then you can
+safely make this value very large as the cache store will also be
+updated in real time.
+.PP
+\f[B]Default\f[]: 6h
+.SS \-\-cache\-read\-retries=RETRIES
+.PP
+How many times to retry a read from a cache storage.
+.PP
+Since reading from a \f[C]cache\f[] stream is independent from
+downloading file data, readers can get to a point where there\[aq]s no
+more data in the cache.
+Most of the time this can indicate a connectivity issue if
+\f[C]cache\f[] isn\[aq]t able to provide file data anymore.
+.PP
+For really slow connections, increase this to a point where the stream
+is able to provide data, but expect playback to stutter.
+.PP
+\f[B]Default\f[]: 10
+.SS \-\-cache\-workers=WORKERS
+.PP
+How many workers should run in parallel to download chunks.
+.PP
+Higher values will mean more parallel processing (a faster CPU is
+needed) and more concurrent requests to the cloud provider.
+This impacts several aspects, like the cloud provider API limits and
+the stress on the hardware that rclone runs on, but it also means that
+streams will be more fluid and data will be available to readers much
+faster.
+.PP
+\f[B]Note\f[]: If the optional Plex integration is enabled then this
+setting will adapt to the type of reading performed and the value
+specified here will be used as a maximum number of workers to use.
+.PP
+\f[B]Default\f[]: 4
+.SS \-\-cache\-chunk\-no\-memory
+.PP
+By default, \f[C]cache\f[] will keep file data during streaming in RAM
+as well to provide it to readers as fast as possible.
+.PP
+This transient data is evicted as soon as it is read and the number of
+chunks stored doesn\[aq]t exceed the number of workers.
+However, depending on other settings like \f[C]cache\-chunk\-size\f[]
+and \f[C]cache\-workers\f[] this footprint can increase if there are
+parallel streams too (multiple files being read at the same time).
+.PP
+If the hardware permits it, leave this flag unset to get better overall
+performance during streaming, but set it to disable the RAM caching if
+memory is scarce on the local machine.
+.PP
+\f[B]Default\f[]: not set
+.SS \-\-cache\-rps=NUMBER
+.PP
+This setting places a hard limit on the number of requests per second
+that \f[C]cache\f[] will make to the cloud provider remote, and it
+tries to respect that value by inserting waits between reads.
+.PP
+If you find that you\[aq]re getting banned or limited on the cloud
+provider through cache, and know that a smaller number of requests per
+second will allow you to work with it, then you can use this setting.
+.PP
+A good balance of all the other settings should make this setting
+unnecessary, but it is available for more special cases.
+.PP
+\f[B]NOTE\f[]: This will limit the number of requests during streams but
+other API calls to the cloud provider like directory listings will still
+pass.
+.PP
+\f[B]Default\f[]: disabled
+.SS \-\-cache\-writes
+.PP
+If you need to read files immediately after you upload them through
+\f[C]cache\f[] you can enable this flag to have their data stored in the
+cache store at the same time during upload.
+.PP
+\f[B]Default\f[]: not set
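+.PP
+As an illustration, several of these options can be combined on a
+cached mount (a sketch, reusing the \f[C]test\-cache:\f[] remote from
+the example above):
+.IP
+.nf
+\f[C]
+rclone\ mount\ \-\-allow\-other\ \-\-cache\-workers\ 8\ \-\-cache\-info\-age\ 24h\ \-\-cache\-writes\ test\-cache:\ /var/tmp/test\-cache
+\f[]
+.fi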
.SS Crypt
.PP
The \f[C]crypt\f[] remote encrypts and decrypts another remote.
@@ -5726,6 +7053,13 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 3\ /\ Very\ simple\ filename\ obfuscation.
\ \ \ \\\ "obfuscate"
filename_encryption>\ 2
+Option\ to\ either\ encrypt\ directory\ names\ or\ leave\ them\ intact.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Encrypt\ directory\ names.
+\ \ \ \\\ "true"
+\ 2\ /\ Don\[aq]t\ encrypt\ directory\ names,\ leave\ them\ intact.
+\ \ \ \\\ "false"
+filename_encryption>\ 1
Password\ or\ pass\ phrase\ for\ encryption.
y)\ Yes\ type\ in\ my\ own\ password
g)\ Generate\ random\ password
@@ -5892,7 +7226,7 @@ file names can\[aq]t be as long (~156 characters)
.IP \[bu] 2
can use sub paths and copy single files
.IP \[bu] 2
-directory structure visibile
+directory structure visible
.IP \[bu] 2
identical files names will have identical uploaded names
.IP \[bu] 2
@@ -5921,7 +7255,7 @@ file names can be longer than standard encryption
.IP \[bu] 2
can use sub paths and copy single files
.IP \[bu] 2
-directory structure visibile
+directory structure visible
.IP \[bu] 2
identical files names will have identical uploaded names
.PP
@@ -5933,6 +7267,22 @@ should be OK on all providers.
.PP
There may be an even more secure file name encryption mode in the future
which will address the long file name problem.
+.SS Directory name encryption
+.PP
+Crypt offers the option of encrypting directory names or leaving them
+intact.
+There are two options:
+.PP
+True
+.PP
+Encrypts the whole file path including directory names.
+Example: \f[C]1/12/123.txt\f[] is encrypted to
+\f[C]p0e52nreeaj0a5ea7s64m4j72s/l42g6771hnv3an9cgc8cr2n1ng/qgm4avr35m5loi1th53ato71v0\f[]
+.PP
+False
+.PP
+Only encrypts file names, skipping directory names.
+Example: \f[C]1/12/123.txt\f[] is encrypted to
+\f[C]1/12/qgm4avr35m5loi1th53ato71v0\f[]
.SS Modified time and hashes
.PP
Crypt stores modification times using the underlying remote so support
@@ -5969,7 +7319,7 @@ This will have the following advantages
.IP \[bu] 2
you can use \f[C]rclone\ check\f[] between the encrypted remotes
.IP \[bu] 2
-you don\[aq]t decrypt and encrypt unecessarily
+you don\[aq]t decrypt and encrypt unnecessarily
.PP
For example, let\[aq]s say you have your original remote at
\f[C]remote:\f[] with the encrypted version at \f[C]eremote:\f[] with
@@ -6005,10 +7355,10 @@ The file has a header and is divided into chunks.
24 bytes Nonce (IV)
.PP
The initial nonce is generated from the operating systems crypto strong
-random number genrator.
+random number generator.
The nonce is incremented for each chunk read making sure each nonce is
unique for each block written.
-The chance of a nonce being re\-used is miniscule.
+The chance of a nonce being re\-used is minuscule.
If you wrote an exabyte of data (10¹⁸ bytes) you would have a
probability of approximately 2×10⁻³² of re\-using a nonce.
.SS Chunk
@@ -6062,9 +7412,9 @@ They are then encrypted with EME using AES with 256 bit key.
EME (ECB\-Mix\-ECB) is a wide\-block encryption mode presented in the
2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
.PP
-This makes for determinstic encryption which is what we want \- the same
-filename must encrypt to the same thing otherwise we can\[aq]t find it
-on the cloud storage system.
+This makes for deterministic encryption which is what we want \- the
+same filename must encrypt to the same thing otherwise we can\[aq]t find
+it on the cloud storage system.
.PP
This means that
.IP \[bu] 2
@@ -6089,13 +7439,13 @@ Drive).
.SS Key derivation
.PP
Rclone uses \f[C]scrypt\f[] with parameters \f[C]N=16384,\ r=8,\ p=1\f[]
-with a an optional user supplied salt (password2) to derive the 32+32+16
-= 80 bytes of key material required.
+with an optional user supplied salt (password2) to derive the 32+32+16 =
+80 bytes of key material required.
If the user doesn\[aq]t supply a salt then rclone uses an internal one.
.PP
\f[C]scrypt\f[] makes it impractical to mount a dictionary attack on
rclone encrypted data.
-For full protection agains this you should always use a salt.
+For full protection against this you should always use a salt.
.SS Dropbox
.PP
Paths are specified as \f[C]remote:path\f[]
@@ -6219,10 +7569,15 @@ is checked for all transfers.
Here are the command line options specific to this cloud storage system.
.SS \-\-dropbox\-chunk\-size=SIZE
.PP
-Upload chunk size.
-Max 150M.
-The default is 128MB.
-Note that this isn\[aq]t buffered into memory.
+Any files larger than this will be uploaded in chunks of this size.
+The default is 48MB.
+The maximum is 150MB.
+.PP
+Note that chunks are buffered in memory (one at a time) so rclone can
+deal with retries.
+Setting this larger will increase the speed slightly (at most 10% for
+128MB in tests) at the cost of using more memory.
+It can be set smaller if you are tight on memory.
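+.PP
+For example, to trade a little speed for lower memory use (the value is
+illustrative):
+.IP
+.nf
+\f[C]
+rclone\ copy\ \-\-dropbox\-chunk\-size\ 16M\ /home/source\ remote:backup
+\f[]
+.fi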
.SS Limitations
.PP
Note that Dropbox is case insensitive so you can\[aq]t have a file
@@ -6700,6 +8055,8 @@ Google\ Application\ Client\ Id\ \-\ leave\ blank\ normally.
client_id>
Google\ Application\ Client\ Secret\ \-\ leave\ blank\ normally.
client_secret>
+Service\ Account\ Credentials\ JSON\ file\ path\ \-\ needed\ only\ if\ you\ want\ use\ SA\ instead\ of\ interactive\ login.
+service_account_file>
Remote\ config
Use\ auto\ config?
\ *\ Say\ Y\ if\ not\ sure
@@ -6761,6 +8118,23 @@ To copy a local directory to a drive directory called backup
rclone\ copy\ /home/source\ remote:backup
\f[]
.fi
+.SS Service Account support
+.PP
+You can set up rclone with Google Drive in an unattended mode, i.e.
+not tied to a specific end\-user Google account.
+This is useful when you want to synchronise files onto machines that
+don\[aq]t have actively logged\-in users, for example build machines.
+.PP
+To create a service account and obtain its credentials, go to the Google
+Developer Console (https://console.developers.google.com) and use the
+"Create Credentials" button.
+After creating an account, a JSON file containing the Service
+Account\[aq]s credentials will be downloaded onto your machine.
+These credentials are what rclone will use for authentication.
+.PP
+To use a Service Account instead of OAuth2 token flow, enter the path to
+your Service Account credentials at the \f[C]service_account_file\f[]
+prompt and rclone won\[aq]t use the browser based authentication flow.
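+.PP
+The relevant part of the resulting config might look like this (the
+remote name and file path are illustrative):
+.IP
+.nf
+\f[C]
+[sa\-drive]
+type\ =\ drive
+service_account_file\ =\ /home/user/sa\-credentials.json
+\f[]
+.fi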
.SS Team drives
.PP
If you want to configure the remote to point to a Google Team Drive then
@@ -7118,10 +8492,10 @@ It doesn\[aq]t matter what Google account you use.
.IP "2." 3
Select a project or create a new project.
.IP "3." 3
-Under Overview, Google APIs, Google Apps APIs, click "Drive API", then
-"Enable".
+Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the then
+"Google Drive API".
.IP "4." 3
-Click "Credentials" in the left\-side panel (not "Go to credentials",
+Click "Credentials" in the left\-side panel (not "Create credentials",
which opens the wizard), then "Create credentials", then "OAuth client
ID".
It will prompt you to set the OAuth consent screen product name, if you
@@ -7995,6 +9369,9 @@ OVH Object
Storage (https://www.ovh.co.uk/public-cloud/storage/object-storage/)
.IP \[bu] 2
Oracle Cloud Storage (https://cloud.oracle.com/storage-opc)
+.IP \[bu] 2
+IBM Bluemix Cloud ObjectStorage
+Swift (https://console.bluemix.net/docs/infrastructure/objectstorage-swift/index.html)
.PP
Paths are specified as \f[C]remote:container\f[] (or \f[C]remote:\f[]
for the \f[C]lsd\f[] command.) You may put subdirectories in too, eg
@@ -8029,33 +9406,39 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "b2"
\ 4\ /\ Box
\ \ \ \\\ "box"
-\ 5\ /\ Dropbox
+\ 5\ /\ Cache\ a\ remote
+\ \ \ \\\ "cache"
+\ 6\ /\ Dropbox
\ \ \ \\\ "dropbox"
-\ 6\ /\ Encrypt/Decrypt\ a\ remote
+\ 7\ /\ Encrypt/Decrypt\ a\ remote
\ \ \ \\\ "crypt"
-\ 7\ /\ FTP\ Connection
+\ 8\ /\ FTP\ Connection
\ \ \ \\\ "ftp"
-\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ 9\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
\ \ \ \\\ "google\ cloud\ storage"
-\ 9\ /\ Google\ Drive
+10\ /\ Google\ Drive
\ \ \ \\\ "drive"
-10\ /\ Hubic
+11\ /\ Hubic
\ \ \ \\\ "hubic"
-11\ /\ Local\ Disk
+12\ /\ Local\ Disk
\ \ \ \\\ "local"
-12\ /\ Microsoft\ Azure\ Blob\ Storage
+13\ /\ Microsoft\ Azure\ Blob\ Storage
\ \ \ \\\ "azureblob"
-13\ /\ Microsoft\ OneDrive
+14\ /\ Microsoft\ OneDrive
\ \ \ \\\ "onedrive"
-14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+15\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
\ \ \ \\\ "swift"
-15\ /\ QingClound\ Object\ Storage
+16\ /\ Pcloud
+\ \ \ \\\ "pcloud"
+17\ /\ QingCloud\ Object\ Storage
\ \ \ \\\ "qingstor"
-16\ /\ SSH/SFTP\ Connection
+18\ /\ SSH/SFTP\ Connection
\ \ \ \\\ "sftp"
-17\ /\ Yandex\ Disk
+19\ /\ Webdav
+\ \ \ \\\ "webdav"
+20\ /\ Yandex\ Disk
\ \ \ \\\ "yandex"
-18\ /\ http\ Connection
+21\ /\ http\ Connection
\ \ \ \\\ "http"
Storage>\ swift
Get\ swift\ credentials\ from\ environment\ variables\ in\ standard\ OpenStack\ form.
@@ -8064,12 +9447,12 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "false"
\ 2\ /\ Get\ swift\ credentials\ from\ environment\ vars.\ Leave\ other\ fields\ blank\ if\ using\ this.
\ \ \ \\\ "true"
-env_auth>\ 1
-User\ name\ to\ log\ in.
-user>\ user_name
-API\ key\ or\ password.
-key>\ password_or_api_key
-Authentication\ URL\ for\ server.
+env_auth>\ true
+User\ name\ to\ log\ in\ (OS_USERNAME).
+user>\
+API\ key\ or\ password\ (OS_PASSWORD).
+key>\
+Authentication\ URL\ for\ server\ (OS_AUTH_URL).
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Rackspace\ US
\ \ \ \\\ "https://auth.api.rackspacecloud.com/v1.0"
@@ -8083,20 +9466,26 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "https://auth.storage.memset.com/v2.0"
\ 6\ /\ OVH
\ \ \ \\\ "https://auth.cloud.ovh.net/v2.0"
-auth>\ 1
-User\ domain\ \-\ optional\ (v3\ auth)
-domain>\ Default
-Tenant\ name\ \-\ optional\ for\ v1\ auth,\ required\ otherwise
-tenant>\ tenant_name
-Tenant\ domain\ \-\ optional\ (v3\ auth)
-tenant_domain>
-Region\ name\ \-\ optional
-region>
-Storage\ URL\ \-\ optional
-storage_url>
-AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version
-auth_version>
-Endpoint\ type\ to\ choose\ from\ the\ service\ catalogue
+auth>\
+User\ ID\ to\ log\ in\ \-\ optional\ \-\ most\ swift\ systems\ use\ user\ and\ leave\ this\ blank\ (v3\ auth)\ (OS_USER_ID).
+user_id>\
+User\ domain\ \-\ optional\ (v3\ auth)\ (OS_USER_DOMAIN_NAME)
+domain>\
+Tenant\ name\ \-\ optional\ for\ v1\ auth,\ this\ or\ tenant_id\ required\ otherwise\ (OS_TENANT_NAME\ or\ OS_PROJECT_NAME)
+tenant>\
+Tenant\ ID\ \-\ optional\ for\ v1\ auth,\ this\ or\ tenant\ required\ otherwise\ (OS_TENANT_ID)
+tenant_id>\
+Tenant\ domain\ \-\ optional\ (v3\ auth)\ (OS_PROJECT_DOMAIN_NAME)
+tenant_domain>\
+Region\ name\ \-\ optional\ (OS_REGION_NAME)
+region>\
+Storage\ URL\ \-\ optional\ (OS_STORAGE_URL)
+storage_url>\
+Auth\ Token\ from\ alternate\ authentication\ \-\ optional\ (OS_AUTH_TOKEN)
+auth_token>\
+AuthVersion\ \-\ optional\ \-\ set\ to\ (1,2,3)\ if\ your\ auth\ URL\ has\ no\ version\ (ST_AUTH_VERSION)
+auth_version>\
+Endpoint\ type\ to\ choose\ from\ the\ service\ catalogue\ (OS_ENDPOINT_TYPE)
Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ 1\ /\ Public\ (default,\ choose\ this\ if\ not\ sure)
\ \ \ \\\ "public"
@@ -8104,21 +9493,24 @@ Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
\ \ \ \\\ "internal"
\ 3\ /\ Admin
\ \ \ \\\ "admin"
-endpoint_type>
+endpoint_type>\
Remote\ config
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
-[remote]
-env_auth\ =\ false
-user\ =\ user_name
-key\ =\ password_or_api_key
-auth\ =\ https://auth.api.rackspacecloud.com/v1.0
-domain\ =\ Default
-tenant\ =
-tenant_domain\ =
-region\ =
-storage_url\ =
-auth_version\ =
-endpoint_type\ =
+[test]
+env_auth\ =\ true
+user\ =\
+key\ =\
+auth\ =\
+user_id\ =\
+domain\ =\
+tenant\ =\
+tenant_id\ =\
+tenant_domain\ =\
+region\ =\
+storage_url\ =\
+auth_token\ =\
+auth_version\ =\
+endpoint_type\ =\
\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
y)\ Yes\ this\ is\ OK
e)\ Edit\ this\ remote
@@ -8205,11 +9597,23 @@ of OpenStack environment variables.
When you run through the config, make sure you choose \f[C]true\f[] for
\f[C]env_auth\f[] and leave everything else blank.
.PP
-rclone will then set any empty config parameters from the enviroment
+rclone will then set any empty config parameters from the environment
using standard OpenStack environment variables.
There is a list of the
variables (https://godoc.org/github.com/ncw/swift#Connection.ApplyEnvironment)
in the docs for the swift library.
+.SS Using an alternate authentication method
+.PP
+If your OpenStack installation uses a non\-standard authentication
+method that might not yet be supported by rclone or the underlying
+swift library, you can authenticate externally (e.g. by manually
+calling the \f[C]openstack\f[] commands to get a token).
+Then, you just need to pass the two configuration variables
+\f[C]auth_token\f[] and \f[C]storage_url\f[].
+If they are both provided, the other variables are ignored.
+rclone will not try to authenticate but instead assume it is already
+authenticated and use these two variables to access the OpenStack
+installation.
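+.PP
+For example, after fetching a token externally, the relevant part of
+the config might look like this (the values are illustrative):
+.IP
+.nf
+\f[C]
+[myswift]
+type\ =\ swift
+env_auth\ =\ false
+auth_token\ =\ gAAAAABexampletoken
+storage_url\ =\ https://storage.example.com/v1/AUTH_tenant
+\f[]
+.fi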
.SS Using rclone without a config file
.PP
You can use rclone with swift without a config file, if desired, like
@@ -8265,6 +9669,155 @@ storage storage url and auth token
.PP
This is most likely caused by forgetting to specify your tenant when
setting up a swift remote.
+.SS pCloud
+.PP
+Paths are specified as \f[C]remote:path\f[]
+.PP
+Paths may be as deep as required, eg
+\f[C]remote:directory/subdirectory\f[].
+.PP
+The initial setup for pCloud involves getting a token from pCloud which
+you need to do in your browser.
+\f[C]rclone\ config\f[] walks you through it.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Box
+\ \ \ \\\ "box"
+\ 5\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 6\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 7\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 9\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+10\ /\ Hubic
+\ \ \ \\\ "hubic"
+11\ /\ Local\ Disk
+\ \ \ \\\ "local"
+12\ /\ Microsoft\ Azure\ Blob\ Storage
+\ \ \ \\\ "azureblob"
+13\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+15\ /\ Pcloud
+\ \ \ \\\ "pcloud"
+16\ /\ QingCloud\ Object\ Storage
+\ \ \ \\\ "qingstor"
+17\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+18\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+19\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ pcloud
+Pcloud\ App\ Client\ Id\ \-\ leave\ blank\ normally.
+client_id>\
+Pcloud\ App\ Client\ Secret\ \-\ leave\ blank\ normally.
+client_secret>\
+Remote\ config
+Use\ auto\ config?
+\ *\ Say\ Y\ if\ not\ sure
+\ *\ Say\ N\ if\ you\ are\ working\ on\ a\ remote\ or\ headless\ machine
+y)\ Yes
+n)\ No
+y/n>\ y
+If\ your\ browser\ doesn\[aq]t\ open\ automatically\ go\ to\ the\ following\ link:\ http://127.0.0.1:53682/auth
+Log\ in\ and\ authorize\ rclone\ for\ access
+Waiting\ for\ code...
+Got\ code
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+client_id\ =\
+client_secret\ =\
+token\ =\ {"access_token":"XXX","token_type":"bearer","expiry":"0001\-01\-01T00:00:00Z"}
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+See the remote setup docs (https://rclone.org/remote_setup/) for how to
+set it up on a machine with no Internet browser available.
+.PP
+Note that rclone runs a webserver on your local machine to collect the
+token as returned from pCloud.
+This only runs from the moment it opens your browser to the moment you
+get back the verification code.
+This is on \f[C]http://127.0.0.1:53682/\f[] and it may require you
+to unblock it temporarily if you are running a host firewall.
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level of your pCloud
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your pCloud
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a pCloud directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS Modified time and hashes
+.PP
+pCloud allows modification times to be set on objects accurate to 1
+second.
+These will be used to detect whether objects need syncing or not.
+In order to set a Modification time pCloud requires the object be
+re\-uploaded.
+.PP
+pCloud supports MD5 and SHA1 type hashes, so you can use the
+\f[C]\-\-checksum\f[] flag.
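+.PP
+For example, to sync based on checksums rather than modification times
+(the paths are illustrative):
+.IP
+.nf
+\f[C]
+rclone\ sync\ \-\-checksum\ /home/source\ remote:backup
+\f[]
+.fi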
+.SS Deleting files
+.PP
+Deleted files will be moved to the trash.
+Your subscription level will determine how long items stay in the trash.
+\f[C]rclone\ cleanup\f[] can be used to empty the trash.
.SS SFTP
.PP
SFTP is the Secure (or SSH) File Transfer
@@ -8275,9 +9828,9 @@ It runs over SSH v2 and is standard with most modern SSH installations.
Paths are specified as \f[C]remote:path\f[].
If the path does not begin with a \f[C]/\f[] it is relative to the home
directory of the user.
-An empty path \f[C]remote:\f[] refers to the users home directory.
+An empty path \f[C]remote:\f[] refers to the user\[aq]s home directory.
.PP
-Here is an example of making a SFTP configuration.
+Here is an example of making an SFTP configuration.
First run
.IP
.nf
@@ -8361,7 +9914,7 @@ y/e/d>\ y
\f[]
.fi
.PP
-This remote is called \f[C]remote\f[] and can now be used like this
+This remote is called \f[C]remote\f[] and can now be used like this:
.PP
See all directories in the home directory
.IP
@@ -8397,7 +9950,7 @@ rclone\ sync\ /home/local/directory\ remote:directory
.fi
.SS SSH Authentication
.PP
-The SFTP remote supports 3 authentication methods
+The SFTP remote supports three authentication methods:
.IP \[bu] 2
Password
.IP \[bu] 2
@@ -8408,8 +9961,8 @@ ssh\-agent
Key files should be unencrypted PEM\-encoded private key files.
For instance \f[C]/home/$USER/.ssh/id_rsa\f[].
.PP
-If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then it will
-attempt to contact an ssh\-agent.
+If you don\[aq]t specify \f[C]pass\f[] or \f[C]key_file\f[] then rclone
+will attempt to contact an ssh\-agent.
.SS ssh\-agent on macOS
.PP
Note that there seem to be various problems with using an ssh\-agent on
@@ -8443,7 +9996,15 @@ SFTP supports checksums if the same login has shell access and
\f[C]md5sum\f[] or \f[C]sha1sum\f[] as well as \f[C]echo\f[] are in the
remote\[aq]s PATH.
.PP
-The only ssh agent supported under Windows is Putty\[aq]s pagent.
+The only ssh agent supported under Windows is Putty\[aq]s pageant.
+.PP
+The Go SSH library disables the use of the aes128\-cbc cipher by
+default, due to security concerns.
+This can be re\-enabled on a per\-connection basis by setting the
+\f[C]use_insecure_cipher\f[] setting in the configuration file to
+\f[C]true\f[].
+Further details on the insecurity of this cipher can be found in this
+paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
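+.PP
+For example, the relevant part of the config file might look like this
+(the host and user are illustrative):
+.IP
+.nf
+\f[C]
+[remote]
+type\ =\ sftp
+host\ =\ example.com
+user\ =\ sftpuser
+use_insecure_cipher\ =\ true
+\f[]
+.fi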
.PP
SFTP isn\[aq]t supported under plan9 until this
issue (https://github.com/pkg/sftp/issues/156) is fixed.
@@ -8454,6 +10015,196 @@ work with it: \f[C]\-\-dump\-headers\f[], \f[C]\-\-dump\-bodies\f[],
.PP
Note that \f[C]\-\-timeout\f[] isn\[aq]t supported (but
\f[C]\-\-contimeout\f[] is).
+.SS WebDAV
+.PP
+Paths are specified as \f[C]remote:path\f[]
+.PP
+Paths may be as deep as required, eg
+\f[C]remote:directory/subdirectory\f[].
+.PP
+To configure the WebDAV remote you will need to have a URL for it, and a
+username and password.
+If you know what kind of system you are connecting to then rclone can
+enable extra features.
+.PP
+Here is an example of how to make a remote called \f[C]remote\f[].
+First run:
+.IP
+.nf
+\f[C]
+\ rclone\ config
+\f[]
+.fi
+.PP
+This will guide you through an interactive setup process:
+.IP
+.nf
+\f[C]
+No\ remotes\ found\ \-\ make\ a\ new\ one
+n)\ New\ remote
+s)\ Set\ configuration\ password
+q)\ Quit\ config
+n/s/q>\ n
+name>\ remote
+Type\ of\ storage\ to\ configure.
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Amazon\ Drive
+\ \ \ \\\ "amazon\ cloud\ drive"
+\ 2\ /\ Amazon\ S3\ (also\ Dreamhost,\ Ceph,\ Minio)
+\ \ \ \\\ "s3"
+\ 3\ /\ Backblaze\ B2
+\ \ \ \\\ "b2"
+\ 4\ /\ Box
+\ \ \ \\\ "box"
+\ 5\ /\ Dropbox
+\ \ \ \\\ "dropbox"
+\ 6\ /\ Encrypt/Decrypt\ a\ remote
+\ \ \ \\\ "crypt"
+\ 7\ /\ FTP\ Connection
+\ \ \ \\\ "ftp"
+\ 8\ /\ Google\ Cloud\ Storage\ (this\ is\ not\ Google\ Drive)
+\ \ \ \\\ "google\ cloud\ storage"
+\ 9\ /\ Google\ Drive
+\ \ \ \\\ "drive"
+10\ /\ Hubic
+\ \ \ \\\ "hubic"
+11\ /\ Local\ Disk
+\ \ \ \\\ "local"
+12\ /\ Microsoft\ Azure\ Blob\ Storage
+\ \ \ \\\ "azureblob"
+13\ /\ Microsoft\ OneDrive
+\ \ \ \\\ "onedrive"
+14\ /\ Openstack\ Swift\ (Rackspace\ Cloud\ Files,\ Memset\ Memstore,\ OVH)
+\ \ \ \\\ "swift"
+15\ /\ Pcloud
+\ \ \ \\\ "pcloud"
+16\ /\ QingCloud\ Object\ Storage
+\ \ \ \\\ "qingstor"
+17\ /\ SSH/SFTP\ Connection
+\ \ \ \\\ "sftp"
+18\ /\ WebDAV
+\ \ \ \\\ "webdav"
+19\ /\ Yandex\ Disk
+\ \ \ \\\ "yandex"
+20\ /\ http\ Connection
+\ \ \ \\\ "http"
+Storage>\ webdav
+URL\ of\ http\ host\ to\ connect\ to
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Connect\ to\ example.com
+\ \ \ \\\ "https://example.com"
+url>\ https://example.com/remote.php/webdav/
+Name\ of\ the\ WebDAV\ site/service/software\ you\ are\ using
+Choose\ a\ number\ from\ below,\ or\ type\ in\ your\ own\ value
+\ 1\ /\ Nextcloud
+\ \ \ \\\ "nextcloud"
+\ 2\ /\ Owncloud
+\ \ \ \\\ "owncloud"
+\ 3\ /\ Other\ site/service\ or\ software
+\ \ \ \\\ "other"
+vendor>\ 1
+User\ name
+user>\ user
+Password.
+y)\ Yes\ type\ in\ my\ own\ password
+g)\ Generate\ random\ password
+n)\ No\ leave\ this\ optional\ password\ blank
+y/g/n>\ y
+Enter\ the\ password:
+password:
+Confirm\ the\ password:
+password:
+Remote\ config
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+[remote]
+url\ =\ https://example.com/remote.php/webdav/
+vendor\ =\ nextcloud
+user\ =\ user
+pass\ =\ ***\ ENCRYPTED\ ***
+\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
+y)\ Yes\ this\ is\ OK
+e)\ Edit\ this\ remote
+d)\ Delete\ this\ remote
+y/e/d>\ y
+\f[]
+.fi
+.PP
+Once configured you can then use \f[C]rclone\f[] like this,
+.PP
+List directories in top level of your WebDAV
+.IP
+.nf
+\f[C]
+rclone\ lsd\ remote:
+\f[]
+.fi
+.PP
+List all the files in your WebDAV
+.IP
+.nf
+\f[C]
+rclone\ ls\ remote:
+\f[]
+.fi
+.PP
+To copy a local directory to a WebDAV directory called backup
+.IP
+.nf
+\f[C]
+rclone\ copy\ /home/source\ remote:backup
+\f[]
+.fi
+.SS Modified time and hashes
+.PP
+Plain WebDAV does not support modified times.
+However when used with Owncloud or Nextcloud rclone will support
+modified times.
+.PP
+Hashes are not supported.
+.SS Owncloud
+.PP
+Click on the settings cog in the bottom right of the page and this will
+show the WebDAV URL that rclone needs in the config step.
+It will look something like
+\f[C]https://example.com/remote.php/webdav/\f[].
+.PP
+Owncloud supports modified times using the \f[C]X\-OC\-Mtime\f[] header.
+.SS Nextcloud
+.PP
+This is configured in an identical way to Owncloud.
+Note that Nextcloud does not support streaming of files (\f[C]rcat\f[])
+whereas Owncloud does.
+This may be
+fixed (https://github.com/nextcloud/nextcloud-snap/issues/365) in the
+future.
+.SS Put.io
+.PP
+put.io can be accessed in a read\-only way using WebDAV.
+.PP
+Configure the \f[C]url\f[] as \f[C]https://webdav.put.io\f[] and use
+your normal account username and password for \f[C]user\f[] and
+\f[C]pass\f[].
+Set the \f[C]vendor\f[] to \f[C]other\f[].
+.PP
+Your config file should end up looking like this:
+.IP
+.nf
+\f[C]
+[putio]
+type\ =\ webdav
+url\ =\ https://webdav.put.io
+vendor\ =\ other
+user\ =\ YourUserName
+pass\ =\ encryptedpassword
+\f[]
+.fi
+.PP
+If you are using \f[C]put.io\f[] with \f[C]rclone\ mount\f[] then use
+the \f[C]\-\-read\-only\f[] flag to signal to the OS that it can\[aq]t
+write to the mount.
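+.PP
+For example (the mount point is illustrative):
+.IP
+.nf
+\f[C]
+rclone\ mount\ \-\-read\-only\ putio:\ /mnt/putio
+\f[]
+.fi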
+.PP
+For more help see the put.io webdav
+docs (http://help.put.io/apps-and-integrations/ftp-and-webdav).
.SS Yandex Disk
.PP
Yandex Disk (https://disk.yandex.com) is a cloud storage solution
@@ -8632,7 +10383,7 @@ latin1) then you can use the \f[C]convmv\f[] tool to convert the
filesystem to UTF\-8.
This tool is available in most distributions\[aq] package managers.
.PP
-If an invalid (non\-UTF8) filename is read, the invalid caracters will
+If an invalid (non\-UTF8) filename is read, the invalid characters will
be replaced with the unicode replacement character, \[aq]�\[aq].
\f[C]rclone\f[] will emit a debug message in this case (use \f[C]\-v\f[]
to see), eg
@@ -8743,7 +10494,7 @@ with unicode normalization in the sync routine instead.
This tells rclone to stay in the filesystem specified by the root and
not to recurse into different file systems.
.PP
-For example if you have a directory heirachy like this
+For example if you have a directory hierarchy like this
.IP
.nf
\f[C]
@@ -8792,6 +10543,186 @@ This flag disables warning messages on skipped symlinks or junction
points, as you explicitly acknowledge that they should be skipped.
.SS Changelog
.IP \[bu] 2
+v1.39 \- 2017\-12\-23
+.RS 2
+.IP \[bu] 2
+New backends
+.IP \[bu] 2
+WebDAV
+.RS 2
+.IP \[bu] 2
+tested with nextcloud, owncloud, put.io and others!
+.RE
+.IP \[bu] 2
+Pcloud
+.IP \[bu] 2
+cache \- wraps a cache around other backends (Remus Bunduc)
+.RS 2
+.IP \[bu] 2
+useful in combination with mount
+.IP \[bu] 2
+NB this feature is in beta so use with care
+.RE
+.IP \[bu] 2
+New commands
+.IP \[bu] 2
+serve command with subcommands:
+.RS 2
+.IP \[bu] 2
+serve webdav: this implements a webdav server for any rclone remote.
+.IP \[bu] 2
+serve http: command to serve a remote over HTTP
+.RE
+.IP \[bu] 2
+config: add sub commands for full config file management
+.RS 2
+.IP \[bu] 2
+create/delete/dump/edit/file/password/providers/show/update
+.RE
+.IP \[bu] 2
+touch: to create or update the timestamp of a file (Jakub Tasiemski)
+.IP \[bu] 2
+New Features
+.IP \[bu] 2
+curl install for rclone (Filip Bartodziej)
+.IP \[bu] 2
+\-\-stats now shows percentage, size, rate and ETA in condensed form
+(Ishuah Kariuki)
+.IP \[bu] 2
+\-\-exclude\-if\-present to exclude a directory if a file is present
+(Iakov Davydov)
+.IP \[bu] 2
+rmdirs: add \-\-leave\-root flag (lewpam)
+.IP \[bu] 2
+move: add \-\-delete\-empty\-src\-dirs flag to remove dirs after move
+(Ishuah Kariuki)
+.IP \[bu] 2
+Add \-\-dump flag, introduce \-\-dump requests, responses and remove
+\-\-dump\-auth, \-\-dump\-filters
+.RS 2
+.IP \[bu] 2
+Obscure X\-Auth\-Token: from headers when dumping too
+.RE
+.IP \[bu] 2
+Document and implement exit codes for different failure modes (Ishuah
+Kariuki)
+.IP \[bu] 2
+Compile
+.IP \[bu] 2
+Bug Fixes
+.IP \[bu] 2
+Retry lots more different types of errors to make multipart transfers
+more reliable
+.IP \[bu] 2
+Save the config before asking for a token, fixes disappearing oauth
+config
+.IP \[bu] 2
+Warn the user if \-\-include and \-\-exclude are used together (Ernest
+Borowski)
+.IP \[bu] 2
+Fix duplicate files (eg on Google drive) causing spurious copies
+.IP \[bu] 2
+Allow trailing and leading whitespace for passwords (Jason Rose)
+.IP \[bu] 2
+ncdu: fix crashes on empty directories
+.IP \[bu] 2
+rcat: fix goroutine leak
+.IP \[bu] 2
+moveto/copyto: Fix to allow copying to the same name
+.IP \[bu] 2
+Mount
+.IP \[bu] 2
+\-\-vfs\-cache mode to make writes into mounts more reliable.
+.RS 2
+.IP \[bu] 2
+this requires caching files on the disk (see \-\-cache\-dir)
+.IP \[bu] 2
+As this is a new feature, use with care
+.RE
+.IP \[bu] 2
+Use sdnotify to signal systemd the mount is ready (Fabian Möller)
+.IP \[bu] 2
+Check if directory is not empty before mounting (Ernest Borowski)
+.IP \[bu] 2
+Local
+.IP \[bu] 2
+Add error message for cross file system moves
+.IP \[bu] 2
+Fix equality check for times
+.IP \[bu] 2
+Dropbox
+.IP \[bu] 2
+Rework multipart upload
+.RS 2
+.IP \[bu] 2
+buffer the chunks when uploading large files so they can be retried
+.IP \[bu] 2
+change default chunk size to 48MB now we are buffering them in memory
+.IP \[bu] 2
+retry every error after the first chunk is done successfully
+.RE
+.IP \[bu] 2
+Fix error when renaming directories
+.IP \[bu] 2
+Swift
+.IP \[bu] 2
+Fix crash on bad authentication
+.IP \[bu] 2
+Google Drive
+.IP \[bu] 2
+Add service account support (Tim Cooijmans)
+.IP \[bu] 2
+S3
+.IP \[bu] 2
+Make it work properly with Digital Ocean Spaces (Andrew
+Starr\-Bochicchio)
+.IP \[bu] 2
+Fix crash if a bad listing is received
+.IP \[bu] 2
+Add support for ECS task IAM roles (David Minor)
+.IP \[bu] 2
+Backblaze B2
+.IP \[bu] 2
+Fix multipart upload retries
+.IP \[bu] 2
+Fix \-\-hard\-delete to make it work 100% of the time
+.IP \[bu] 2
+Swift
+.IP \[bu] 2
+Allow authentication with storage URL and auth key (Giovanni Pizzi)
+.IP \[bu] 2
+Add new fields for swift configuration to support IBM Bluemix Swift
+(Pierre Carlson)
+.IP \[bu] 2
+Add OS_TENANT_ID and OS_USER_ID to config
+.IP \[bu] 2
+Allow configs with user id instead of user name
+.IP \[bu] 2
+Check if swift segments container exists before creating (John Leach)
+.IP \[bu] 2
+Fix memory leak in swift transfers (upstream fix)
+.IP \[bu] 2
+SFTP
+.IP \[bu] 2
+Add option to enable the use of aes128\-cbc cipher (Jon Fautley)
+.IP \[bu] 2
+Amazon cloud drive
+.IP \[bu] 2
+Fix download of large files failing with "Only one auth mechanism
+allowed"
+.IP \[bu] 2
+crypt
+.IP \[bu] 2
+Option to encrypt directory names or leave them intact
+.IP \[bu] 2
+Implement DirChangeNotify (Fabian Möller)
+.IP \[bu] 2
+onedrive
+.IP \[bu] 2
+Add option to choose resourceURL during setup of OneDrive Business
+account if more than one is available for user
+.RE
+.IP \[bu] 2
v1.38 \- 2017\-09\-30
.RS 2
.IP \[bu] 2
@@ -11012,6 +12943,50 @@ Girish Ramakrishnan
LingMan
.IP \[bu] 2
Jacob McNamee
+.IP \[bu] 2
+jersou
+.IP \[bu] 2
+thierry
+.IP \[bu] 2
+Simon Leinen
+.IP \[bu] 2
+Dan Dascalescu
+.IP \[bu] 2
+Jason Rose
+.IP \[bu] 2
+Andrew Starr\-Bochicchio
+.IP \[bu] 2
+John Leach
+.IP \[bu] 2
+Corban Raun
+.IP \[bu] 2
+Pierre Carlson
+.IP \[bu] 2
+Ernest Borowski
+.IP \[bu] 2
+Remus Bunduc
+.IP \[bu] 2
+Iakov Davydov
+.IP \[bu] 2
+Fabian Möller
+.IP \[bu] 2
+Jakub Tasiemski
+.IP \[bu] 2
+David Minor
+.IP \[bu] 2
+Tim Cooijmans
+.IP \[bu] 2
+Laurence
+.IP \[bu] 2
+Giovanni Pizzi
+.IP \[bu] 2
+Filip Bartodziej
+.IP \[bu] 2
+Jon Fautley
+.IP \[bu] 2
+lewapm <32110057+lewapm@users.noreply.github.com>
+.IP \[bu] 2
+Yassine Imounachen
.SH Contact the rclone project
.SS Forum
.PP
@@ -11033,7 +13008,7 @@ Rclone has a Google+ page which announcements are posted to
Google+ page for general comments
.SS Twitter
.PP
-You can also follow me on twitter for rclone announcments
+You can also follow me on twitter for rclone announcements
.IP \[bu] 2
[\@njcw](https://twitter.com/njcw)
.SS Email