diff --git a/MANUAL.html b/MANUAL.html
index e15d044c0..878b84640 100644
--- a/MANUAL.html
+++ b/MANUAL.html
@@ -17,7 +17,7 @@
-Jun 24, 2020
+Aug 07, 2020
rclone(1) User Manual
-Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors’ web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
-Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone’s familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API.
-Users call rclone “The Swiss army knife of cloud storage”, and “Technology indistinguishable from magic”.
+Rclone is a command line program to manage files on cloud storage. It is a feature rich alternative to cloud vendors' web storage interfaces. Over 40 cloud storage products support rclone including S3 object stores, business & consumer file storage services, as well as standard transfer protocols.
+Rclone has powerful cloud equivalents to the unix commands rsync, cp, mv, mount, ls, ncdu, tree, rm, and cat. Rclone's familiar syntax includes shell pipeline support, and --dry-run protection. It is used at the command line, in scripts or via its API.
+Users call rclone "The Swiss army knife of cloud storage", and "Technology indistinguishable from magic".
Rclone really looks after your data. It preserves timestamps and verifies checksums at all times. Transfers over limited bandwidth; intermittent connections, or subject to quota can be restarted, from the last good file transferred. You can check the integrity of your files. Where possible, rclone employs server side transfers to minimise local bandwidth use and transfers from one provider to another without using local disk.
Virtual backends wrap local and cloud file systems to apply encryption, caching, chunking and joining.
Rclone mounts any local, cloud or virtual filesystem as a disk on Windows, macOS, linux and FreeBSD, and also serves these over SFTP, HTTP, WebDAV, FTP and DLNA.
@@ -145,7 +145,7 @@ curl https://rclone.org/install.sh | sudo bash
For beta installation, run:
curl https://rclone.org/install.sh | sudo bash -s beta
-Note that this script checks the version of rclone installed first and won’t re-download if not needed.
+Note that this script checks the version of rclone installed first and won't re-download if not needed.
Fetch and unpack
curl -O https://downloads.rclone.org/rclone-current-linux-amd64.zip
@@ -199,7 +199,7 @@ rclone v1.49.1
You need to mount the host rclone config dir at /config/rclone into the Docker container. Due to the fact that rclone updates tokens inside its config file, and that the update process involves a file rename, you need to mount the whole host rclone config dir, not just the single host rclone config file.
You need to mount a host data dir at /data into the Docker container.
By default, the rclone binary inside a Docker container runs with UID=0 (root). As a result, all files created in a run will have UID=0. If your config and data files reside on the host with a non-root UID:GID, you need to pass these on the container start command line.
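For example, a sketch of a run with the host user's IDs passed through (the paths and the listremotes subcommand are illustrative):
docker run --rm --user $(id -u):$(id -g) -v ~/.config/rclone:/config/rclone -v ~/data:/data rclone/rclone listremotes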
-It is possible to use rclone mount inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact docker run options to do that might vary slightly between hosts. See, e.g. the discussion in this thread.
+It is possible to use rclone mount inside a userspace Docker container, and expose the resulting fuse mount to the host. The exact docker run options to do that might vary slightly between hosts. See, e.g. the discussion in this thread.
You also need to mount the host /etc/passwd and /etc/group for fuse to work inside the container.
Here are some commands tested on an Ubuntu 18.04.3 host:
@@ -227,16 +227,19 @@ docker run --rm \
ls ~/data/mount
kill %1
-Make sure you have at least Go 1.7 installed. Download go if necessary. The latest release is recommended. Then
+Make sure you have at least Go 1.10 installed. Download go if necessary. The latest release is recommended. Then
git clone https://github.com/rclone/rclone.git
cd rclone
go build
./rclone version
-You can also build and install rclone in the GOPATH (which defaults to ~/go) with:
-go get -u -v github.com/rclone/rclone
-and this will build the binary in $GOPATH/bin (~/go/bin/rclone by default) after downloading the source to $GOPATH/src/github.com/rclone/rclone (~/go/src/github.com/rclone/rclone by default).
This will leave you a checked out version of rclone you can modify and send pull requests with. If you use make instead of go build then the rclone build will have the correct version information in it.
You can also build the latest stable rclone with:
+go get github.com/rclone/rclone
+or the latest version (equivalent to the beta) with
+go get github.com/rclone/rclone@master
+These will build the binary in $(go env GOPATH)/bin (~/go/bin/rclone by default) after downloading the source to the go module cache. Note - do not use the -u flag here. This causes go to try to update the dependencies that rclone uses and sometimes these don't work with the current version of rclone.
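You can then check the resulting binary, for example:
$(go env GOPATH)/bin/rclone version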
-This can be done with Stefan Weichinger’s ansible role.
+This can be done with Stefan Weichinger's ansible role.
Instructions
git clone https://github.com/stefangweichinger/ansible-rclone.git
into your local roles-directory
-First, you’ll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)
+First, you'll need to configure rclone. As the object storage systems have quite complicated authentication these are kept in a config file. (See the --config entry for how to find the config file and choose its location.)
The easiest way to make the config is to run rclone with the config option:
rclone config
See the following for detailed instructions for
@@ -295,7 +298,7 @@ go build
Rclone syncs a directory tree from one storage system to another.
Its syntax is like this
Syntax: [options] subcommand <parameters> <parameters...>
-Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg “drive:myfolder” to look at “myfolder” in Google drive.
+Source and destination paths are specified by the name you gave the storage system in the config file then the sub path, eg "drive:myfolder" to look at "myfolder" in Google drive.
You can define as many storage paths as you like in the config file.
rclone uses a system of subcommands. For example
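A couple of typical invocations (the remote and path names are illustrative):
rclone ls remote:path # lists a remote
rclone copy /local/path remote:path # copies /local/path to the remote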
@@ -329,12 +332,12 @@ rclone sync /local/path remote:path # syncs /local/path to the remote
rclone copy
Copy files from source to dest, skipping already copied
-Copy the source to the destination. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Doesn’t delete files from the destination.
-Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it’s the contents of source:path that are copied, not the directory name and contents.
-If dest:path doesn’t exist, it is created and the source:path contents go there.
+Copy the source to the destination. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Doesn't delete files from the destination.
+Note that it is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents.
+If dest:path doesn't exist, it is created and the source:path contents go there.
For example
rclone copy source:sourcepath dest:destpath
-Let’s say there are two files in sourcepath
+Let's say there are two files in sourcepath
sourcepath/one.txt
sourcepath/two.txt
This copies them to
@@ -343,8 +346,8 @@ destpath/two.txt
Not to
destpath/sourcepath/one.txt
destpath/sourcepath/two.txt
-If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning “copy the contents of this directory”. This applies to all commands and whether you are talking about the source or destination.
-See the –no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.
+If you are familiar with rsync, rclone always works as if you had written a trailing / - meaning "copy the contents of this directory". This applies to all commands and whether you are talking about the source or destination.
+See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when copying a small number of files into a large destination can speed transfers up greatly.
For example, if you have many files in /path/to/src but only a few of them change every day, you can copy all the files which have changed recently very efficiently like this:
rclone copy --max-age 24h --no-traverse /path/to/src remote:
Note: Use the -P/--progress flag to view real-time transfer statistics
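For instance, to watch progress while copying (paths as in the example above):
rclone copy -P source:sourcepath dest:destpath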
Make source and dest identical, modifying destination only.
-Sync the source to the destination, changing the destination only. Doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.
+Sync the source to the destination, changing the destination only. Doesn't transfer unchanged files, testing by size and modification time or MD5SUM. Destination is updated to match source, including deleting files if necessary.
Important: Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.
-Note that files in the destination won’t be deleted if there were any errors at any point.
-It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it’s the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.
-If dest:path doesn’t exist, it is created and the source:path contents go there.
+Note that files in the destination won't be deleted if there were any errors at any point.
+It is always the contents of the directory that is synced, not the directory so when source:path is a directory, it's the contents of source:path that are copied, not the directory name and contents. See extended explanation in the copy command above if unsure.
+If dest:path doesn't exist, it is created and the source:path contents go there.
Note: Use the -P/--progress flag to view real-time transfer statistics
rclone sync source:path dest:path [flags]
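For example, preview first and then run the sync (names illustrative):
rclone sync --dry-run /local/path remote:path
rclone sync /local/path remote:path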
Moves the contents of the source directory to the destination directory. Rclone will error if the source and destination overlap and the remote does not support a server side directory move operation.
If no filters are in use and if possible this will server side move source:path into dest:path. After this source:path will no longer exist.
Otherwise for each file in source:path selected by the filters (if any) this will move it into dest:path. If possible a server side move will be used, otherwise it will copy it (server side if possible) into dest:path then delete the original (if no errors on copy) in source:path.
-If you want to delete empty source directories after move, use the –delete-empty-src-dirs flag.
-See the –no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.
-Important: Since this can cause data loss, test first with the –dry-run flag.
+If you want to delete empty source directories after move, use the --delete-empty-src-dirs flag.
+See the --no-traverse option for controlling whether rclone lists the destination directory or not. Supplying this option when moving a small number of files into a large destination can speed transfers up greatly.
+Important: Since this can cause data loss, test first with the --dry-run flag.
Note: Use the -P/--progress flag to view real-time transfer statistics.
rclone move source:path dest:path [flags]
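For example, moving a tree and cleaning up emptied source directories (a sketch; names illustrative):
rclone move --delete-empty-src-dirs source:path dest:path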
Remove the files in path. Unlike purge it obeys include/exclude filters so can be used to selectively delete files.
rclone delete only deletes objects but leaves the directory structure alone. If you want to delete a directory and all of its contents use rclone purge
-If you supply the –rmdirs flag, it will remove all empty directories along with it.
+If you supply the --rmdirs flag, it will remove all empty directories along with it.
Eg delete all files bigger than 100MBytes
Check what would be deleted first (use either)
rclone --min-size 100M lsl remote:path
rclone --dry-run --min-size 100M delete remote:path
Then delete
rclone --min-size 100M delete remote:path
-That reads “delete everything with a minimum size of 100 MB”, hence delete all files bigger than 100MBytes.
+That reads "delete everything with a minimum size of 100 MB", hence delete all files bigger than 100MBytes.
rclone delete remote:path [flags]
-h, --help help for delete
@@ -430,9 +433,9 @@ rclone --dry-run --min-size 100M delete remote:path
-Make the path if it doesn’t already exist.
+Make the path if it doesn't already exist.
-Make the path if it doesn’t already exist.
+Make the path if it doesn't already exist.
rclone mkdir remote:path [flags]
-h, --help help for mkdir
@@ -444,7 +447,7 @@ rclone --dry-run --min-size 100M delete remote:path
Remove the path if empty.
-Remove the path. Note that you can’t remove a path with objects in it, use purge for that.
+Remove the path. Note that you can't remove a path with objects in it, use purge for that.
rclone rmdir remote:path [flags]
-h, --help help for rmdir
@@ -456,10 +459,10 @@ rclone --dry-run --min-size 100M delete remote:path
Checks the files in the source and destination match.
-Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don’t match. It doesn’t alter the source or destination.
-If you supply the –size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
-If you supply the –download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don’t support hashes or if you really want to check all the data.
-If you supply the –one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
+Checks the files in the source and destination match. It compares sizes and hashes (MD5 or SHA1) and logs a report of files which don't match. It doesn't alter the source or destination.
+If you supply the --size-only flag, it will only compare the sizes not the hashes as well. Use this for a quick check.
+If you supply the --download flag, it will download the data from both remotes and check them against each other on the fly. This can be useful for remotes that don't support hashes or if you really want to check all the data.
+If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
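For instance, a quick one-way comparison by size only (a sketch; names illustrative):
rclone check --one-way --size-only source:path dest:path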
rclone check source:path dest:path [flags]
--download Check by downloading rather than with hash.
@@ -490,9 +493,9 @@ rclone --dry-run --min-size 100M delete remote:path
lsjson to list objects and directories in JSON format
ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
-Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.
-The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.
+Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone ls remote:path [flags]
-h, --help help for ls
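For example, to list only the top level of a remote (name illustrative):
rclone ls --max-depth 1 remote:path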
@@ -514,7 +517,7 @@ rclone --dry-run --min-size 100M delete remote:path
-1 2016-10-17 17:41:53 -1 1000files
-1 2017-01-03 14:40:54 -1 2500files
-1 2017-07-08 14:39:28 -1 4000files
-If you just want the directory names use “rclone lsf –dirs-only”.
+If you just want the directory names use "rclone lsf --dirs-only".
Any of the filtering options can be applied to this command.
There are several related list commands
lsjson to list objects and directories in JSON format
ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
-Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.
-The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.
+Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsd remote:path [flags]
-h, --help help for lsd
@@ -557,9 +560,9 @@ rclone --dry-run --min-size 100M delete remote:path
lsjson to list objects and directories in JSON format
ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
-Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.
-The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.
+Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsl remote:path [flags]
-h, --help help for lsl
@@ -614,7 +617,7 @@ rclone --dry-run --min-size 100M delete remote:path
rclone v1.41
- os/arch: linux/amd64
- go version: go1.10
-If you supply the –check flag, then it will do an online check to compare your version with the latest release and the latest beta.
+If you supply the --check flag, then it will do an online check to compare your version with the latest release and the latest beta.
$ rclone version --check
yours: 1.42.0.6
latest: 1.42 (released 2018-06-16)
@@ -739,13 +742,13 @@ Other: 8.241G
Note that not all the backends provide all the fields - they will be missing if they are not known for that backend. Where it is known that the value is unlimited the value will also be omitted.
-Use the –full flag to see the numbers written out in full, eg
+Use the --full flag to see the numbers written out in full, eg
Total: 18253611008
Used: 7993453766
Free: 1411001220
Trashed: 104857602
Other: 8849156022
-Use the –json flag for a computer readable output, eg
+Use the --json flag for a computer readable output, eg
{
"total": 18253611008,
"used": 7993453766,
@@ -767,7 +770,7 @@ Other: 8849156022
Remote authorization.
Remote authorization. Used to authorize a remote or headless rclone from a machine with a browser - use as instructed by rclone config.
-Use the –auth-no-open-browser to prevent rclone to open auth link in default browser automatically.
+Use the --auth-no-open-browser to prevent rclone to open auth link in default browser automatically.
rclone authorize [flags]
--auth-no-open-browser Do not automatically open auth link in default browser
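For example, on a headless machine you might run (backend name illustrative):
rclone authorize --auth-no-open-browser "drive"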
@@ -780,7 +783,7 @@ Other: 8849156022
Run a backend specific command.
-This runs a backend specific command. The commands themselves (except for “help” and “features”) are defined by the backends and you should see the backend docs for definitions.
+This runs a backend specific command. The commands themselves (except for "help" and "features") are defined by the backends and you should see the backend docs for definitions.
You can discover what commands a backend implements by using
rclone backend help remote:
rclone backend help <backendname>
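For example, to see what optional features a remote supports (a sketch):
rclone backend features remote: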
@@ -811,7 +814,7 @@ rclone backend help <backendname>
rclone cat remote:path/to/dir
Or like this to output any .txt files in dir or its subdirectories.
rclone --include "*.txt" cat remote:path/to/dir
-Use the –head flag to print characters only at the start, –tail for the end and –offset and –count to print a section in the middle. Note that if offset is negative it will count from the end, so –offset -1 –count 1 is equivalent to –tail 1.
+Use the --head flag to print characters only at the start, --tail for the end and --offset and --count to print a section in the middle. Note that if offset is negative it will count from the end, so --offset -1 --count 1 is equivalent to --tail 1.
rclone cat remote:path [flags]
--count int Only print N characters. (default -1)
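For instance, to print just the first 100 characters of a file (path illustrative):
rclone cat --head 100 remote:path/to/file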
@@ -832,8 +835,8 @@ rclone backend help <backendname>
For example to make a swift remote of name myremote using auto config you would do:
rclone config create myremote swift env_auth true
Note that if the config process would normally ask a question the default is taken. Each time that happens rclone will print a message saying how to affect the value taken.
-If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren’t already obscured before putting them in the config file.
-NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the “–obscure” flag, or if you are 100% certain you are already passing obscured passwords then use “–no-obscure”. You can also set osbscured passwords using the “rclone config password” command.
+If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
+NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.
So for example if you wanted to configure a Google Drive remote but using remote authorization you would do this:
rclone config create mydrive drive config_is_local false
rclone config create `name` `type` [`key` `value`]* [flags]
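For example, forcing a plaintext password to be obscured on creation (an illustrative ftp remote):
rclone config create myftp ftp host ftp.example.com user myuser pass mypassword --obscure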
@@ -863,7 +866,7 @@ rclone backend help <backendname>
This disconnects the remote: passed in to the cloud storage system.
This normally means revoking the oauth token.
-To reconnect use “rclone config reconnect”.
+To reconnect use "rclone config reconnect".
rclone config disconnect remote: [flags]
-h, --help help for disconnect
@@ -911,10 +914,10 @@ rclone backend help <backendname>
Update password in an existing remote.
-Update an existing remote’s password. The password should be passed in pairs of key value.
+Update an existing remote's password. The password should be passed in pairs of key value.
For example to set password of a remote of name myremote you would do:
rclone config password myremote fieldname mypassword
-This command is obsolete now that “config update” and “config create” both support obscuring passwords directly.
+This command is obsolete now that "config update" and "config create" both support obscuring passwords directly.
rclone config password `name` [`key` `value`]+ [flags]
-h, --help help for password
@@ -939,7 +942,7 @@ rclone backend help <backendname>
Re-authenticates user with remote.
This reconnects remote: passed in to the cloud storage system.
-To disconnect the remote use “rclone config disconnect”.
+To disconnect the remote use "rclone config disconnect".
This normally means going through the interactive oauth flow again.
rclone config reconnect remote: [flags]
Update options in an existing remote.
-Update an existing remote’s options. The options should be passed in in pairs of key value.
+Update an existing remote's options. The options should be passed in pairs of key value.
For example to update the env_auth field of a remote of name myremote you would do:
rclone config update myremote swift env_auth true
-If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren’t already obscured before putting them in the config file.
-NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the “–obscure” flag, or if you are 100% certain you are already passing obscured passwords then use “–no-obscure”. You can also set osbscured passwords using the “rclone config password” command.
-If the remote uses OAuth the token will be updated, if you don’t require this add an extra parameter thus:
+If any of the parameters passed is a password field, then rclone will automatically obscure them if they aren't already obscured before putting them in the config file.
+NB If the password parameter is 22 characters or longer and consists only of base64 characters then rclone can get confused about whether the password is already obscured or not and put unobscured passwords into the config file. If you want to be 100% certain that the passwords get obscured then use the "--obscure" flag, or if you are 100% certain you are already passing obscured passwords then use "--no-obscure". You can also set obscured passwords using the "rclone config password" command.
+If the remote uses OAuth the token will be updated, if you don't require this add an extra parameter thus:
rclone config update myremote swift env_auth true config_refresh_token false
rclone config update `name` [`key` `value`]+ [flags]
-This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. It doesn’t delete files from the destination.
+This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. It doesn't delete files from the destination.
Note: Use the -P/--progress flag to view real-time transfer statistics
rclone copyto source:path dest:path [flags]
Copy url content to dest.
-Download a URL’s content and copy it to the destination without saving it in temporary storage.
-Setting –auto-filename will cause the file name to be retrieved from the from URL (after any redirections) and used in the destination path.
-Setting –no-clobber will prevent overwriting file on the destination if there is one with the same name.
-Setting –stdout or making the output file name “-” will cause the output to be written to standard output.
+Download a URL's content and copy it to the destination without saving it in temporary storage.
+Setting --auto-filename will cause the file name to be retrieved from the from URL (after any redirections) and used in the destination path.
+Setting --no-clobber will prevent overwriting file on the destination if there is one with the same name.
+Setting --stdout or making the output file name "-" will cause the output to be written to standard output.
rclone copyurl https://example.com dest:path [flags]
-a, --auto-filename Get the file name from the URL and use it for destination file path
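For instance, fetching a file and naming it from the URL (URL illustrative):
rclone copyurl --auto-filename https://example.com/archive.zip dest:path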
@@ -1047,7 +1050,7 @@ if src is directory
You can use it like this also, but that will involve downloading all the files in remote:path.
rclone cryptcheck remote:path encryptedremote:path
After it has run it will log the status of the encryptedremote:.
-If you supply the –one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
+If you supply the --one-way flag, it will only check that files in source match the files in destination, not the other way around. Meaning extra files in destination that are not in the source will not trigger an error.
rclone cryptcheck remote:path cryptedremote:path [flags]
Options
-h, --help help for cryptcheck
@@ -1061,7 +1064,7 @@ if src is directory
Cryptdecode returns unencrypted file names.
Synopsis
rclone cryptdecode returns unencrypted file names when provided with a list of encrypted file names. List limit is 10 items.
-If you supply the –reverse flag, it will return encrypted file names.
+If you supply the --reverse flag, it will return encrypted file names.
use it like this
rclone cryptdecode encryptedremote: encryptedfilename1 encryptedfilename2
@@ -1078,7 +1081,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone deletefile
Remove a single file from remote.
Synopsis
-Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn’t obey include/exclude filters - if the specified file exists, it will always be removed.
+Remove a single file from remote. Unlike delete it cannot be used to remove a directory and it doesn't obey include/exclude filters - if the specified file exists, it will always be removed.
rclone deletefile remote:path [flags]
Options
-h, --help help for deletefile
@@ -1090,7 +1093,7 @@ rclone cryptdecode --reverse encryptedremote: filename1 filename2
rclone genautocomplete
Output completion script for a given shell.
Synopsis
-Generates a shell completion script for rclone. Run with –help to list the supported shells.
+Generates a shell completion script for rclone. Run with --help to list the supported shells.
Options
-h, --help help for genautocomplete
See the global flags page for global options not listed here.
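For example, to generate the completion script for bash:
rclone genautocomplete bash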
@@ -1192,7 +1195,7 @@ Supported hashes are:
rclone link will create or retrieve a public link to the given file or folder.
rclone link remote:path/to/file
rclone link remote:path/to/folder/
-If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
+If successful, the last line of the output will contain the link. Exact capabilities depend on the remote, but the link will always be created with the least constraints – e.g. no expiry, no password protection, accessible without account.
rclone link remote:path [flags]
Options
-h, --help help for link
@@ -1226,7 +1229,7 @@ canole
diwogej7
ferejej3gux/
fubuwic
-Use the –format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
+Use the --format option to control what gets listed. By default this is just the path, but you can use these parameters to control the output:
p - path
s - size
t - modification time
@@ -1236,7 +1239,7 @@ o - Original ID of underlying object
m - MimeType of object if known
e - encrypted name
T - tier of storage if known, eg "Hot" or "Cool"
-So if you wanted the path, size and modification time, you would use –format “pst”, or maybe –format “tsp” to put the path last.
+So if you wanted the path, size and modification time, you would use --format "pst", or maybe --format "tsp" to put the path last.
Eg
$ rclone lsf --format "tsp" swift:bucket
2016-06-25 18:55:41;60295;bevajer5jef
@@ -1244,7 +1247,7 @@ T - tier of storage if known, eg "Hot" or "Cool"
-If you specify “h” in the format you will get the MD5 hash by default, use the “–hash” flag to change which hash you want. Note that this can be returned as an empty string if it isn’t available on the object (and for directories), “ERROR” if there was an error reading it from the object and “UNSUPPORTED” if that object does not support that hash type.
+If you specify "h" in the format you will get the MD5 hash by default, use the "--hash" flag to change which hash you want. Note that this can be returned as an empty string if it isn't available on the object (and for directories), "ERROR" if there was an error reading it from the object and "UNSUPPORTED" if that object does not support that hash type.
For example to emulate the md5sum command you can use
rclone lsf -R --hash MD5 --format hp --separator " " --files-only .
Eg
@@ -1254,8 +1257,8 @@ cd65ac234e6fea5925974a51cdd865cc canole
03b5341b4f234b9d984d03ad076bae91 diwogej7
8fd37c3810dd660778137ac3a66cc06d fubuwic
99713e14a4c4ff553acaf1930fad985b gixacuh7ku
-(Though “rclone md5sum .” is an easier way of typing this.)
-By default the separator is “;” this can be changed with the –separator flag. Note that separators aren’t escaped in the path so putting it last is a good strategy.
+(Though "rclone md5sum ." is an easier way of typing this.)
+By default the separator is ";" this can be changed with the --separator flag. Note that separators aren't escaped in the path so putting it last is a good strategy.
Eg
$ rclone lsf --separator "," --format "tshp" swift:bucket
2016-06-25 18:55:41,60295,7908e352297f0f530b84a756f188baa3,bevajer5jef
@@ -1269,7 +1272,7 @@ cd65ac234e6fea5925974a51cdd865cc canole
test.log,22355
test.sh,449
"this file contains a comma, in the file name.txt",6
-Note that the –absolute parameter is useful for making lists of files to pass to an rclone copy with the –files-from-raw flag.
+Note that the --absolute parameter is useful for making lists of files to pass to an rclone copy with the --files-from-raw flag.
For example to find all the files modified within one day and copy those only (without traversing the whole directory structure):
rclone lsf --absolute --files-only --max-age 1d /path/to/local > new_files
rclone copy --files-from-raw new_files /path/to/local remote:path
@@ -1283,9 +1286,9 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
lsjson to list objects and directories in JSON format
ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
-Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.
-The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.
+Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsf remote:path [flags]
--absolute Put a leading / in front of path names.
@@ -1308,16 +1311,16 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
List directories and objects in the path in JSON format.
The output is an array of Items, where each Item looks like this
-{ “Hashes” : { “SHA-1” : “f572d396fae9206628714fb2ce00f72e94f2258f”, “MD5” : “b1946ac92492d2347c6235b4d2611184”, “DropboxHash” : “ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc” }, “ID”: “y2djkhiujf83u33”, “OrigID”: “UYOJVTUW00Q1RzTDA”, “IsBucket” : false, “IsDir” : false, “MimeType” : “application/octet-stream”, “ModTime” : “2017-05-31T16:15:57.034468261+01:00”, “Name” : “file.txt”, “Encrypted” : “v0qpsdq8anpci8n929v3uu9338”, “EncryptedPath” : “kja9098349023498/v0qpsdq8anpci8n929v3uu9338”, “Path” : “full/path/goes/here/file.txt”, “Size” : 6, “Tier” : “hot”, }
-If –hash is not specified the Hashes property won’t be emitted. The types of hash can be specified with the –hash-type parameter (which may be repeated). If –hash-type is set then it implies –hash.
-If –no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift).
-If –no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift).
-If –encrypted is not specified the Encrypted won’t be emitted.
-If –dirs-only is not specified files in addition to directories are returned
-If –files-only is not specified directories in addition to the files will be returned.
-The Path field will only show folders below the remote path being listed. If “remote:path” contains the file “subfolder/file.txt”, the Path for “file.txt” will be “subfolder/file.txt”, not “remote:path/subfolder/file.txt”. When used without –recursive the Path will always be the same as Name.
-If the directory is a bucket in a bucket based backend, then “IsBucket” will be set to true. This key won’t be present unless it is “true”.
-The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown (“2017-05-31T16:15:57.034+01:00”) whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown (“2017-05-31T16:15:57+01:00”).
+{ "Hashes" : { "SHA-1" : "f572d396fae9206628714fb2ce00f72e94f2258f", "MD5" : "b1946ac92492d2347c6235b4d2611184", "DropboxHash" : "ecb65bb98f9d905b70458986c39fcbad7715e5f2fcc3b1f07767d7c83e2438cc" }, "ID": "y2djkhiujf83u33", "OrigID": "UYOJVTUW00Q1RzTDA", "IsBucket" : false, "IsDir" : false, "MimeType" : "application/octet-stream", "ModTime" : "2017-05-31T16:15:57.034468261+01:00", "Name" : "file.txt", "Encrypted" : "v0qpsdq8anpci8n929v3uu9338", "EncryptedPath" : "kja9098349023498/v0qpsdq8anpci8n929v3uu9338", "Path" : "full/path/goes/here/file.txt", "Size" : 6, "Tier" : "hot", }
+If --hash is not specified the Hashes property won't be emitted. The types of hash can be specified with the --hash-type parameter (which may be repeated). If --hash-type is set then it implies --hash.
+If --no-modtime is specified then ModTime will be blank. This can speed things up on remotes where reading the ModTime takes an extra request (eg s3, swift).
+If --no-mimetype is specified then MimeType will be blank. This can speed things up on remotes where reading the MimeType takes an extra request (eg s3, swift).
+If --encrypted is not specified the Encrypted won't be emitted.
+If --dirs-only is not specified files in addition to directories are returned
+If --files-only is not specified directories in addition to the files will be returned.
+The Path field will only show folders below the remote path being listed. If "remote:path" contains the file "subfolder/file.txt", the Path for "file.txt" will be "subfolder/file.txt", not "remote:path/subfolder/file.txt". When used without --recursive the Path will always be the same as Name.
+If the directory is a bucket in a bucket based backend, then "IsBucket" will be set to true. This key won't be present unless it is "true".
+The time is in RFC3339 format with up to nanosecond precision. The number of decimal digits in the seconds will depend on the precision that the remote can hold the times, so if times are accurate to the nearest millisecond (eg Google Drive) then 3 digits will always be shown ("2017-05-31T16:15:57.034+01:00") whereas if the times are accurate to the nearest second (Dropbox, Box, WebDav etc) no digits will be shown ("2017-05-31T16:15:57+01:00").
The whole output can be processed as a JSON blob, or alternatively it can be processed line by line as each item is written one to a line.
Any of the filtering options can be applied to this command.
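For example, combining filters with a machine readable listing (flags as described above; names illustrative):
rclone lsjson --files-only --no-modtime --max-age 24h remote:path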
There are several related list commands
@@ -1329,9 +1332,9 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
lsjson to list objects and directories in JSON format
ls,lsl,lsd are designed to be human readable. lsf is designed to be human and machine readable. lsjson is designed to be machine readable.
-Note that ls and lsl recurse by default - use “–max-depth 1” to stop the recursion.
-The other list commands lsd,lsf,lsjson do not recurse by default - use “-R” to make them recurse.
-Listing a non existent directory will produce an error except for remotes which can’t have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
+Note that ls and lsl recurse by default - use "--max-depth 1" to stop the recursion.
+The other list commands lsd,lsf,lsjson do not recurse by default - use "-R" to make them recurse.
+Listing a non existent directory will produce an error except for remotes which can't have empty directories (eg s3, swift, gcs, etc - the bucket based remotes).
rclone lsjson remote:path [flags]
--dirs-only Show only directories in the listing.
@@ -1352,9 +1355,9 @@ rclone copy --files-from-raw new_files /path/to/local remote:path
Mount the remote as file system on a mountpoint.
-rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone’s cloud storage systems as a file system with FUSE.
+rclone mount allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
First set up your remote using rclone config. Check it works with rclone ls etc.
-You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the –daemon flag to specify background mode mode. Background mode is only supported on Linux and OSX, you can only run mount in foreground mode on Windows.
+You can either run mount in foreground mode or background (daemon) mode. Mount runs in foreground mode by default, use the --daemon flag to specify background mode. Background mode is only supported on Linux and OSX, you can only run mount in foreground mode on Windows.
On Linux/macOS/FreeBSD Start the mount like this where /path/to/local/mount is an empty existing directory.
rclone mount remote:path/to/files /path/to/local/mount
Or on Windows like this where X: is an unused drive letter or use a path to non-existent directory.
rclone mount remote:path/to/files X:
When running in background mode the user will have to stop the mount manually (specified below).
When the program ends while in foreground mode, either via Ctrl+C or receiving a SIGINT or SIGTERM signal, the mount is automatically stopped.
-The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user’s responsibility to stop the mount manually.
+The umount operation can fail, for example when the mountpoint is busy. When that happens, it is the user's responsibility to stop the mount manually.
Stopping the mount manually:
# Linux
fusermount -u /path/to/local/mount
@@ -1378,31 +1381,31 @@ umount /path/to/local/mount
By default, rclone will mount the remote as a normal drive. However, you can also mount it as a Network Drive (or Network Share, as mentioned in some places)
Unlike other systems, Windows provides a different filesystem type for network drives. Windows and other programs treat the network drives and fixed/removable drives differently: In network drives, many I/O operations are optimized, as the high latency and low reliability (compared to a normal drive) of a network is expected.
Although many people prefer network shares to be mounted as normal system drives, this might cause some issues, such as programs not working as expected or freezes and errors while operating with the mounted remote in Windows Explorer. If you experience any of those, consider mounting rclone remotes as network shares, as Windows expects normal drives to be fast and reliable, while cloud storage is far from that. See also Limitations section below for more info
-Add “–fuse-flag –VolumePrefix=\server\share” to your “mount” command, replacing “share” with any other name of your choice if you are mounting more than one remote. Otherwise, the mountpoints will conflict and your mounted filesystems will overlap.
+Add "--fuse-flag --VolumePrefix=\server\share" to your "mount" command, replacing "share" with any other name of your choice if you are mounting more than one remote. Otherwise, the mountpoints will conflict and your mounted filesystems will overlap.
-Without the use of “–vfs-cache-mode” this can only write files sequentially, it can only seek when reading. This means that many applications won’t work with their files on an rclone mount without “–vfs-cache-mode writes” or “–vfs-cache-mode full”. See the File Caching section for more info.
+Without the use of "--vfs-cache-mode" this can only write files sequentially, it can only seek when reading. This means that many applications won't work with their files on an rclone mount without "--vfs-cache-mode writes" or "--vfs-cache-mode full". See the File Caching section for more info.
The bucket based remotes (eg Swift, S3, Google Compute Storage, B2, Hubic) do not support the concept of empty directories, so empty directories will have a tendency to disappear once they fall out of the directory cache.
Only supported on Linux, FreeBSD, OS X and Windows at the moment.
-File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can’t use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.
+File systems expect things to be 100% reliable, whereas cloud storage systems are a long way from 100% reliable. The rclone sync/copy commands cope with this with lots of retries. However rclone mount can't use retries in the same way without making local copies of the uploads. Look at the file caching for solutions to make mount more reliable.
-You can use the flag –attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.
-The default is “1s” which caches files just long enough to avoid too many callbacks to rclone from the kernel.
+You can use the flag --attr-timeout to set the time the kernel caches the attributes (size, modification time etc) for directory entries.
+The default is "1s" which caches files just long enough to avoid too many callbacks to rclone from the kernel.
In theory 0s should be the correct value for filesystems which can change outside the control of the kernel. However this causes quite a few problems such as rclone using too much memory, rclone not serving files to samba and excessive time listing directories.
-The kernel can cache the info about a file for the time given by “–attr-timeout”. You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With “–attr-timeout 1s” this is very unlikely but not impossible. The higher you set “–attr-timeout” the more likely it is. The default setting of “1s” is the lowest setting which mitigates the problems above.
-If you set it higher (‘10s’ or ‘1m’ say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.
-If files don’t change on the remote outside of the control of rclone then there is no chance of corruption.
+The kernel can cache the info about a file for the time given by "--attr-timeout". You may see corruption if the remote file changes length during this window. It will show up as either a truncated file or a file with garbage on the end. With "--attr-timeout 1s" this is very unlikely but not impossible. The higher you set "--attr-timeout" the more likely it is. The default setting of "1s" is the lowest setting which mitigates the problems above.
+If you set it higher ('10s' or '1m' say) then the kernel will call back to rclone less often making it more efficient, however there is more chance of the corruption issue above.
+If files don't change on the remote outside of the control of rclone then there is no chance of corruption.
This is the same as setting the attr_timeout option in mount.fuse.
Note that all the rclone filters can be used to select a subset of the files to be visible in the mount.
When running rclone mount as a systemd service, it is possible to use Type=notify. In this case the service will enter the started state after the mountpoint has been successfully set up. Units having the rclone mount service specified as a requirement will see all files and folders immediately in this mode.
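A minimal sketch of such a unit (the remote name, mountpoint and binary paths are assumptions):
[Unit]
Description=rclone mount of remote:
[Service]
Type=notify
ExecStart=/usr/bin/rclone mount remote: /mnt/remote
ExecStop=/bin/fusermount -u /mnt/remote
[Install]
WantedBy=default.target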
-–vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.
-When –vfs-read-chunk-size-limit is also specified and greater than –vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.
-With –vfs-read-chunk-size 100M and –vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When –vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
-Chunked reading will only work with –vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with –vfs-cache-mode full.
+--vfs-read-chunk-size will enable reading the source objects in parts. This can reduce the used download quota for some remotes by requesting only chunks from the remote that are actually read at the cost of an increased number of requests.
+When --vfs-read-chunk-size-limit is also specified and greater than --vfs-read-chunk-size, the chunk size for each open file will get doubled for each chunk read, until the specified value is reached. A value of -1 will disable the limit and the chunk size will grow indefinitely.
+With --vfs-read-chunk-size 100M and --vfs-read-chunk-size-limit 0 the following parts will be downloaded: 0-100M, 100M-200M, 200M-300M, 300M-400M and so on. When --vfs-read-chunk-size-limit 500M is specified, the result would be 0-100M, 100M-300M, 300M-700M, 700M-1200M, 1200M-1700M and so on.
+Chunked reading will only work with --vfs-cache-mode < full, as the file will always be copied to the vfs cache before opening with --vfs-cache-mode full.
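For example, a mount using growing chunked reads as described above (paths illustrative):
rclone mount --vfs-read-chunk-size 100M --vfs-read-chunk-size-limit 500M remote: /path/to/local/mount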
Using the --dir-cache-time flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
Alternatively, you can send a SIGHUP signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size flag determines the amount of memory that will be used to buffer data in advance.
-Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1426,47 +1429,47 @@ umount /path/to/local/mount
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
-If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
-This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to "off" except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to –low-level-retries times.
+If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to –low-level-retries times.
+If an upload or download fails it will be retried up to --low-level-retries times.
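For example (the remote name and limits here are illustrative, not from the manual), a mount that needs good application compatibility with a bounded cache could be started like this:
rclone mount remote:path /path/to/mountpoint --vfs-cache-mode writes --vfs-cache-max-size 10G --vfs-cache-max-age 1h
The writes mode buffers files opened for writing to disk as described above, while the size and age flags bound the on disk cache.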
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
-The “–vfs-case-insensitive” mount flag controls how rclone handles these two cases. If its value is “false”, rclone passes file names to the mounted file system as is. If the flag is “true” (or appears without a value on command line), rclone may perform a “fixup” as explained below.
+The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether “fixup” is performed to satisfy the target.
-If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: “true” on Windows and macOS, “false” otherwise. If the flag is provided without a value, then it is “true”.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
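As an illustrative sketch, a mount that disables the fixup regardless of the host operating system could be run as:
rclone mount remote:path /path/to/mountpoint --vfs-case-insensitive=false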
rclone mount remote:path /path/to/mountpoint [flags]
--allow-non-empty Allow mounting over a non-empty directory (not Windows).
@@ -1523,8 +1526,8 @@ umount /path/to/local/mount
if src is directory
move it to dst, overwriting existing files if they exist
see move command for full details
-This doesn’t transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
-Important: Since this can cause data loss, test first with the –dry-run flag.
+This doesn't transfer unchanged files, testing by size and modification time or MD5SUM. src will be deleted on successful transfer.
+Important: Since this can cause data loss, test first with the --dry-run flag.
Note: Use the -P
/--progress
flag to view real-time transfer statistics.
rclone moveto source:path dest:path [flags]
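For instance (paths illustrative), a cautious first run might be:
rclone moveto --dry-run -P source:reports/2020.csv dest:archive/2020.csv
Dropping --dry-run then performs the move for real.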
Explore a remote with a text based user interface.
-This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - “What is using all my disk space?”.
+This displays a text based user interface allowing the navigation of a remote. It is most useful for answering the question - "What is using all my disk space?".
To make the user interface it first scans the entire remote given and builds an in memory representation. rclone ncdu can be used during this scanning phase and you will see it building up the directory structure as it goes along.
-Here are the keys - press ‘?’ to toggle the help on and off
+Here are the keys - press '?' to toggle the help on and off
↑,↓ or k,j to Move
→,l to enter
←,h to return
@@ -1553,7 +1556,7 @@ if src is directory
? to toggle help on and off
q/ESC/c-C to quit
This is an homage to the ncdu tool but for rclone remotes. It is missing lots of features at the moment but is useful as it stands.
-Note that it might take some time to delete big files/folders. The UI won’t respond in the meantime since the deletion is done synchronously.
+Note that it might take some time to delete big files/folders. The UI won't respond in the meantime since the deletion is done synchronously.
rclone ncdu remote:path [flags]
-h, --help help for ncdu
@@ -1565,7 +1568,7 @@ if src is directory
Obscure password for use in the rclone config file
-In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent “eyedropping” - namely someone seeing a password in the rclone config file by accident.
+In the rclone config file, human readable passwords are obscured. Obscuring them is done by encrypting them and writing them out in base64. This is not a secure way of encrypting these passwords as rclone can decrypt them - it is to prevent "eyedropping" - namely someone seeing a password in the rclone config file by accident.
Many equally important things (like access tokens) are not obscured in the config file. However it is very hard to shoulder surf a 64 character hex token.
If you want to encrypt the config file then please use config file encryption - see rclone config for more info.
rclone obscure password [flags]
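For example, to produce the obscured form of an (obviously illustrative) password for pasting into a password field of the config file:
rclone obscure mysecretpassword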
@@ -1579,23 +1582,23 @@ if src is directory
Run a command against a running rclone.
-This runs a command against a running rclone. Use the –url flag to specify an non default URL to connect on. This can be either a “:port” which is taken to mean “http://localhost:port” or a “host:port” which is taken to mean “http://host:port”
-A username and password can be passed in with –user and –pass.
-Note that –rc-addr, –rc-user, –rc-pass will be read also for –url, –user, –pass.
+This runs a command against a running rclone. Use the --url flag to specify a non-default URL to connect on. This can be either a ":port" which is taken to mean "http://localhost:port" or a "host:port" which is taken to mean "http://host:port".
+A username and password can be passed in with --user and --pass.
+Note that --rc-addr, --rc-user, --rc-pass will be read also for --url, --user, --pass.
Arguments should be passed in as parameter=value.
The result will be returned as a JSON object by default.
-The –json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
-The -o/–opt option can be used to set a key “opt” with key, value options in the form “-o key=value” or “-o key”. It can be repeated as many times as required. This is useful for rc commands which take the “opt” parameter which by convention is a dictionary of strings.
+The --json parameter can be used to pass in a JSON blob as an input instead of key=value arguments. This is the only way of passing in more complicated values.
+The -o/--opt option can be used to set a key "opt" with key, value options in the form "-o key=value" or "-o key". It can be repeated as many times as required. This is useful for rc commands which take the "opt" parameter which by convention is a dictionary of strings.
-o key=value -o key2
-Will place this in the “opt” value
+Will place this in the "opt" value
{"key":"value", "key2","")
-The -a/–arg option can be used to set strings in the “arg” value. It can be repeated as many times as required. This is useful for rc commands which take the “arg” parameter which by convention is a list of strings.
+The -a/--arg option can be used to set strings in the "arg" value. It can be repeated as many times as required. This is useful for rc commands which take the "arg" parameter which by convention is a list of strings.
-a value -a value2
-Will place this in the “arg” value
+Will place this in the "arg" value
["value", "value2"]
-Use –loopback to connect to the rclone instance running “rclone rc”. This is very useful for testing commands without having to run an rclone rc server, eg:
+Use --loopback to connect to the rclone instance running "rclone rc". This is very useful for testing commands without having to run an rclone rc server, eg:
rclone rc --loopback operations/about fs=/
-Use “rclone rc” to see a list of all possible commands.
+Use "rclone rc" to see a list of all possible commands.
rclone rc commands parameter [flags]
-a, --arg stringArray Argument placed in the "arg" array.
@@ -1620,7 +1623,7 @@ if src is directory
ffmpeg - | rclone rcat remote:path/to/file
If the remote file already exists, it will be overwritten.
rcat will try to upload small files in a single request, which is usually more efficient than the streaming/chunked upload endpoints, which use multiple requests. Exact behaviour depends on the remote. What is considered a small file may be set through --streaming-upload-cutoff
. Uploading only starts after the cutoff is reached or if the file ends before that. The data must fit into RAM. The cutoff needs to be small enough to adhere to the limits of your remote, please see the remote's documentation. Generally speaking, setting this cutoff too high will decrease your performance.
-Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you’re better off caching locally and then rclone move
it to the destination.
+Note that the upload can also not be retried because the data is not kept around until the upload succeeds. If you need to transfer a lot of data, you're better off caching locally and then rclone move
it to the destination.
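Another illustrative pipeline (names hypothetical) streams a compressed archive straight to the remote:
tar czf - /path/to/dir | rclone rcat remote:backup/dir.tar.gz
Bear the retry caveat above in mind: if the upload fails, the whole pipeline must be rerun.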
rclone rcat remote:path [flags]
-h, --help help for rcat
@@ -1648,7 +1651,7 @@ ffmpeg - | rclone rcat remote:path/to/file
Remove empty directories under the path.
This removes any empty directories (or directories that only contain empty directories) under the path that it finds, including the path if it has nothing in.
-If you supply the –leave-root flag, it will not remove the root directory.
+If you supply the --leave-root flag, it will not remove the root directory.
This is useful for tidying up remotes that rclone has left a lot of empty directories in.
rclone rmdirs remote:path [flags]
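For example (path illustrative), to tidy up a remote while keeping its root directory:
rclone rmdirs --leave-root remote:path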
rclone serve dlna is a DLNA media server for media stored in an rclone remote. Many devices, such as the Xbox and PlayStation, can automatically discover this server in the LAN and play audio/video from it. VLC is also supported. Service discovery uses UDP multicast packets (SSDP) and will thus only work on LANs.
Rclone will list all files present in the remote, without filtering based on media formats or file extensions. Additionally, there is no media transcoding support. This means that some players might show files that they are not able to play back correctly.
-Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs.
-Use –name to choose the friendly server name, which is by default “rclone (hostname)”.
-Use –log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs.
+Use --name to choose the friendly server name, which is by default "rclone (hostname)".
+Use --log-trace in conjunction with -vv to enable additional debug logging of all UPNP traffic.
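Putting these flags together (remote and name illustrative), a server for a media folder might be started with:
rclone serve dlna --addr :7879 --name "my-media" remote:media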
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
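kill -SIGHUP $(pidof rclone)
If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: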
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory that will be used to buffer data in advance.
-Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1711,47 +1714,47 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
-If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
-This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to –low-level-retries times.
+If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to –low-level-retries times.
+If an upload or download fails it will be retried up to --low-level-retries times.
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
-The “–vfs-case-insensitive” mount flag controls how rclone handles these two cases. If its value is “false”, rclone passes file names to the mounted file system as is. If the flag is “true” (or appears without a value on command line), rclone may perform a “fixup” as explained below.
+The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether “fixup” is performed to satisfy the target.
-If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: “true” on Windows and macOS, “false” otherwise. If the flag is provided without a value, then it is “true”.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
rclone serve dlna remote:path [flags]
--addr string ip:port or :port to bind the DLNA http server to. (default ":7879")
@@ -1788,11 +1791,11 @@ ffmpeg - | rclone rcat remote:path/to/file
rclone serve ftp implements a basic ftp server to serve the remote over FTP protocol. This can be viewed with a ftp client or you can make a remote of type ftp to read and write it.
-Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
By default this will serve files without needing a login.
-You can set a single username and password with the –user and –pass flags.
+You can set a single username and password with the --user and --pass flags.
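For example (credentials illustrative), to listen on all interfaces on port 2121 with a single user:
rclone serve ftp --addr :2121 --user myuser --pass mypass remote:path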
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
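kill -SIGHUP $(pidof rclone)
If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: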
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory that will be used to buffer data in advance.
-Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -1816,52 +1819,52 @@ ffmpeg - | rclone rcat remote:path/to/file
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
-If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
-This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to –low-level-retries times.
+If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to –low-level-retries times.
+If an upload or download fails it will be retried up to --low-level-retries times.
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
-The “–vfs-case-insensitive” mount flag controls how rclone handles these two cases. If its value is “false”, rclone passes file names to the mounted file system as is. If the flag is “true” (or appears without a value on command line), rclone may perform a “fixup” as explained below.
+The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether “fixup” is performed to satisfy the target.
-If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: “true” on Windows and macOS, “false” otherwise. If the flag is provided without a value, then it is “true”.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which are then used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
There is an example program bin/test_proxy.py in the rclone source code.
-The program’s job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+The program's job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
This config generated must have this extra parameter - _root
- root to use for the backend
And it may have this parameter - _obscure
- comma separated strings for parameters to obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
@@ -1884,8 +1887,8 @@ ffmpeg - | rclone rcat remote:path/to/file
{
    "user": "me",
    "pass": "mypassword"
}
And as an example return this on STDOUT:
{
    "type": "sftp",
    "_root": "",
    "_obscure": "pass",
    "user": "me",
    "pass": "mypassword",
    "host": "sftp.example.com"
}
This would mean that an SFTP backend would be created on the fly for the user
and pass
/public_key
returned in the output to the host given. Note that since _obscure
is set to pass
, rclone will obscure the pass
parameter before creating the backend (which is required for sftp backends).
-The program can manipulate the supplied user
in any way, for example to make proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you’d probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don’t use pass
or public_key
. This also means that if a user’s password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
+The program can manipulate the supplied user
in any way, for example to make proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you'd probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve ftp remote:path [flags]
Serve the remote over HTTP.
rclone serve http implements a basic web server to serve the remote over HTTP. This can be viewed in a web browser or you can make a remote of type http read from it.
-You can use the filter flags (eg –include, –exclude) to control what is served.
+You can use the filter flags (eg --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
-–bwlimit will be respected for file transfers. Use –stats to control the stats printing.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
-Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-–server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
-–max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
-–baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used –baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” and –baseurl “/rclone/” are all treated identically.
-–template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
+--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
Allows for creating a relative navigation
-– .Link | The relative to the root link of the Text.
+-- .Link | The relative to the root link of the Text.
-– .Text | The Name of the directory.
+-- .Text | The Name of the directory.
Information about a specific file/directory.
-– .URL | The ‘url’ of an entry.
+-- .URL | The 'url' of an entry.
-– .Leaf | Currently same as ‘URL’ but intended to be ‘just’ the name.
+-- .Leaf | Currently same as 'URL' but intended to be 'just' the name.
-– .IsDir | Boolean for if an entry is a directory or not.
+-- .IsDir | Boolean for if an entry is a directory or not.
-– .Size | Size in Bytes of the entry.
+-- .Size | Size in Bytes of the entry.
-– .ModTime | The UTC timestamp of an entry.
+-- .ModTime | The UTC timestamp of an entry.
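As an illustrative sketch combining these flags, an HTTP server running behind a reverse proxy might be started as:
rclone serve http --addr :8080 --baseurl /rclone remote:path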
By default this will serve files without needing a login.
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags.
-Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
-Use –realm to set the authentication realm.
+Use --realm to set the authentication realm.
-By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also.
-–cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. –key should be the PEM encoded private key and –client-ca should be the PEM encoded client certificate authority certificate.
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
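For example (file names illustrative), to serve over https with client side certificate validation:
rclone serve http --cert server.pem --key server.key --client-ca ca.pem remote:path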
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
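kill -SIGHUP $(pidof rclone)
If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: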
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory that will be used to buffer data in advance.
-Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the memory used per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -2049,47 +2052,47 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
-If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
+If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
-This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to –low-level-retries times.
+If an upload fails it will be retried up to --low-level-retries times.
+In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to –low-level-retries times.
+If an upload or download fails it will be retried up to --low-level-retries times.
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
-The “–vfs-case-insensitive” mount flag controls how rclone handles these two cases. If its value is “false”, rclone passes file names to the mounted file system as is. If the flag is “true” (or appears without a value on command line), rclone may perform a “fixup” as explained below.
+The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether “fixup” is performed to satisfy the target.
-If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: “true” on Windows and macOS, “false” otherwise. If the flag is provided without a value, then it is “true”.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
rclone serve http remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
@@ -2132,24 +2135,24 @@ htpasswd -B htpasswd anotherUser
-Serve the remote for restic’s REST API.
+Serve the remote for restic's REST API.
-rclone serve restic implements restic’s REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
+rclone serve restic implements restic's REST backend API over HTTP. This allows restic to use rclone as a data storage mechanism for cloud providers that restic does not support directly.
Restic is a command line program for doing backups.
The server will log errors. Use -v to see access logs.
-–bwlimit will be respected for file transfers. Use –stats to control the stats printing.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
First set up a remote for your chosen cloud provider.
-Once you have set up the remote, check it is working with, for example “rclone lsd remote:”. You may have called the remote something other than “remote:” - just substitute whatever you called it in the following instructions.
+Once you have set up the remote, check it is working with, for example "rclone lsd remote:". You may have called the remote something other than "remote:" - just substitute whatever you called it in the following instructions.
Now start the rclone restic server
rclone serve restic -v remote:backup
-Where you can replace “backup” in the above by whatever path in the remote you wish to use.
-By default this will serve on “localhost:8080” you can change this with use of the “–addr” flag.
+Where you can replace "backup" in the above by whatever path in the remote you wish to use.
+By default this will serve on "localhost:8080" you can change this with use of the "--addr" flag.
You might wish to start this server on boot.
Now you can follow the restic instructions on setting up restic.
Note that you will need restic 0.8.2 or later to interoperate with rclone.
-For the example above you will want to use “http://localhost:8080/” as the URL for the REST server.
+For the example above you will want to use "http://localhost:8080/" as the URL for the REST server.
For example:
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/
$ export RESTIC_PASSWORD=yourpassword
@@ -2172,14 +2175,14 @@ snapshot 45c8fdd8 saved
$ export RESTIC_REPOSITORY=rest:http://localhost:8080/user2repo/
# backup user2 stuff
-The “–private-repos” flag can be used to limit users to repositories starting with a path of /<username>/
.
+The "--private-repos" flag can be used to limit users to repositories starting with a path of /<username>/
.
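For example (paths illustrative), combining the htpasswd authentication described below with private repositories:
rclone serve restic -v --private-repos --htpasswd ./htpasswd remote:backup
Each authenticated user is then confined to repositories under /<username>/.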
-Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-–server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
-–max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
-–baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used –baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” and –baseurl “/rclone/” are all treated identically.
-–template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
+--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
Allows for creating a relative navigation
-– .Link | The relative to the root link of the Text.
+-- .Link | The relative to the root link of the Text.
-– .Text | The Name of the directory.
+-- .Text | The Name of the directory.
Information about a specific file/directory.
-– .URL | The ‘url’ of an entry.
+-- .URL | The 'url' of an entry.
-– .Leaf | Currently same as ‘URL’ but intended to be ‘just’ the name.
+-- .Leaf | Currently same as 'URL' but intended to be 'just' the name.
-– .IsDir | Boolean for if an entry is a directory or not.
+-- .IsDir | Boolean for if an entry is a directory or not.
-– .Size | Size in Bytes of the entry.
+-- .Size | Size in Bytes of the entry.
-– .ModTime | The UTC timestamp of an entry.
+-- .ModTime | The UTC timestamp of an entry.
By default this will serve files without needing a login.
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags.
-Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
-Use –realm to set the authentication realm.
+Use --realm to set the authentication realm.
-By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also.
-–cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. –key should be the PEM encoded private key and –client-ca should be the PEM encoded client certificate authority certificate.
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
rclone serve restic remote:path [flags]
--addr string IPaddress:Port or :Port to bind server to. (default "localhost:8080")
@@ -2299,14 +2302,14 @@ htpasswd -B htpasswd anotherUser
Serve the remote over SFTP.
rclone serve sftp implements an SFTP server to serve the remote over SFTP. This can be used with an SFTP client or you can make a remote of type sftp to use with it.
-You can use the filter flags (eg –include, –exclude) to control what is served.
+You can use the filter flags (eg --include, --exclude) to control what is served.
The server will log errors. Use -v to see access logs.
-–bwlimit will be respected for file transfers. Use –stats to control the stats printing.
-You must provide some means of authentication, either with –user/–pass, an authorized keys file (specify location with –authorized-keys - the default is the same as ssh), an –auth-proxy, or set the –no-auth flag for no authentication when logging in.
+--bwlimit will be respected for file transfers. Use --stats to control the stats printing.
+You must provide some means of authentication, either with --user/--pass, an authorized keys file (specify location with --authorized-keys - the default is the same as ssh), an --auth-proxy, or set the --no-auth flag for no authentication when logging in.
Note that this also implements a small number of shell commands so that it can provide md5sum/sha1sum/df information for the rclone sftp backend. This means that it can support SHA1SUMs, MD5SUMs and the about command when paired with the rclone sftp backend.
-If you don’t supply a –key then rclone will generate one and cache it for later use.
-By default the server binds to localhost:2022 - if you want it to be reachable externally then supply “–addr :2022” for example.
-Note that the default of “–vfs-cache-mode off” is fine for the rclone sftp backend, but it may not be with other SFTP clients.
+If you don't supply a --key then rclone will generate one and cache it for later use.
+By default the server binds to localhost:2022 - if you want it to be reachable externally then supply "--addr :2022" for example.
+Note that the default of "--vfs-cache-mode off" is fine for the rclone sftp backend, but it may not be with other SFTP clients.
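For example (credentials illustrative), an externally reachable server with a single user and write-friendly caching:
rclone serve sftp --addr :2022 --user myuser --pass mypass --vfs-cache-mode writes remote:path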
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
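kill -SIGHUP $(pidof rclone)
If you configure rclone with a remote control then you can use rclone rc to flush the whole directory cache: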
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory that will be used to buffer data in advance.
-Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
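For example, with --buffer-size 16M and 10 open files rclone could use up to 160M for buffering. A mount using a larger buffer might look like this (the mount point is a placeholder):
rclone mount remote: /mnt/remote --buffer-size 32M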
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -2330,52 +2333,52 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
-If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
-This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to –low-level-retries times.
+If an upload fails it will be retried up to --low-level-retries times.
In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to –low-level-retries times.
+If an upload or download fails it will be retried up to --low-level-retries times.
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
-The “–vfs-case-insensitive” mount flag controls how rclone handles these two cases. If its value is “false”, rclone passes file names to the mounted file system as is. If the flag is “true” (or appears without a value on command line), rclone may perform a “fixup” as explained below.
+The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether “fixup” is performed to satisfy the target.
-If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: “true” on Windows and macOS, “false” otherwise. If the flag is provided without a value, then it is “true”.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
There is an example program bin/test_proxy.py in the rclone source code.
-The program’s job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+The program's job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
This config generated must have this extra parameter - _root
- root to use for the backend
And it may have this parameter - _obscure
- comma separated strings for parameters to obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
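For example (values shown are placeholders):
{
"user": "me",
"pass": "mypassword"
}
If public key authentication was used by the client, a public_key field would be supplied instead of pass.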
@@ -2398,8 +2401,8 @@ htpasswd -B htpasswd anotherUser
"host": "sftp.example.com" }
This would mean that an SFTP backend would be created on the fly for the user
and pass
/public_key
returned in the output to the host given. Note that since _obscure
is set to pass
, rclone will obscure the pass
parameter before creating the backend (which is required for sftp backends).
-The program can manipulate the supplied user
in any way, for example to make proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you’d probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don’t use pass
or public_key
. This also means that if a user’s password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
+The program can manipulate the supplied user
in any way, for example to make proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you'd probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve sftp remote:path [flags]
rclone serve webdav implements a basic webdav server to serve the remote over HTTP via the webdav protocol. This can be viewed with a webdav client, through a web browser, or you can make a remote of type webdav to read and write it.
This controls the ETag header. Without this flag the ETag will be based on the ModTime and Size of the object.
-If this flag is set to “auto” then rclone will choose the first supported hash on the backend or you can use a named hash such as “MD5” or “SHA-1”.
-Use “rclone hashsum” to see the full list.
+If this flag is set to "auto" then rclone will choose the first supported hash on the backend or you can use a named hash such as "MD5" or "SHA-1".
+Use "rclone hashsum" to see the full list.
-Use –addr to specify which IP address and port the server should listen on, eg –addr 1.2.3.4:8000 or –addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
-If you set –addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
-–server-read-timeout and –server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
-–max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
-–baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used –baseurl “/rclone” then rclone would serve from a URL starting with “/rclone/”. This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing “/” on –baseurl, so –baseurl “rclone”, –baseurl “/rclone” and –baseurl “/rclone/” are all treated identically.
-–template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to server pages:
+Use --addr to specify which IP address and port the server should listen on, eg --addr 1.2.3.4:8000 or --addr :8080 to listen to all IPs. By default it only listens on localhost. You can use port :0 to let the OS choose an available port.
+If you set --addr to listen on a public or LAN accessible IP address then using Authentication is advised - see the next section for info.
+--server-read-timeout and --server-write-timeout can be used to control the timeouts on the server. Note that this is the total time for a transfer.
+--max-header-bytes controls the maximum number of bytes the server will accept in the HTTP header.
+--baseurl controls the URL prefix that rclone serves from. By default rclone will serve from the root. If you used --baseurl "/rclone" then rclone would serve from a URL starting with "/rclone/". This is useful if you wish to proxy rclone serve. Rclone automatically inserts leading and trailing "/" on --baseurl, so --baseurl "rclone", --baseurl "/rclone" and --baseurl "/rclone/" are all treated identically.
+--template allows a user to specify a custom markup template for http and webdav serve functions. The server exports the following markup to be used within the template to serve pages:
| Allows for creating a relative navigation |
-| – .Link | The relative to the root link of the Text. |
+| -- .Link | The relative to the root link of the Text. |
-| – .Text | The Name of the directory. |
+| -- .Text | The Name of the directory. |
| Information about a specific file/directory. |
-| – .URL | The ‘url’ of an entry. |
+| -- .URL | The 'url' of an entry. |
-| – .Leaf | Currently same as ‘URL’ but intended to be ‘just’ the name. |
+| -- .Leaf | Currently same as 'URL' but intended to be 'just' the name. |
-| – .IsDir | Boolean for if an entry is a directory or not. |
+| -- .IsDir | Boolean for if an entry is a directory or not. |
-| – .Size | Size in Bytes of the entry. |
+| -- .Size | Size in Bytes of the entry. |
-| – .ModTime | The UTC timestamp of an entry. |
+| -- .ModTime | The UTC timestamp of an entry. |
By default this will serve files without needing a login.
-You can either use an htpasswd file which can take lots of users, or set a single username and password with the –user and –pass flags.
-Use –htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
+You can either use an htpasswd file which can take lots of users, or set a single username and password with the --user and --pass flags.
+Use --htpasswd /path/to/htpasswd to provide an htpasswd file. This is in standard apache format and supports MD5, SHA1 and BCrypt for basic authentication. Bcrypt is recommended.
To create an htpasswd file:
touch htpasswd
htpasswd -B htpasswd user
htpasswd -B htpasswd anotherUser
The password file can be updated while rclone is running.
-Use –realm to set the authentication realm.
+Use --realm to set the authentication realm.
-By default this will serve over http. If you want you can serve over https. You will need to supply the –cert and –key flags. If you wish to do client side certificate validation then you will need to supply –client-ca also.
-–cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. –key should be the PEM encoded private key and –client-ca should be the PEM encoded client certificate authority certificate.
+By default this will serve over http. If you want you can serve over https. You will need to supply the --cert and --key flags. If you wish to do client side certificate validation then you will need to supply --client-ca also.
+--cert should be either a PEM encoded certificate or a concatenation of that with the CA certificate. --key should be the PEM encoded private key and --client-ca should be the PEM encoded client certificate authority certificate.
Using the --dir-cache-time
flag, you can set how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache. However, changes done on the remote will only be picked up once the cache expires if the backend configured does not support polling for changes. If the backend supports polling, changes will be picked up within the polling interval.
Alternatively, you can send a SIGHUP
signal to rclone for it to flush all directory caches, regardless of how old they are. Assuming only one rclone instance is running, you can reset the cache like this:
rclone rc vfs/forget file=path/to/file dir=path/to/dir
The --buffer-size
flag determines the amount of memory that will be used to buffer data in advance.
-Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won’t be shared between multiple open file descriptors of the same file.
+Each open file descriptor will try to keep the specified amount of data in memory at all times. The buffered data is bound to one file descriptor and won't be shared between multiple open file descriptors of the same file.
This flag is an upper limit for the used memory per file descriptor. The buffer will only use memory for data that is downloaded but not yet read. If the buffer is empty, only a small amount of memory will be used. The maximum memory used by rclone for buffering can be up to --buffer-size * open files
.
These flags control the VFS file caching options. The VFS layer is used by rclone mount to make a cloud storage system work more like a normal file system.
-You’ll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
+You'll need to enable VFS caching if you want, for example, to read and write simultaneously to a file. See below for more details.
Note that the VFS cache works in addition to the cache backend and you may find that you need one or the other or both.
--cache-dir string Directory rclone will use for caching.
--vfs-cache-max-age duration Max age of objects in the cache. (default 1h0m0s)
@@ -2566,52 +2569,52 @@ htpasswd -B htpasswd anotherUser
--vfs-cache-max-size int Max total size of objects in the cache. (default off)
If run with -vv
rclone will print the location of the file cache. The files are stored in the user cache file area which is OS dependent but can be controlled with --cache-dir
or setting the appropriate environment variable.
The cache has 4 different modes selected by --vfs-cache-mode
. The higher the cache mode the more compatible rclone becomes at the cost of using disk space.
-Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won’t get written back to the remote. However they will still be in the on disk cache.
+Note that files are written back to the remote only when they are closed so if rclone is quit or dies with open files then these won't get written back to the remote. However they will still be in the on disk cache.
-If using –vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every –vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
+If using --vfs-cache-max-size note that the cache may exceed this size for two reasons. Firstly because it is only checked every --vfs-cache-poll-interval. Secondly because open files cannot be evicted from the cache.
In this mode the cache will read directly from the remote and write directly to the remote without caching anything on disk.
This will mean some operations are not possible
-This is very similar to “off” except that files opened for read AND write will be buffered to disks. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
+This is very similar to "off" except that files opened for read AND write will be buffered to disk. This means that files opened for write will be a lot more compatible, but uses the minimal disk space.
These operations are not possible
In this mode files opened for read only are still read directly from the remote, write only and read/write files are buffered to disk first.
This mode should support all normal file system operations.
-If an upload fails it will be retried up to –low-level-retries times.
+If an upload fails it will be retried up to --low-level-retries times.
In this mode all reads and writes are buffered to and from disk. When a file is opened for read it will be downloaded in its entirety first.
This may be appropriate for your needs, or you may prefer to look at the cache backend which does a much more sophisticated job of caching, including caching directory hierarchies and chunks of files.
In this mode, unlike the others, when a file is written to the disk, it will be kept on the disk after it is written to the remote. It will be purged on a schedule according to --vfs-cache-max-age
.
This mode should support all normal file system operations.
-If an upload or download fails it will be retried up to –low-level-retries times.
+If an upload or download fails it will be retried up to --low-level-retries times.
Linux file systems are case-sensitive: two files can differ only by case, and the exact case must be used when opening a file.
Windows is not like most other operating systems supported by rclone. File systems in modern Windows are case-insensitive but case-preserving: although existing files can be opened using any case, the exact case used to create the file is preserved and available for programs to query. It is not allowed for two files in the same directory to differ only by case.
Usually file systems on macOS are case-insensitive. It is possible to make macOS file systems case-sensitive but that is not the default
-The “–vfs-case-insensitive” mount flag controls how rclone handles these two cases. If its value is “false”, rclone passes file names to the mounted file system as is. If the flag is “true” (or appears without a value on command line), rclone may perform a “fixup” as explained below.
+The "--vfs-case-insensitive" mount flag controls how rclone handles these two cases. If its value is "false", rclone passes file names to the mounted file system as is. If the flag is "true" (or appears without a value on command line), rclone may perform a "fixup" as explained below.
The user may specify a file name to open/delete/rename/etc with a case different than what is stored on mounted file system. If an argument refers to an existing file with exactly the same name, then the case of the existing file on the disk will be used. However, if a file name with exactly the same name is not found but a name differing only by case exists, rclone will transparently fixup the name. This fixup happens only when an existing file is requested. Case sensitivity of file names created anew by rclone is controlled by an underlying mounted file system.
-Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether “fixup” is performed to satisfy the target.
-If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: “true” on Windows and macOS, “false” otherwise. If the flag is provided without a value, then it is “true”.
+Note that case sensitivity of the operating system running rclone (the target) may differ from case sensitivity of a file system mounted by rclone (the source). The flag controls whether "fixup" is performed to satisfy the target.
+If the flag is not provided on command line, then its default value depends on the operating system where rclone runs: "true" on Windows and macOS, "false" otherwise. If the flag is provided without a value, then it is "true".
If you supply the parameter --auth-proxy /path/to/program
then rclone will use that program to generate backends on the fly which then are used to authenticate incoming requests. This uses a simple JSON based protocol with input on STDIN and output on STDOUT.
PLEASE NOTE: --auth-proxy
and --authorized-keys
cannot be used together, if --auth-proxy
is set the authorized keys option will be ignored.
There is an example program bin/test_proxy.py in the rclone source code.
-The program’s job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won’t use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
+The program's job is to take a user
and pass
on the input and turn those into the config for a backend on STDOUT in JSON format. This config will have any default parameters for the backend added, but it won't use configuration from environment variables or command line options - it is the job of the proxy program to make a complete config.
This config generated must have this extra parameter - _root
- root to use for the backend
And it may have this parameter - _obscure
- comma separated strings for parameters to obscure
If password authentication was used by the client, input to the proxy process (on STDIN) would look similar to this:
@@ -2634,8 +2637,8 @@ htpasswd -B htpasswd anotherUser
"host": "sftp.example.com" }
This would mean that an SFTP backend would be created on the fly for the user
and pass
/public_key
returned in the output to the host given. Note that since _obscure
is set to pass
, rclone will obscure the pass
parameter before creating the backend (which is required for sftp backends).
-The program can manipulate the supplied user
in any way, for example to make proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you’d probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don’t use pass
or public_key
. This also means that if a user’s password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
+The program can manipulate the supplied user
in any way, for example to make proxy to many different sftp backends, you could make the user
be user@example.com
and then set the host
to example.com
in the output and the user to user
. For security you'd probably want to restrict the host
to a limited list.
Note that an internal cache is keyed on user
so only use that for configuration, don't use pass
or public_key
. This also means that if a user's password or public-key is changed the cache will need to expire (which takes 5 mins) before it takes effect.
This can be used to build general purpose proxies to any kind of backend that rclone supports.
rclone serve webdav remote:path [flags]
Create new file or change file modification time.
Set the modification time on object(s) as specified by remote:path to have the current time.
-If remote:path does not exist then a zero sized object will be created unless the –no-create flag is provided.
-If –timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:
+If remote:path does not exist then a zero sized object will be created unless the --no-create flag is provided.
+If --timestamp is used then it will set the modification time to that time instead of the current time. Times may be specified as one of:
-Note that –timestamp is in UTC if you want local time then add the –localtime flag.
+Note that --timestamp is in UTC. If you want local time then add the --localtime flag.
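For example, to set an explicit modification time without creating a missing file (the timestamp shown is one accepted form):
rclone touch remote:path/file.txt --timestamp 2020-08-07T12:34:56 --no-create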
rclone touch remote:path [flags]
-h, --help help for touch
@@ -2737,8 +2740,8 @@ htpasswd -B htpasswd anotherUser
└── file5
1 directories, 5 files
-You can use any of the filtering options with the tree command (eg –include and –exclude). You can also use –fast-list.
-The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone’s short options.
+You can use any of the filtering options with the tree command (eg --include and --exclude). You can also use --fast-list.
+The tree command has many options for controlling the listing which are compatible with the tree command. Note that not all of them have short options as they conflict with rclone's short options.
rclone tree remote:path [flags]
-a, --all All files are listed (list . files too).
@@ -2768,7 +2771,7 @@ htpasswd -B htpasswd anotherUser
-rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn’t.
+rclone normally syncs or copies directories. However, if the source remote points to a file, rclone will just copy that file. The destination remote must point to a directory - rclone will give the error Failed to create file system for "remote:file": is a file not a directory
if it isn't.
For example, suppose you have a remote with a file in called test.jpg
, then you could copy just that file like this
rclone copy remote:test.jpg /tmp/download
The file test.jpg
will be placed inside /tmp/download
.
This refers to the local file system.
On Windows only \
may be used instead of /
in local paths only, non local paths must use /
.
-These paths needn’t start with a leading /
- if they don’t then they will be relative to the current directory.
+These paths needn't start with a leading /
- if they don't then they will be relative to the current directory.
This refers to a directory path/to/dir
on remote:
as defined in the config file (configured with rclone config
).
-On most backends this is refers to the same directory as remote:path/to/dir
and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading /
will refer to your “home” directory and paths with a leading /
will refer to the root.
+On most backends this refers to the same directory as remote:path/to/dir
and that format should be preferred. On a very small number of remotes (FTP, SFTP, Dropbox for business) this will refer to a different directory. On these, paths without a leading /
will refer to your "home" directory and paths with a leading /
will refer to the root.
This is an advanced form for creating remotes on the fly. backend
should be the name or prefix of a backend (the type
in the config file) and all the configuration for the backend should be provided on the command line (or in environment variables).
Here are some examples:
@@ -2806,11 +2809,11 @@ htpasswd -B htpasswd anotherUser
rclone copy 'Important files?' remote:backup
If you want to send a '
you will need to use "
, eg
rclone copy "O'Reilly Reviews" remote:backup
-The rules for quoting metacharacters are complicated and if you want the full details you’ll have to consult the manual page for your shell.
+The rules for quoting metacharacters are complicated and if you want the full details you'll have to consult the manual page for your shell.
If your names have spaces in you need to put them in "
, eg
rclone copy "E:\folder name\folder name\folder name" remote:backup
-If you are using the root directory on its own then don’t quote it (see #464 for why), eg
+If you are using the root directory on its own then don't quote it (see #464 for why), eg
rclone copy E:\ remote:backup
:
in the namesrclone uses :
to mark a remote name. This is, however, a valid filename component in non-Windows OSes. The remote name parser will only search for a :
up to the first /
so if you need to act on a file or directory like this then use the full path starting with a /
, or use ./
as a current directory prefix.
rclone sync /full/path/to/sync:me remote:path
Most remotes (but not all - see the overview) support server side copy.
-This means if you want to copy one folder to another then rclone won’t download all the files and re-upload them; it will instruct the server to copy them in place.
+This means if you want to copy one folder to another then rclone won't download all the files and re-upload them; it will instruct the server to copy them in place.
Eg
rclone copy s3:oldbucket s3:newbucket
Will copy the contents of oldbucket
to newbucket
without downloading and re-uploading.
-Remotes which don’t support server side copy will download and re-upload in this case.
-Server side copies are used with sync
and copy
and will be identified in the log when using the -v
flag. The move
command may also use them if remote doesn’t support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.
+Remotes which don't support server side copy will download and re-upload in this case.
+Server side copies are used with sync
and copy
and will be identified in the log when using the -v
flag. The move
command may also use them if remote doesn't support server side move directly. This is done by issuing a server side copy then a delete which is much quicker than a download and re-upload.
Server side copies will only be attempted if the remote names are the same.
This can be used when scripting to make aged backups efficiently, eg
rclone sync remote:current-backup remote:previous-backup
@@ -2833,24 +2836,24 @@ rclone sync /path/to/files remote:current-backup
Rclone has a number of options to control its behaviour.
Options that take parameters can have the values passed in two ways, --option=value
or --option value
. However boolean (true/false) options behave slightly differently to the other options in that --boolean
sets the option to true
and the absence of the flag sets it to false
. It is also possible to specify --boolean=false
or --boolean=true
. Note that --boolean false
is not valid - this is parsed as --boolean
and the false
is parsed as an extra command line argument for rclone.
-Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as “300ms”, “-1.5h” or “2h45m”. Valid time units are “ns”, “us” (or “µs”), “ms”, “s”, “m”, “h”.
+Options which use TIME use the go time parser. A duration string is a possibly signed sequence of decimal numbers, each with optional fraction and a unit suffix, such as "300ms", "-1.5h" or "2h45m". Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
Options which use SIZE use kByte by default. However, a suffix of b
for bytes, k
for kBytes, M
for MBytes, G
for GBytes, T
for TBytes and P
for PBytes may be used. These are the binary units, eg 1, 2**10, 2**20, 2**30 respectively.
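For example, to list only files between 50 kBytes and 1 GByte using these units (standard filter flags):
rclone ls remote:path --min-size 50k --max-size 1G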
When using sync
, copy
or move
any files which would have been overwritten or deleted are moved in their original hierarchy into this directory.
If --suffix
is set, then the moved files will have the suffix added to them. If there is a file with the same path (after the suffix has been added) in DIR, then it will be overwritten.
The remote in use must support server side move or copy and you must use the same remote as the destination of the sync. The backup directory must not overlap the destination directory.
For example
rclone sync /path/to/local remote:current --backup-dir remote:old
will sync /path/to/local
to remote:current
, but for any files which would have been updated or deleted will be stored in remote:old
.
-If running rclone from a script you might want to use today’s date as the directory name passed to --backup-dir
to store the old files, or you might want to pass --suffix
with today’s date.
+If running rclone from a script you might want to use today's date as the directory name passed to --backup-dir
to store the old files, or you might want to pass --suffix
with today's date.
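For example, a sketch using the shell's date command to build a dated directory name:
rclone sync /path/to/local remote:current --backup-dir remote:old/$(date +%Y-%m-%d)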
See --compare-dest
and --copy-dest
.
-Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn’t resolve or resolves to more than one IP address it will give an error.
+Local address to bind to for outgoing connections. This can be an IPv4 address (1.2.3.4), an IPv6 address (1234::789A) or host name. If the host name doesn't resolve or resolves to more than one IP address it will give an error.
This option controls the bandwidth limit. Limits can be specified in two ways: As a single limit, or as a timetable.
Single limits last for the duration of the session. To use a single limit, specify the desired bandwidth in kBytes/s, or use a suffix b|k|M|G. The default is 0
which means to not limit bandwidth.
For example, to limit bandwidth usage to 10 MBytes/s use --bwlimit 10M
-It is also possible to specify a “timetable” of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH...
where: WEEKDAY
is optional element. It could be written as whole world or only using 3 first characters. HH:MM
is an hour from 00:00 to 23:59.
+It is also possible to specify a "timetable" of limits, which will cause certain limits to be applied at certain times. To specify a timetable, format your entries as WEEKDAY-HH:MM,BANDWIDTH WEEKDAY-HH:MM,BANDWIDTH...
where: WEEKDAY
is an optional element. It can be written as the whole word or using only the first 3 characters. HH:MM
is an hour from 00:00 to 23:59.
An example of a typical timetable to avoid link saturation during daytime working hours could be:
--bwlimit "08:00,512 12:00,10M 13:00,512 18:00,30M 23:00,off"
In this example, the transfer bandwidth will be set to 512kBytes/sec every day at 8am. At noon, it will rise to 10Mbytes/s, and drop back to 512kBytes/sec at 1pm. At 6pm, the bandwidth limit will be set to 30MBytes/s, and at 11pm it will be completely disabled (full speed). Anything between 11pm and 8am will remain unlimited.
@@ -2861,50 +2864,50 @@ rclone sync /path/to/files remote:current-backup
--bwlimit "Mon-00:00,512 12:00,1M Sun-20:00,off"
Is equal to this:
--bwlimit "Mon-00:00,512Mon-12:00,1M Tue-12:00,1M Wed-12:00,1M Thu-12:00,1M Fri-12:00,1M Sat-12:00,1M Sun-12:00,1M Sun-20:00,off"
-Bandwidth limits only apply to the data transfer. They don’t apply to the bandwidth of the directory listings etc.
-Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let’s say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
+Bandwidth limits only apply to the data transfer. They don't apply to the bandwidth of the directory listings etc.
+Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let's say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625MByte/s so you would use a --bwlimit 0.625M
parameter for rclone.
On Unix systems (Linux, macOS, …) the bandwidth limiter can be toggled by sending a SIGUSR2
signal to rclone. This allows you to remove the limitations of a long running rclone transfer and to restore it back to the value specified with --bwlimit
quickly when needed. Assuming there is only one rclone instance running, you can toggle the limiter like this:
kill -SIGUSR2 $(pidof rclone)
If you configure rclone with a remote control then you can change the bwlimit dynamically:
rclone rc core/bwlimit rate=1M
Use this sized buffer to speed up file transfers. Each --transfer
will use this much memory for buffering.
When using mount
or cmount
each open file descriptor will use this much memory for buffering. See the mount documentation for more details.
Set to 0
to disable the buffering for the minimum memory usage.
-Note that the memory allocation of the buffers is influenced by the –use-mmap flag.
+Note that the memory allocation of the buffers is influenced by the --use-mmap flag.
If this flag is set then in a sync
, copy
or move
, rclone will do all the checks to see whether files need to be transferred before doing any of the transfers. Normally rclone would start running transfers as soon as possible.
This flag can be useful on IO limited systems where transfers interfere with checking.
Using this flag can use more memory as it effectively sets --max-backlog
to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.
The number of checkers to run in parallel. Checkers do the equality checking of files during a sync. For some storage systems (eg S3, Swift, Dropbox) this can take a significant amount of time so they are run in parallel.
The default is to run 8 checkers in parallel.
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check the file hash and size to determine if files are equal.
-This is useful when the remote doesn’t support setting modified time and a more accurate sync is desired than just checking the file size.
+This is useful when the remote doesn't support setting modified time and a more accurate sync is desired than just checking the file size.
This is very useful when transferring between remotes which store the same hash type on the object, eg Drive and Swift. For details of which remotes support which hash type see the table in the overview section.
Eg rclone --checksum sync s3:/bucket swift:/bucket
would run much quicker than without the --checksum
flag.
-When using this flag, rclone won’t update mtimes of remote files if they are incorrect as it would normally.
+When using this flag, rclone won't update mtimes of remote files if they are incorrect as it would normally.
When using sync
, copy
or move
DIR is checked in addition to the destination for files. If a file identical to the source is found that file is NOT copied from source. This is useful to copy just files that have changed since the last backup.
You must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
See --copy-dest
and --backup-dir
.
Specify the location of the rclone config file.
Normally the config file is in your home directory as a file called .config/rclone/rclone.conf
(or .rclone.conf
if created with an older version). If $XDG_CONFIG_HOME
is set it will be at $XDG_CONFIG_HOME/rclone/rclone.conf
.
If there is a file rclone.conf
in the same directory as the rclone executable it will be preferred. This file must be created manually for Rclone to use it; it will never be created automatically.
If you run rclone config file
you will see where the default location is for you.
Use this flag to override the config location, eg rclone --config=".myconfig" .config
.
Set the connection timeout. This should be in go time format which looks like 5s
for 5 seconds, 10m
for 10 minutes, or 3h30m
.
The connection timeout is the amount of time rclone will wait for a connection to go through to a remote object storage system. It is 1m
by default.
When using sync
, copy
or move
DIR is checked in addition to the destination for files. If a file identical to the source is found that file is server side copied from DIR to the destination. This is useful for incremental backup.
The remote in use must support server side copy and you must use the same remote as the destination of the sync. The compare directory must not overlap the destination directory.
See --compare-dest
and --backup-dir
.
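For example, an incremental copy against a previous full backup might look like this (remote paths are placeholders):
rclone copy /path/to/local remote:incremental --copy-dest remote:full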
Mode to run dedupe command in. One of interactive
, skip
, first
, newest
, oldest
, rename
. The default is interactive
. See the dedupe command for more information as to what these options mean.
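For example, to keep the newest of each group of duplicates without being prompted:
rclone dedupe --dedupe-mode newest remote:path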
This disables a comma separated list of optional features. For example to disable server side move and server side copy use:
--disable move,copy
The features can be put in any case.
@@ -2912,111 +2915,111 @@ rclone sync /path/to/files remote:current-backup
--disable help
See the overview features and optional features to get an idea of which feature does what.
This flag can be useful for debugging and in exceptional circumstances (eg Google Drive limiting the total volume of Server Side Copies to 100GB/day).
Do a trial run with no permanent changes. Use this to see what rclone would do without actually doing it. Useful when setting up the sync
command which deletes files in the destination.
-This specifies the amount of time to wait for a server’s first response headers after fully writing the request headers if the request has an “Expect: 100-continue” header. Not all backends support using this.
+This specifies the amount of time to wait for a server's first response headers after fully writing the request headers if the request has an "Expect: 100-continue" header. Not all backends support using this.
Zero means no timeout and causes the body to be sent immediately, without waiting for the server to approve. This time does not include the time to send the request header.
The default is 1s
. Set to 0
to disable.
By default, rclone will exit with return code 0 if there were no errors.
This option allows rclone to return exit code 9 if no files were transferred between the source and destination. This allows using rclone in scripts, and triggering follow-on actions if data was copied, or skipping if not.
NB: Enabling this option turns a usually non-fatal error into a potentially fatal one - please check and adjust your scripts accordingly!
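A minimal scripting sketch (paths are placeholders):
rclone copy /path/to/src remote:dst --error-on-no-transfer
if [ "$?" -eq 9 ]; then
    echo "nothing was transferred - skipping follow-on actions"
fi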
Add an HTTP header for all transactions. The flag can be repeated to add multiple headers.
If you want to add headers only for uploads use --header-upload
and if you want to add headers only for downloads use --header-download
.
This flag is supported for all HTTP based backends even those not supported by --header-upload
and --header-download
so may be used as a workaround for those with care.
rclone ls remote:test --header "X-Rclone: Foo" --header "X-LetMeIn: Yes"
Add an HTTP header for all download transactions. The flag can be repeated to add multiple headers.
rclone sync s3:test/src ~/dst --header-download "X-Amz-Meta-Test: Foo" --header-download "X-Amz-Meta-Test2: Bar"
See the GitHub issue here for currently supported backends.
Add an HTTP header for all upload transactions. The flag can be repeated to add multiple headers.
rclone sync ~/src s3:test/dst --header-upload "Content-Disposition: attachment; filename='cool.html'" --header-upload "X-Amz-Meta-Test: FooBar"
See the GitHub issue here for currently supported backends.
Using this option will cause rclone to ignore the case of the files when synchronizing so files will not be copied/synced when the existing filenames are the same, even if the casing is different.
-Normally rclone will check that the checksums of transferred files match, and give an error “corrupted on transfer” if they don’t.
+Normally rclone will check that the checksums of transferred files match, and give an error "corrupted on transfer" if they don't.
-You can use this option to skip that check. You should only use it if you have had the “corrupted on transfer” error message and you are sure you might want to transfer potentially corrupted data.
+You can use this option to skip that check. You should only use it if you have had the "corrupted on transfer" error message and you are sure you might want to transfer potentially corrupted data.
Using this option will make rclone unconditionally skip all files that exist on the destination, no matter the content of these files.
-While this isn’t a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
+While this isn't a generally recommended option, it can be useful in cases where your files change due to encryption. However, it cannot correct partial transfers in case a transfer was interrupted.
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the modification time. If --checksum
is set then it only checks the checksum.
It will also cause rclone to skip verifying the sizes are the same after transfer.
This can be useful for transferring files to and from OneDrive which occasionally misreports the size of image files (see #399 for more info).
Using this option will cause rclone to unconditionally upload all files regardless of the state of files on the destination.
Normally rclone would skip any files that have the same modification time and are the same size (or have the same checksum if using --checksum
).
Treat source and destination files as immutable and disallow modification.
With this option set, files will be created and deleted as requested, but existing files will never be updated. If an existing file does not match between the source and destination, rclone will give the error Source and destination exist but do not match: immutable file modified
.
Note that only commands which transfer files (e.g. sync
, copy
, move
) are affected by this behavior, and only modification is disallowed. Files may still be deleted explicitly (e.g. delete
, purge
) or implicitly (e.g. sync
, move
). Use copy --immutable
if it is desired to avoid deletion as well as modification.
This can be useful as an additional layer of protection for immutable or append-only data sets (notably backup archives), where modification implies corruption and should not be propagated.
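For example, to add new files to an append-only archive while refusing changes to existing ones:
rclone copy --immutable /path/to/archive remote:archive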
-During rmdirs it will not remove root directory, even if it’s empty.
-Log all of rclone’s output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
Note that if you are using the logrotate
program to manage rclone’s logs, then you should use the copytruncate
option as rclone doesn’t have a signal to rotate logs.
Comma separated list of log format options. date
, time
, microseconds
, longfile
, shortfile
, UTC
. The default is “date
,time
”.
+During rmdirs it will not remove root directory, even if it's empty.
+Log all of rclone's output to FILE. This is not active by default. This can be useful for tracking down problems with syncs in combination with the -v
flag. See the Logging section for more info.
Note that if you are using the logrotate
program to manage rclone's logs, then you should use the copytruncate
option as rclone doesn't have a signal to rotate logs.
Comma separated list of log format options. date
, time
, microseconds
, longfile
, shortfile
, UTC
. The default is "date
,time
".
This sets the log level for rclone. The default log level is NOTICE
.
DEBUG
is equivalent to -vv
. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.
INFO
is equivalent to -v
. It outputs information about each transfer and prints stats once a minute by default.
NOTICE
is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.
ERROR
is equivalent to -q
. It only outputs error messages.
This switches the log format to JSON for rclone. The fields of json log are level, msg, source, time.
This controls the number of low level retries rclone does.
A low level retry is used to retry a failing operation - typically one HTTP request. This might be uploading a chunk of a big file for example. You will see low level retries in the log with the -v
flag.
-This shouldn’t need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries
flag) quicker.
+This shouldn't need to be changed from the default in normal operations. However, if you get a lot of low level retries you may wish to reduce the value so rclone moves on to a high level retry (see the --retries
flag) quicker.
Disable low level retries with --low-level-retries 1
.
This is the maximum allowable backlog of files in a sync/copy/move queued for being checked or transferred.
This can be set arbitrarily large. It will only use memory when the queue is in use. Note that it will use in the order of N kB of memory when the backlog is in use.
Setting this large allows rclone to calculate how many files are pending more accurately, give a more accurate estimated finish time and make --order-by
work more accurately.
Setting this small will make rclone more synchronous to the listings of the remote which may be desirable.
Setting this to a negative number will make the backlog as large as possible.
This tells rclone not to delete more than N files. If that limit is exceeded then a fatal error will be generated and rclone will stop the operation in progress.
This modifies the recursion depth for all the commands except purge.
So if you do rclone --max-depth 1 ls remote:path
you will see only the files in the top level directory. Using --max-depth 2
means you will see all the files in first two directory levels and so on.
For historical reasons the lsd
command defaults to using a --max-depth
of 1 - you can override this with the command line flag.
You can use this command to disable recursion (with --max-depth 1
).
Note that if you use this with sync
and --delete-excluded
the files not recursed through are considered excluded and will be deleted on the destination. Test first with --dry-run
if you are not sure what will happen.
Rclone will stop scheduling new transfers when it has run for the duration specified.
Defaults to off.
When the limit is reached any existing transfers will complete.
-Rclone won’t exit with an error if the transfer limit is reached.
+Rclone won't exit with an error if the transfer limit is reached.
Rclone will stop transferring when it has reached the size specified. Defaults to off.
When the limit is reached all transfers will stop immediately.
Rclone will exit with exit code 8 if the transfer limit is reached.
This modifies the behavior of --max-transfer.
Defaults to --cutoff-mode=hard
.
Specifying --cutoff-mode=hard
will stop transferring immediately when Rclone reaches the limit.
Specifying --cutoff-mode=soft
will stop starting new transfers when Rclone reaches the limit.
Specifying --cutoff-mode=cautious
will try to prevent Rclone from reaching the limit.
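For example, to cap a sync at approximately 10G while letting transfers already in progress finish:
rclone sync /path/to/src remote:dst --max-transfer 10G --cutoff-mode=soft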
When checking whether a file has been modified, this is the maximum allowed time difference that a file can have and still be considered equivalent.
The default is 1ns
unless this is overridden by a remote. For example OS X only stores modification times to the nearest second so if you are reading and writing to an OS X filing system this will be 1s
by default.
This command line flag allows you to override that computed default.
When downloading files to the local backend above this size, rclone will use multiple threads to download the file (default 250M).
-Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE)
on unix or NTSetInformationFile
on Windows both of which takes no time) then each thread writes directly into the file at the correct place. This means that rclone won’t create fragmented or sparse files and there won’t be any assembly time at the end of the transfer.
+Rclone preallocates the file (using fallocate(FALLOC_FL_KEEP_SIZE)
on unix or NTSetInformationFile
on Windows both of which takes no time) then each thread writes directly into the file at the correct place. This means that rclone won't create fragmented or sparse files and there won't be any assembly time at the end of the transfer.
The number of threads used to download is controlled by --multi-thread-streams
.
Use -vv
if you wish to see info about the threads.
This will work with the sync
/copy
/move
commands and friends copyto
/moveto
. Multi thread downloads will be used with rclone mount
and rclone serve
if --vfs-cache-mode
is set to writes
or above.
NB that this only works for a local destination but will work with any source.
NB that multi thread copies are disabled for local to local copies as they are faster without unless --multi-thread-streams
is set explicitly.
NB on Windows using multi-thread downloads will cause the resulting files to be sparse. Use --local-no-sparse to disable sparse files (which may cause long delays at the start of downloads) or disable multi-thread downloads with --multi-thread-streams 0.
When using multi thread downloads (see above --multi-thread-cutoff
) this sets the maximum number of streams to use. Set to 0
to disable multi thread downloads (Default 4).
Exactly how many streams rclone uses for the download depends on the size of the file. To calculate the number of download streams Rclone divides the size of the file by the --multi-thread-cutoff
and rounds up, up to the maximum set with --multi-thread-streams
.
So if --multi-thread-cutoff 250MB and --multi-thread-streams 4 are in effect (the defaults), files up to 250MB are downloaded with 1 stream, files between 250MB and 500MB with 2 streams, between 500MB and 750MB with 3 streams, and larger files with 4 streams.
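To sketch the effect (file and paths hypothetical), a single large download with a smaller cutoff and more streams, with -vv to log the thread activity:
rclone copy remote:path/to/bigfile /path/to/dest --multi-thread-cutoff 100M --multi-thread-streams 8 -vv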
The --no-check-dest
can be used with move
or copy
and it causes rclone not to check the destination at all when copying files.
This means that:
--retries 1 is recommended otherwise you'll transfer everything again on a retry
This flag is useful to minimise the transactions if you know that none of the files are on the destination.
This is a specialized flag which should be ignored by most users!
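A sketch of the intended use (paths hypothetical) - seeding a destination known to be empty in a single pass:
rclone copy /path/to/source remote:dest --no-check-dest --retries 1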
Don't set Accept-Encoding: gzip. This means that rclone won't ask the server for compressed files automatically. Useful if you've set the server to return files with Content-Encoding: gzip but you uploaded compressed files.
There is no need to set this in normal operation, and doing so will decrease the network transfer efficiency of rclone.
The --no-traverse
flag controls whether the destination file system is traversed when using the copy
or move
commands. --no-traverse
is not compatible with sync
and will be ignored if you supply it with sync
.
If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse
will stop rclone listing the destination and save time.
However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.
See rclone copy for an example of how to use it.
Don't normalize unicode characters in filenames during the sync routine.
Sometimes, an operating system will store filenames containing unicode parts in their decomposed form (particularly macOS). Some cloud storage systems will then recompose the unicode, resulting in duplicate files if the data is ever copied back to a local filesystem.
Using this flag will disable that functionality, treating each unicode character as unique. For example, by default é and é will be normalized into the same character. With --no-unicode-normalization
they will be treated as unique characters.
When using this flag, rclone won't update modification times of remote files if they are incorrect as it would normally.
This can be used if the remote is being synced with another tool also (eg the Google Drive client).
The --order-by
flag controls the order in which files in the backlog are processed in rclone sync
, rclone copy
and rclone move
.
The order by string is constructed like this. The first part describes what aspect is being measured: size (order by the size of the files), mtime (order by the modification time) or name (order by the full path name).
The --order-by
flag does not do a separate pass over the data. This means that it may transfer some files out of the order specified if there are no files in the backlog, the source has not been fully scanned yet, or there are more than --max-backlog files in the backlog.
Rclone will do its best to transfer the best file it has so in practice this should not cause a problem. Think of --order-by
as being more of a best efforts flag rather than a perfect ordering.
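For example (paths hypothetical), to ask rclone to transfer the largest files in the backlog first:
rclone copy /path/to/source remote:dest --order-by size,descending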
This flag supplies a program which should supply the config password when run. This is an alternative to rclone prompting for the password or setting the RCLONE_CONFIG_PASS
variable.
The argument to this should be a command with a space separated list of arguments. If one of the arguments has a space in then enclose it in "
, if you want a literal "
in an argument then enclose the argument in "
and double the "
. See CSV encoding for more info.
Eg
rclone sync /path/to/files remote:current-backup --password-command echo "hello with ""quotes"" and space"
See the Configuration Encryption for more info.
See a Windows PowerShell example on the Wiki.
This flag makes rclone update the stats in a static block in the terminal providing a realtime overview of the transfer.
Any log messages will scroll above the static block. Log messages will push the static block down to the bottom of the terminal where it will stay.
Normally this is updated every 500mS but this period can be overridden with the --stats
flag.
This can be used with the --stats-one-line
flag for a simpler display.
Note: On Windows until this bug is fixed all non-ASCII characters will be replaced with .
when --progress
is in use.
This flag will limit rclone's output to error messages only.
Retry the entire sync if it fails this many times (default 3).
Some remotes can be unreliable and a few retries help pick up the files which didn't get transferred because of errors.
Disable retries with --retries 1
.
This sets the interval between each retry specified by --retries. The default is 0. Use 0 to disable.
Normally rclone will look at modification time and size of files to see if they are equal. If you set this flag then rclone will check only the size.
This can be useful transferring files from Dropbox which have been modified by the desktop sync client which doesn't set checksums or modification times in the same way as rclone.
Commands which transfer data (sync
, copy
, copyto
, move
, moveto
) will print data transfer stats at regular intervals to show their progress.
This sets the interval.
The default is 1m
. Use 0
to disable.
If you set the stats interval then all commands can show stats. This can be useful when running other commands, check
or mount
for example.
Stats are logged at INFO level by default which means they won't show at default log level NOTICE. Use --stats-log-level NOTICE or -v to make them show. See the Logging section for more info on log levels.
Note that on macOS you can send a SIGINFO (which is normally ctrl-T in the terminal) to make the stats print immediately.
By default, the --stats
output will truncate file names and paths longer than 40 characters. This is equivalent to providing --stats-file-name-length 40
. Use --stats-file-name-length 0
to disable any truncation of file names printed by stats.
Log level to show --stats output at. This can be DEBUG, INFO, NOTICE, or ERROR. The default is INFO. This means at the default level of logging which is NOTICE the stats won't show - if you want them to then use --stats-log-level NOTICE. See the Logging section for more info on log levels.
When this is specified, rclone condenses the stats into a single line showing the most important stats only.
When this is specified, rclone enables the single-line stats and prepends the display with a date string. The default is 2006/01/02 15:04:05 -
When this is specified, rclone enables the single-line stats and prepends the display with a user-supplied date string. The date string MUST be enclosed in quotes. Follow golang specs for date formatting syntax.
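For example (paths and format chosen for illustration - the format must follow the golang reference time):
rclone copy /path/to/source remote:dest --stats-one-line --stats-one-line-date-format '2006/01/02 15:04:05 - '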
By default, data transfer rates will be printed in bytes/second.
This option allows the data rate to be printed in bits/second.
Data transfer volume will still be reported in bytes.
The rate is reported as a binary unit, not SI unit. So 1 Mbit/s equals 1,048,576 bits/s and not 1,000,000 bits/s.
The default is bytes
.
When using sync
, copy
or move
any files which would have been overwritten or deleted will have the suffix added to them. If there is a file with the same path (after the suffix has been added), then it will be overwritten.
The remote in use must support server side move or copy and you must use the same remote as the destination of the sync.
This is for use with files to add the suffix in the current directory or with --backup-dir
. See --backup-dir
for more info.
For example
rclone sync /path/to/local/file remote:current --suffix .bak
will sync /path/to/local
to remote:current
, but any files which would have been updated or deleted will have .bak added.
When using --suffix, setting this causes rclone to put the SUFFIX before the extension of the files that it backs up rather than after.
So let's say we had --suffix -2019-01-01, without the flag file.txt would be backed up to file.txt-2019-01-01 and with the flag it would be backed up to file-2019-01-01.txt. This can be helpful to make sure the suffixed files can still be opened.
On capable OSes (not Windows or Plan9) send all log output to syslog.
This can be useful for running rclone in a script or rclone mount
.
If using --syslog
this sets the syslog facility (eg KERN
, USER
). See man syslog
for a list of possible facilities. The default facility is DAEMON
.
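For instance (remote and mount point hypothetical), a mount whose log output should go to syslog under the USER facility:
rclone mount remote:path /path/to/mountpoint --syslog --syslog-facility USER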
Limit HTTP transactions per second to this. Default is 0 which is used to mean unlimited transactions per second.
For example to limit rclone to 10 HTTP transactions per second use --tpslimit 10
, or to 1 transaction every 2 seconds use --tpslimit 0.5
.
Use this when the number of transactions per second from rclone is causing a problem with the cloud storage provider (eg getting you banned or rate limited).
This can be very useful for rclone mount
to control the behaviour of applications using it.
See also --tpslimit-burst
.
Max burst of transactions for --tpslimit
(default 1
).
Normally --tpslimit
will do exactly the number of transactions per second specified. However if you supply --tpslimit-burst
then rclone can save up some transactions from when it was idle giving a burst of up to the parameter supplied.
For example if you provide --tpslimit-burst 10
then if rclone has been idle for more than 10*--tpslimit
then it can do 10 transactions very quickly before they are limited again.
This may be used to increase performance of --tpslimit
without changing the long term average number of transactions per second.
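Putting the two flags together (paths hypothetical), this keeps the long term average at 10 transactions per second while allowing bursts of 10 after idle periods:
rclone copy /path/to/source remote:dest --tpslimit 10 --tpslimit-burst 10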
By default, rclone doesn't keep track of renamed files, so if you rename a file locally then sync it to a remote, rclone will delete the old file on the remote and upload a new copy.
If you use this flag, and the remote supports server side copy or server side move, and the source and destination have a compatible hash, then this will track renames during sync
operations and perform renaming server-side.
Files will be matched by size and hash - if both match then a rename will be considered.
If the destination does not support server-side copy or move, rclone will fall back to the default behaviour and log an error level message to the console. Note: Encrypted destinations are not supported by --track-renames
.
Note that --track-renames
is incompatible with --no-traverse
and that it uses extra memory to keep track of all the rename candidates.
Note also that --track-renames
is incompatible with --delete-before
and will select --delete-after
instead of --delete-during
.
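A minimal sketch (paths hypothetical) of a sync that renames server-side where possible:
rclone sync /path/to/source remote:dest --track-renames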
This option changes the matching criteria for --track-renames
to match by any combination of modtime, hash, size. Matching by size is always enabled no matter what option is selected here. This also means that it enables --track-renames
support for encrypted destinations. If nothing is specified, the default option is matching by hashes.
This option allows you to specify when files on your destination are deleted when you sync folders.
Specifying the value --delete-before
will delete all files present on the destination, but not on the source before starting the transfer of any new or updated files. This uses two passes through the file systems, one for the deletions and one for the copies.
Specifying --delete-during
will delete files while checking and uploading files. This is the fastest option and uses the least memory.
Specifying --delete-after
(the default value) will delay deletion of files until all new/updated files have been successfully transferred. The files to be deleted are collected in the copy pass then deleted after the copy pass has completed successfully. The files to be deleted are held in memory so this mode may use more memory. This is the safest mode as it will only delete files if there have been no errors subsequent to that. If there have been errors before the deletions start then you will get the message not deleting files as there were IO errors
.
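For example (paths hypothetical), to minimise memory use on a very large sync at the cost of deleting while copying:
rclone sync /path/to/source remote:dest --delete-during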
When doing anything which involves a directory listing (eg sync
, copy
, ls
- in fact nearly every command), rclone normally lists a directory and processes it before using more directory lists to process any subdirectories. This can be parallelised and works very quickly using the least amount of memory.
However, some remotes have a way of listing all files beneath a directory in one (or a small number) of transactions. These tend to be the bucket based remotes (eg S3, B2, GCS, Swift, Hubic).
If you use the --fast-list
flag then rclone will use this method for listing directories. This will have the following consequences for the listing:
rclone should always give identical results with and without --fast-list
.
If you pay for transactions and can fit your entire sync listing into memory then --fast-list is recommended. If you have a very big sync to do then don't use --fast-list otherwise you will run out of memory.
If you use --fast-list on a remote which doesn't support it, then rclone will just ignore it.
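As an illustration (bucket name hypothetical), recursively listing a bucket based remote using the minimum number of transactions:
rclone ls remote:bucket --fast-list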
This sets the IO idle timeout. If a transfer has started but then becomes idle for this long it is considered broken and disconnected.
The default is 5m
. Set to 0
to disable.
The number of file transfers to run in parallel. It can sometimes be useful to set this to a smaller number if the remote is giving a lot of timeouts or bigger if you have lots of bandwidth and a fast remote.
The default is to run 4 file transfers in parallel.
This forces rclone to skip any files which exist on the destination and have a modified time that is newer than the source file.
This can be useful when transferring to a remote which doesn't support mod times directly (or when using --use-server-modtime to avoid extra API calls) as it is more accurate than a --size-only check and faster than using --checksum.
If an existing destination file has a modification time equal (within the computed modify window precision) to the source file's, it will be updated if the sizes are different. If --checksum is set then rclone will update the destination if the checksums differ too.
If an existing destination file is older than the source file then it will be updated if the size or checksum differs from the source file.
On remotes which don't support mod time directly (or when using --use-server-modtime) the time checked will be the uploaded time. This means that if uploading to one of these remotes, rclone will skip any files which exist on the destination and have an uploaded time that is newer than the modification time of the source file.
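Combining the two flags (paths hypothetical), a sync that trusts upload times on the remote and only uploads newer local files:
rclone sync /path/to/source remote:dest --update --use-server-modtime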
If this flag is set then rclone will use anonymous memory allocated by mmap on Unix based platforms and VirtualAlloc on Windows for its transfer buffers (size controlled by --buffer-size
). Memory allocated like this does not go on the Go heap and can be returned to the OS immediately when it is finished with.
If this flag is not set then rclone will allocate and free the buffers using the Go memory allocator which may use more memory as memory pages are returned less aggressively to the OS.
It is possible this does not work well on all platforms so it is disabled by default; in the future it may be enabled by default.
Some object-store backends (e.g. Swift, S3) do not preserve file modification times (modtime). On these backends, rclone stores the original modtime as additional metadata on the object. By default it will make an API call to retrieve the metadata when the modtime is needed by an operation.
Use this flag to disable the extra API call and rely instead on the server's modified time. In cases such as a local to remote sync using --update, knowing the local file is newer than the time it was last uploaded to the remote is sufficient. In those cases, this flag can speed up the process and reduce the number of API calls necessary.
Using this flag on a sync operation without also using --update
would cause all files modified at any time other than the last upload time to be uploaded again, which is probably not what you want.
With -v
rclone will tell you about each file that is transferred and a small number of significant events.
With -vv
rclone will become very verbose telling you about every file it considers and transfers. Please send bug reports with a log with this setting.
Prints the version number
The outgoing SSL/TLS connections rclone makes can be controlled with these options. For example this can be very useful with the HTTP or WebDAV backends. Rclone HTTP servers have their own set of configuration for SSL/TLS which you can find in their documentation.
This loads the PEM encoded certificate authority certificate and uses it to verify the certificates of the servers rclone connects to.
If you have generated certificates signed with a local CA then you will need this flag to connect to servers using those certificates.
This loads the PEM encoded client side certificate.
This is used for mutual TLS authentication.
The --client-key
flag is required too when using this.
This loads the PEM encoded client side private key used for mutual TLS authentication. Used in conjunction with --client-cert
.
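A sketch of mutual TLS against such a server (remote name and certificate file names hypothetical):
rclone ls remote: --ca-cert ca.pem --client-cert client.pem --client-key client.key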
--no-check-certificate controls whether a client verifies the server's certificate chain and host name. If --no-check-certificate is true, TLS accepts any certificate presented by the server and any host name in that certificate. In this mode, TLS is susceptible to man-in-the-middle attacks.
This option defaults to false
.
This should be used only for testing.
Your configuration file contains information for logging in to your cloud services. This means that you should keep your .rclone.conf
file in a secure location.
If you are in an environment where that isn't possible, you can add a password to your configuration. This means that you will have to supply the password every time you start rclone.
To add a password to your rclone configuration, execute rclone config
.
>rclone config
Current remotes:
One useful example of this is using the passwordstore
application to retrieve the password:
export RCLONE_PASSWORD_COMMAND="pass rclone/config"
If the passwordstore
password manager holds the password for the rclone configuration, using the script method means the password is primarily protected by the passwordstore
system, and is never embedded in the clear in scripts, nor available for examination using the standard commands available. It is quite possible with long running rclone sessions for copies of passwords to be innocently captured in log files or terminal scroll buffers, etc. Using the script method of supplying the password enhances the security of the config password considerably.
If you are running rclone inside a script, unless you are using the --password-command method, you might want to disable password prompts. To do that, pass the parameter --ask-password=false to rclone. This will make rclone fail instead of asking for a password if RCLONE_CONFIG_PASS doesn't contain a valid password, and --password-command has not been supplied.
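For example, in a non-interactive script (paths hypothetical):
rclone sync /path/to/source remote:dest --ask-password=false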
These options are useful when developing or debugging rclone. There are also some more remote specific options which aren't documented here which are used for testing. These start with remote name eg --drive-test-option - see the docs for the remote in question.
Write CPU profile to file. This can be analysed with go tool pprof
.
The --dump
flag takes a comma separated list of flags to dump info about.
Note that some headers including Accept-Encoding
as shown may not be correct in the request and the response may not show Content-Encoding
if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
The available flags are:
Dump HTTP headers with Authorization:
lines removed. May still contain sensitive info. Can be very verbose. Useful for debugging only.
Use --dump auth
if you do want the Authorization:
headers.
Dump HTTP headers and bodies - may contain sensitive info. Can be very verbose. Useful for debugging only.
Note that the bodies are buffered in memory so don't use this for enormous files.
Like --dump bodies
but dumps the request bodies and the response headers. Useful for debugging download problems.
Like --dump bodies
but dumps the response bodies and the request headers. Useful for debugging upload problems.
Dump HTTP headers - will contain sensitive info such as Authorization:
headers - use --dump headers
to dump without Authorization:
headers. Can be very verbose. Useful for debugging only.
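For example (remote name hypothetical), to debug a directory listing with redacted headers:
rclone lsd remote: --dump headers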
Dump the filters to the output. Useful to see exactly what include and exclude options are filtering on.
This dumps a list of the running go-routines at the end of the command to standard output.
This dumps a list of the open files at the end of the command. It uses the lsof command to do that so you'll need that installed to use it.
Write memory profile to file. This can be analysed with go tool pprof
.
For the filtering options see the filtering section.
4 - File not found
5 - Temporary error (one that more retries might fix) (Retry errors)
6 - Less serious errors (like 461 errors from dropbox) (NoRetry errors)
7 - Fatal error (one that more retries won't fix, like account suspended) (Fatal errors)
8 - Transfer exceeded - limit set by --max-transfer reached
9 - Operation successful, but no files transferred
rclone uses the environment variables HTTP_PROXY, HTTPS_PROXY and NO_PROXY (or the lowercase versions thereof).
HTTPS_PROXY
takes precedence over HTTP_PROXY
for https requests.The filters are applied for the copy
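A sketch (proxy URL hypothetical) of routing rclone through a proxy for one shell session:
export HTTPS_PROXY=http://proxy.example.com:3128
rclone ls remote: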
, sync
, move
, ls
, lsl
, md5sum
, sha1sum
, size
, delete
and check
operations. Note that purge
does not obey the filters.
Each path as it passes through rclone is matched against the include and exclude rules like --include
, --exclude
, --include-from
, --exclude-from
, --filter
, or --filter-from
. The simplest way to try them out is using the ls
command, or --dry-run
together with -v
. --filter-from
, --exclude-from
, --include-from
, --files-from
, --files-from-raw
understand -
as a file name to mean read from standard input.
The patterns used to match files for inclusion or exclusion are based on "file globs" as used by the unix shell.
If the pattern starts with a / then it only matches at the top level of the directory tree, relative to the root of the remote (not necessarily the root of the local drive). If it doesn't start with / then it is matched starting at the end of the path, but it will only match a complete path element:
file.jpg - matches "file.jpg"
- matches "directory/file.jpg"
- doesn't match "afile.jpg"
With --ignore-case
potato - matches "potato"
- matches "POTATO"
Note also that rclone filter globs can only be used in one of the filter command line flags, not in the specification of the remote, so rclone copy "remote:dir*.jpg" /path/to/dir won't work - what is required is rclone --include "*.jpg" copy remote:dir /path/to/dir
Directories
Rclone keeps track of directories that could match any file patterns.
Eg if you add the include rule
/a/*.jpg
Rclone will synthesize the directory include rule
/a/
If you put any rules which end in /
then it will only match directories.
Directory matches are only used to optimise directory access patterns - you must still match the files that you want to match. Directory matches won't optimise anything on bucket based remotes (eg s3, swift, google compute storage, b2) which don't have a concept of directory.
Differences between rsync and rclone patterns
Rclone implements bash style {a,b,c} glob matching which rsync doesn't.
Rclone always does a wildcard match so \
must always escape a \
.
How the rules are used
Rclone maintains a combined list of include rules and exclude rules.
Add a single include rule with --include
.
This flag can be repeated. See above for the order the flags are processed in.
Eg --include *.{png,jpg}
to include all png
and jpg
files in the backup and no others.
This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.
--include-from
- Read include patterns from fileAdd include rules from a file.
This flag can be repeated. See above for the order the flags are processed in.
Then use as --include-from include-file.txt
. This will sync all jpg
, png
files and file2.avi
.
This is useful if you have a lot of rules.
This adds an implicit --exclude * at the very end of the filter list. This means you can mix --include and --include-from with the other filters (eg --exclude) but you must include all the files you want in the include statement. If this doesn't provide enough flexibility then you must use --filter-from.
--filter
- Add a file-filtering ruleThis can be used to add a single include or exclude rule. Include rules start with +
and exclude rules start with -
. A special rule called !
can be used to clear the existing rules.
This flag can be repeated. See above for the order the flags are processed in.
Rclone will traverse the file system if you use --files-from
, effectively using the files in --files-from
as a set of filters. Rclone will not error if any of the files are missing.
If you use --no-traverse
as well as --files-from
then rclone will not traverse the destination file system, it will find each file individually using approximately 1 API call. This can be more efficient for small lists of files.
This option can be repeated to read from more than one file. These are read in the order that they are placed on the command line.
Paths within the --files-from file will be interpreted as starting with the root specified in the command. Leading / characters are ignored. See --files-from-raw if you need the input to be processed in a raw manner.
For example, suppose you had files-from.txt
with this content:
# comment
file1.jpg
subdir/file2.jpg
This will transfer these files only (if they exist)
/home/me/pics/file1.jpg → remote:pics/file1.jpg
/home/me/pics/subdir/file2.jpg → remote:pics/subdir/file2.jpg
To take a more complicated example, let's say you had a few files you want to back up regularly with these absolute paths:
/home/user1/important
/home/user1/dir/file
/home/user2/stuff
To copy these you'd find a common subdirectory - in this case /home and put the remaining files in files-from.txt with or without leading /, eg
user1/important
user1/dir/file
user2/stuff
/home/user2/stuff → remote:backup/home/user2/stuff
--files-from-raw
- Read list of source-file names without any processing
This option is the same as --files-from
with the only difference being that the input is read in a raw manner. This means that lines with leading/trailing whitespace and lines starting with ;
or #
are read without any processing. rclone lsf has a compatible format that can be used to export file lists from remotes, which can then be used as an input to --files-from-raw
.
--min-size - Don't transfer any file smaller than this
This option controls the minimum size file which will be transferred. This defaults to kBytes
but a suffix of k
, M
, or G
can be used.
For example --min-size 50k
means no files smaller than 50kByte will be transferred.
--max-size - Don't transfer any file larger than this
This option controls the maximum size file which will be transferred. This defaults to kBytes
but a suffix of k
, M
, or G
can be used.
For example --max-size 1G
means no files larger than 1GByte will be transferred.
--max-age - Don't transfer any file older than this
This option controls the maximum age of files to transfer. Give in seconds or with a suffix of:
ms
- Milliseconds
s - Seconds
m - Minutes
h - Hours
d - Days
w - Weeks
M - Months
y - Years
For example --max-age 2d
means no files older than 2 days will be transferred.
This can also be an absolute time in one of these formats
--min-age - Don't transfer any file younger than this
This option controls the minimum age of files to transfer. Give in seconds or with a suffix (see --max-age
for list of suffixes)
For example --min-age 2d
means no files younger than 2 days will be transferred.
--delete-excluded
- Delete files on dest excluded from sync
You can exclude dir3
from sync by running the following command:
rclone sync --exclude-if-present .ignore dir1 remote:backup
Currently only one filename is supported, i.e. --exclude-if-present should not be used multiple times.
Rclone can serve a web based GUI (graphical user interface). This is somewhat experimental at the moment so things may be subject to change.
Run this command in a terminal and rclone will download and then display the GUI in a web browser.
When you run the rclone rcd --rc-web-gui
this is what happens
If rclone is run with the --rc
flag then it starts an http server which can be used to remote control rclone using its API.
If you just want to run a remote control then see the rcd command.
Flag to start the http server to listen on remote requests
IPaddress:Port or :Port to bind server to. (default "localhost:5572")
SSL PEM key (concatenation of certificate and CA certificate)
Client certificate authority to verify clients with
htpasswd file - if not provided no authentication is done
SSL PEM Private key
Maximum size of request header (default 4096)
User name for authentication.
Password for authentication.
Realm for authentication (default "rclone")
Timeout for server reading data (default 1h0m0s)
Timeout for server writing data (default 1h0m0s)
Enable the serving of remote objects via the HTTP interface. This means objects will be accessible at http://127.0.0.1:5572/ by default, so you can browse to http://127.0.0.1:5572/ or http://127.0.0.1:5572/* to see a listing of the remotes. Objects may be requested from remotes using this syntax http://127.0.0.1:5572/[remote:path]/path/to/object
Default Off.
Path to local files to serve on the HTTP server.
If this is set then rclone will serve the files in that directory. It will also open the root in the web browser if specified. This is for implementing browser based GUIs for rclone functions.
If --rc-user
or --rc-pass
is set then the URL that is opened will have the authorization in the URL in the http://user:pass@localhost/
style.
Default Off.
Enable OpenMetrics/Prometheus compatible endpoint at /metrics
.
Default Off.
Set this flag to serve the default web gui on the same port as rclone.
Default Off.
Set the allowed Access-Control-Allow-Origin for rc requests.
Can be used with --rc-web-gui if the rclone is running on different IP than the web-gui.
Default is IP address on which rc is running.
Set the URL to fetch the rclone-web-gui files from.
Default https://api.github.com/repos/rclone/rclone-webui-react/releases/latest.
Set this flag to check and update rclone-webui-react from the rc-web-fetch-url.
Default Off.
Set this flag to force update rclone-webui-react from the rc-web-fetch-url.
Default Off.
Set this flag to disable opening browser automatically when using web-gui.
Default Off.
Expire finished async jobs older than DURATION (default 60s).
Interval duration to check for expired async jobs (default 10s).
By default rclone will require authorisation to have been set up on the rc interface in order to use any methods which access any rclone remotes. Eg operations/list is denied as it involves creating a remote as is sync/copy.
If this is set then no authorisation will be required on the server to use these methods. The alternative is to use --rc-user
and --rc-pass
and use these credentials in the request.
Default Off.
This takes the following parameters
Note that this is the direct equivalent of using this "backend" command:
rclone backend noop . -o echo=yes -o blue path1 path2
Note that arguments must be preceded by the "-a" flag
See the backend command for more information.
Authentication is required for this call.
Ensure the specified file chunks are cached on disk.
The chunks= parameter specifies the file chunks to check. It takes a comma separated list of array slice indices. The slice indices are similar to Python slices: start[:end]
start is the 0 based chunk number from the beginning of the file to fetch inclusive. end is 0 based chunk number from the beginning of the file to fetch exclusive. Both values can be negative, in which case they count from the back of the file. The value "-5:" represents the last 5 chunks of a file.
Some valid examples are: ":5,-5:" -> the first and last five chunks "0,-2" -> the first and the second last chunk "0:10" -> the first ten chunks
Any parameter with a key that starts with "file" can be used to specify files to fetch, eg
rclone rc cache/fetch chunks=0 file=hello file2=home/goodbye
File names will automatically be encrypted when a crypt remote is used on top of the cache.
This takes the following parameters
This takes the following parameters
See the config password command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
The format of the parameter is exactly the same as passed to --bwlimit except only one bandwidth may be specified.
In either case "rate" is returned as a human readable string, and "bytesPerSecond" is returned as a number.
This tells the go runtime to do a garbage collection run. It isn't necessary to call this normally, but it can be useful for debugging memory problems.
This returns list of stats groups currently in memory.
Returns the following values:
"checking": an array of names of currently active file checks
Values for "transferring", "checking" and "lastError" are only assigned if data is available. The value for "eta" is null if an eta cannot be determined.
This deletes the entire stats group
Parameters
This shows the current version of go and the go runtime
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
If no mountType is provided, the priority is given as follows: 1. mount 2. cmount 3. mount2
This takes the following parameters
The mount types are strings like "mount", "mount2", "cmount" and can be passed to mount/mount as the mountType parameter.
Eg
rclone rc mount/types
Authentication is required for this call.
rclone allows Linux, FreeBSD, macOS and Windows to mount any of Rclone's cloud storage systems as a file system with FUSE.
This takes the following parameters
This takes the following parameters
The result is as returned from rclone about --json
See the about command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
See the cleanup command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
Authentication is required for this call.
This takes the following parameters
This takes the following parameters
See the delete command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
See the deletefile command command for more information on the above.
Authentication is required for this call.
This takes the following parameters
This returns info about the remote passed in;
operations/list: List the given remote and path in JSON format
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- opt - a dictionary of options to control the listing (optional)
- recurse - If set recurse directories
operations/mkdir: Make a destination directory or container
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
See the mkdir command command for more information on the above.
Authentication is required for this call.
operations/movefile: Move a file from source remote to destination remote
This takes the following parameters
- srcFs - a remote name string eg "drive:" for the source
- srcRemote - a path within that remote eg "file.txt" for the source
- dstFs - a remote name string eg "drive2:" for the destination
- dstRemote - a path within that remote eg "file2.txt" for the destination
Authentication is required for this call.
operations/publiclink: Create or retrieve a public link to the given file or folder.
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
Returns
operations/purge: Remove a directory or container and all of its contents
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
See the purge command command for more information on the above.
Authentication is required for this call.
operations/rmdir: Remove an empty directory or container
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
See the rmdir command command for more information on the above.
Authentication is required for this call.
operations/rmdirs: Remove all the empty directories in the path
This takes the following parameters
- fs - a remote name string eg "drive:"
- remote - a path within that remote eg "dir"
- leaveRoot - boolean, set to true not to delete the root
See the rmdirs command command for more information on the above.
operations/size: Count the number of bytes and files in remote
This takes the following parameters
- fs - a remote name string eg "drive:path/to/dir"
Returns
sync/copy: copy a directory from source remote to destination remote
This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
See the copy command command for more information on the above.
Authentication is required for this call.
sync/move: move a directory from source remote to destination remote
This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
- deleteEmptySrcDirs - delete empty src directories if set
See the move command command for more information on the above.
sync/sync: sync a directory from source remote to destination remote
This takes the following parameters
- srcFs - a remote name string eg "drive:src" for the source
- dstFs - a remote name string eg "drive:dst" for the destination
See the sync command command for more information on the above.
Authentication is required for this call.
rclone rc vfs/refresh
Otherwise pass directories in as dir=path. Any parameter key starting with dir will refresh that directory, eg
rclone rc vfs/refresh dir=home/junk dir2=data/misc
If the parameter recursive=true is given the whole directory tree will get refreshed. This refresh will use --fast-list if enabled.
Accessing the remote control via HTTP
Rclone implements a simple HTTP based protocol.
Each endpoint takes a JSON object and returns a JSON object or an error. The JSON objects are essentially a map of string names to values.
All calls must be made using POST.
The input objects can be supplied using URL parameters, POST parameters or by supplying "Content-Type: application/json" and a JSON blob in the body. There are examples of these below using curl.
The response will be a JSON blob in the body of the response. This is formatted to be reasonably human readable.
Error returns
If an error occurs then there will be an HTTP error status (eg 500) and the body of the response will contain a JSON encoded error object, eg
The keys in the error response are:
- error - error string
- input - the input parameters to the call
- status - the HTTP status code
- path - the path of the call
The server implements basic CORS support and allows all origins for that. The response to a preflight OPTIONS request will echo the requested "Access-Control-Request-Headers" back.
curl -X POST 'http://localhost:5572/rc/noop?potato=1&sausage=2'
Response
{
	"potato": "1",
	"sausage": "2"
}
Note that curl doesn't return errors to the shell unless you use the -f option
$ curl -f -X POST 'http://localhost:5572/rc/error?potato=1&sausage=2'
curl: (22) The requested URL returned error: 400 Bad Request
$ echo $?
If you use the --rc
flag this will also enable the use of the go profiling tools on the same port.
To use these, first install go.
Debugging memory use
To profile rclone's memory use you can run:
go tool pprof -web http://localhost:5572/debug/pprof/heap
This should open a page in your browser showing what is using what memory.
You can also use the -text
flag to produce a textual summary
go tool pprof http://localhost:5572/debug/pprof/mutex
See the net/http/pprof docs for more info on how to use the profiling and for a general overview see the Go team's blog post on profiling go programs.
The profiling hook is zero overhead unless it is used.
Each cloud storage system is slightly different. Rclone attempts to provide a unified interface to them, but some underlying differences show through.
The cloud storage system supports various hash types of the objects. The hashes are used when transferring data as an integrity check and can be specifically used with the --checksum
flag in syncs and in the check
command.
To verify checksums when transferring between cloud storage systems they must support a common hash type.
† Note that Dropbox supports its own custom hash. This is an SHA256 sum of all the 4MB block SHA256s.
‡ SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH.
†† WebDAV supports hashes when used with Owncloud and Nextcloud only.
††† WebDAV supports modtimes when used with Owncloud and Nextcloud only.
‡‡ Microsoft OneDrive Personal supports SHA1 hashes, whereas OneDrive for business and SharePoint server support Microsoft's own QuickXorHash.
‡‡‡ Mail.ru uses its own modified SHA1 hash
The cloud storage system supports setting modification times on objects. If it does then this enables using the modification times as part of the sync. If not then only the size will be checked by default, though the MD5SUM can be checked with the --checksum
flag.
All cloud storage systems support some kind of date on the object and these will be set when transferring from the cloud storage system.
If a cloud storage system is case sensitive then it is possible to have two files which differ only in case, eg file.txt and FILE.txt. If a cloud storage system is case insensitive then that isn't possible.
This can cause problems when syncing between a case insensitive system and a case sensitive system. The symptom of this is that no matter how many times you run the sync it never completes fully.
The local filesystem and SFTP may or may not be case sensitive depending on OS.
-Most of the time this doesn’t cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
+Most of the time this doesn't cause any problems as people tend to avoid files whose name differs only by case even on case sensitive systems.
If a cloud storage system allows duplicate files then it can have two objects with the same name.
This confuses rclone greatly when syncing - use the rclone dedupe
command to rename or remove duplicates.
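+A minimal sketch, assuming a remote called remote: on a backend that allows duplicates (such as Google Drive):
+rclone dedupe --dedupe-mode newest remote:path/with/duplicates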
This transformation is reversed when downloading a file or parsing rclone
arguments. For example, a file named my file?.txt
uploaded to Onedrive will be displayed as my file?.txt
on the console, but stored as my file？.txt
(the ?
gets replaced by the similar looking ？
character) on Onedrive. The reverse transformation allows reading a file unusual/name.txt
from Google Drive, by passing the name unusual／name.txt
(the /
needs to be replaced by the similar looking ／
character) on the command line.
The table below shows the characters that are replaced by default.
-When a replacement character is found in a filename, this character will be escaped with the ‛
character to avoid ambiguous file names. (e.g. a file named ␀.txt
would shown as ‛␀.txt
)
+When a replacement character is found in a filename, this character will be escaped with the ‛
character to avoid ambiguous file names. (e.g. a file named ␀.txt
would be shown as ‛␀.txt
)
Each cloud storage backend can use a different set of characters, which will be specified in the documentation for each backend.
-To take a specific example, the FTP backend’s default encoding is
+To take a specific example, the FTP backend's default encoding is
--ftp-encoding "Slash,Del,Ctl,RightSpace,Dot"
-However, let’s say the FTP server is running on Windows and can’t have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set which are
+However, let's say the FTP server is running on Windows and can't have any of the invalid Windows characters in file names. You are backing up Linux servers to this FTP server which do have those characters in file names. So you would add the Windows set which are
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
to the existing ones, giving:
Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del
This can be specified using the --ftp-encoding
flag or using an encoding
parameter in the config file.
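+As an illustrative sketch only (ftp-remote: is a hypothetical remote name), the flag form of the combined set above would be:
+rclone copy /path/to/linux/files ftp-remote: --ftp-encoding "Slash,LtGt,DoubleQuote,Colon,Question,Asterisk,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot,Del"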
-Or let’s say you have a Windows server but you want to preserve *
and ?
, you would then have this as the encoding (the Windows encoding minus Asterisk
and Question
).
+Or let's say you have a Windows server but you want to preserve *
and ?
, you would then have this as the encoding (the Windows encoding minus Asterisk
and Question
).
Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot
This can be specified using the --local-encoding
flag or using an encoding
parameter in the config file.
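+A sketch of the equivalent config file entry (the remote name is hypothetical):
+[windows-share]
+type = local
+encoding = Slash,LtGt,DoubleQuote,Colon,Pipe,BackSlash,Ctl,RightSpace,RightPeriod,InvalidUtf8,Dot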
This deletes a directory quicker than just deleting all the files in the directory.
-† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don’t actually have a quicker way of deleting files other than deleting them individually.
+† Note Swift, Hubic, and Tardigrade implement this in order to delete directory markers but they don't actually have a quicker way of deleting files other than deleting them individually.
‡ StreamUpload is not supported with Nextcloud
-Used when copying an object to and from the same remote. This known as a server side copy so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn’t support Move
directly.
-If the server doesn’t support Copy
directly then for copy operations the file is downloaded then re-uploaded.
+Used when copying an object to and from the same remote. This is known as a server side copy, so you can copy a file without downloading it and uploading it again. It is used if you use rclone copy
or rclone move
if the remote doesn't support Move
directly.
+If the server doesn't support Copy
directly then for copy operations the file is downloaded then re-uploaded.
-Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move
if the server doesn’t support DirMove
.
-If the server isn’t capable of Move
then rclone simulates it with Copy
then delete. If the server doesn’t support Copy
then rclone will download the file and re-upload it.
+Used when moving/renaming an object on the same remote. This is known as a server side move of a file. This is used in rclone move
if the server doesn't support DirMove
.
+If the server isn't capable of Move
then rclone simulates it with Copy
then delete. If the server doesn't support Copy
then rclone will download the file and re-upload it.
-This is used to implement rclone move
to move a directory if possible. If it isn’t then it will use Move
on each file (which falls back to Copy
then download and upload - see Move
section).
+This is used to implement rclone move
to move a directory if possible. If it isn't then it will use Move
on each file (which falls back to Copy
then download and upload - see Move
section).
This is used for emptying the trash for a remote by rclone cleanup
.
-If the server can’t do CleanUp
then rclone cleanup
will return an error.
+If the server can't do CleanUp
then rclone cleanup
will return an error.
The remote supports a recursive list to list all the contents beneath a directory quickly. This enables the --fast-list
flag to work. See the rclone docs for more details.
-Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat
.
+Some remotes allow files to be uploaded without knowing the file size in advance. This allows certain operations to work without spooling the file to local disk first, e.g. rclone rcat
.
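+For example, to stream data straight to a remote without spooling it locally first (remote: is a placeholder):
+echo "hello world" | rclone rcat remote:path/to/file.txt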
-Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don’t have an account on the particular cloud provider.
+Sets the necessary permissions on a file or folder and prints a link that allows others to access them, even if they don't have an account on the particular cloud provider.
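+A typical invocation (remote and path are placeholders):
+rclone link remote:path/to/file.txt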
This is used to fetch quota information from the remote, like bytes used/free/quota and bytes used in the trash.
This is also used to return the space used, available for rclone mount
.
-If the server can’t do About
then rclone about
will return an error.
+If the server can't do About
then rclone about
will return an error.
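+For example (remote: is a placeholder):
+rclone about remote: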
The remote supports empty directories. See Limitations for details. Most Object/Bucket based remotes do not support this.
These flags are available for every command. They control the backends and may be set in the config file.
@@ -6070,6 +6073,7 @@ Showing nodes accounting for 1537.03kB, 100% of 1537.03kB total
      --pcloud-client-id string               Pcloud App Client Id
      --pcloud-client-secret string           Pcloud App Client Secret
      --pcloud-encoding MultiEncoder          This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
+     --pcloud-hostname string                Hostname to connect to. (default "api.pcloud.com")
      --pcloud-root-folder-id string          Fill in for rclone to use a non root folder as its starting point. (default "d0")
      --premiumizeme-encoding MultiEncoder    This sets the encoding for the backend. (default Slash,DoubleQuote,BackSlash,Del,Ctl,InvalidUtf8,Dot)
      --putio-encoding MultiEncoder           This sets the encoding for the backend. (default Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot)
@@ -6290,7 +6294,7 @@ y/e/d> y
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to fichier (1Fichier).
-Your API Key, get it from https://1fichier.com/console/params.pl
Here are the advanced options specific to fichier (1Fichier).
-If you want to download a shared folder, add this parameter
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
rclone copy /home/source remote:source
Here are the standard options specific to alias (Alias for an existing remote).
-Remote or path to alias. Can be “myremote:path/to/dir”, “myremote:bucket”, “myremote:” or “/local/path”.
+Remote or path to alias. Can be "myremote:path/to/dir", "myremote:bucket", "myremote:" or "/local/path".
Amazon Drive, formerly known as Amazon Cloud Drive, is a cloud storage service run by Amazon for consumers.
-Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don’t already have your own set of keys you will not be able to use rclone with Amazon Drive.
+Important: rclone supports Amazon Drive only if you have your own set of API keys. Unfortunately the Amazon Drive developer program is now closed to new entries so if you don't already have your own set of keys you will not be able to use rclone with Amazon Drive.
For the history on why rclone no longer has a set of Amazon Drive API keys see the forum.
If you happen to know anyone who works at Amazon then please ask them to re-instate rclone into the Amazon Drive developer program - thanks!
The initial setup for Amazon Drive involves getting a token from Amazon which you need to do in your browser. rclone config
walks you through it.
-The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google’s very secure App Engine environment and doesn’t store any credentials which pass through it.
-Since rclone doesn’t currently have its own Amazon Drive credentials so you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
-Note also if you are not using Amazon’s auth_url
and token_url
, (ie you filled in something for those) then if setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize
will not work.
+The configuration process for Amazon Drive may involve using an oauth proxy. This is used to keep the Amazon credentials out of the source code. The proxy runs in Google's very secure App Engine environment and doesn't store any credentials which pass through it.
+Since rclone doesn't currently have its own Amazon Drive credentials, you will either need to have your own client_id
and client_secret
with Amazon Drive, or use a third party oauth proxy in which case you will need to enter client_id
, client_secret
, auth_url
and token_url
.
+Note also if you are not using Amazon's auth_url
and token_url
, (ie you filled in something for those) then if setting up on a remote machine you can only use the copying the config method of configuration - rclone authorize
will not work.
Here is an example of how to make a remote called remote
. First run:
rclone config
This will guide you through an interactive setup process:
@@ -6479,7 +6483,7 @@ y/e/d> y
To copy a local directory to an Amazon Drive directory called backup
rclone copy /home/source remote:backup
-Amazon Drive doesn’t allow modification times to be changed via the API so these won’t be accurate or used for syncing.
+Amazon Drive doesn't allow modification times to be changed via the API so these won't be accurate or used for syncing.
It does store MD5SUMs so for a more accurate sync, you can use the --checksum
flag.
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-Any files you delete with rclone will end up in the trash. Amazon don’t provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon’s apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
+Any files you delete with rclone will end up in the trash. Amazon don't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Amazon's apps or via the Amazon Drive website. As of November 17, 2016, files are automatically deleted by Amazon from the trash after 30 days.
Using with non .com Amazon accounts
-Let’s say you usually use amazon.co.uk
. When you authenticate with rclone it will take you to an amazon.com
page to log in. Your amazon.co.uk
email and password should work here just fine.
+Let's say you usually use amazon.co.uk
. When you authenticate with rclone it will take you to an amazon.com
page to log in. Your amazon.co.uk
email and password should work here just fine.
Here are the standard options specific to amazon cloud drive (Amazon Drive).
-Amazon Application Client ID.
Amazon Application Client Secret.
Here are the advanced options specific to amazon cloud drive (Amazon Drive).
-Auth server URL. Leave blank to use Amazon’s.
+Auth server URL. Leave blank to use Amazon's.
-Token server url. leave blank to use Amazon’s.
+Token server url. Leave blank to use Amazon's.
Checkpoint for internal polling (debug).
Additional time per GB to wait after a failed complete upload to see if it appears.
Sometimes Amazon Drive gives an error when a file has been fully uploaded but the file appears anyway after a little while. This happens sometimes for files over 1GB in size and nearly every time for files bigger than 10GB. This parameter controls the time rclone waits for the file to appear.
The default value for this parameter is 3 minutes per GB, so by default it will wait 3 minutes for every GB uploaded to see if the file appears.
You can disable this feature by setting it to 0. This may cause conflict errors as rclone retries the failed upload but the file will most likely appear correctly eventually.
These values were determined empirically by observing lots of uploads of big files for a range of file sizes.
-Upload with the “-v” flag to see more info about what rclone is doing in this situation.
+Upload with the "-v" flag to see more info about what rclone is doing in this situation.
Files >= this size will be downloaded via their tempLink.
-Files this size or more will be downloaded via their “tempLink”. This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn’t need to be changed.
-To download files above this threshold, rclone requests a “tempLink” which downloads the file through a temporary URL directly from the underlying S3 storage.
+Files this size or more will be downloaded via their "tempLink". This is to work around a problem with Amazon Drive which blocks downloads of files bigger than about 10GB. The default for this is 9GB which shouldn't need to be changed.
+To download files above this threshold, rclone requests a "tempLink" which downloads the file through a temporary URL directly from the underlying S3 storage.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
-Note that Amazon Drive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+Note that Amazon Drive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Amazon Drive has rate limiting so you may notice errors in the sync (429 errors). rclone will automatically retry the sync up to 3 times by default (see --retries
flag) which should hopefully work around this problem.
Amazon Drive has an internal limit of file sizes that can be uploaded to the service. This limit is not officially published, but all files larger than this will fail.
At the time of writing (Jan 2016) this is in the area of 50GB per file. This means that larger files are likely to fail.
@@ -6810,17 +6814,17 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d>
-This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
-For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is “dirty”. By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
+For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
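+A sketch of that pattern (s3:bucket is a placeholder):
+rclone copy --update --use-server-modtime /path/to/source s3:bucket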
The modified time is stored as metadata on the object as X-Amz-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
If the modification time needs to be updated rclone will attempt to perform a server side copy to update the modification time if the object can be copied in a single part. In the case the object is larger than 5GB or is in Glacier or Glacier Deep Archive storage the object will be uploaded rather than copied.
S3 allows any valid UTF-8 string as a key.
-Invalid UTF-8 bytes will be replaced, as they can’t be used in XML.
+Invalid UTF-8 bytes will be replaced, as they can't be used in XML.
The following characters are replaced since these are problematic when dealing with the REST API:
-The encoding will also encode these file names as they don’t seem to work with the SDK properly:
+The encoding will also encode these file names as they don't seem to work with the SDK properly:
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.
Large files (bigger than the limit in --b2-upload-cutoff
) which are uploaded in chunks will store their SHA1 on the object as X-Bz-Info-large_file_sha1
as recommended by Backblaze.
For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.
-Sources which don’t support SHA1, in particular crypt
will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).
+Sources which don't support SHA1, in particular crypt
will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).
Files sizes below --b2-upload-cutoff
will always have an SHA1 regardless of the source.
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about --transfers 32
though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of --transfers 4
is definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most --transfers
of these in use at any moment, so this sets the upper limit on the memory used.
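+For example (b2:bucket is a placeholder; with files over the 200 MB cutoff this could use up to 32 x 96 MB of buffer memory):
+rclone sync /path/to/source b2:bucket --transfers 32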
-When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a “hard delete” of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
+When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a "hard delete" of files with the --b2-hard-delete
flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the --b2-versions
flag.
-NB Note that --b2-versions
does not work with crypt at the moment #1627. Using –backup-dir with rclone is the recommended way of working around this.
+NB Note that --b2-versions
does not work with crypt at the moment #1627. Using --backup-dir with rclone is the recommended way of working around this.
If you wish to remove all the old versions then you can use the rclone cleanup remote:bucket
command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, eg rclone cleanup remote:bucket/path/to/stuff
.
Note that cleanup
will remove partially uploaded files from the bucket if they are more than a day old.
When you purge
a bucket, the current and the old versions will be deleted then the bucket will be deleted.
-Clean up all the old versions and show that they’ve gone.
+Clean up all the old versions and show that they've gone.
$ rclone -q cleanup b2:cleanup-test
$ rclone -q ls b2:cleanup-test
@@ -8721,7 +8725,7 @@ $ rclone -q --b2-versions ls b2:cleanup-test
16 one-v2016-07-04-141003-000.txt
15 one-v2016-07-02-155621-000.txt
Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
-Note that when using --b2-versions
no file write operations are permitted, so you can’t upload files or delete them.
+Note that when using --b2-versions
no file write operations are permitted, so you can't upload files or delete them.
Rclone supports generating file share links for private B2 buckets. They can either be for a file for example:
./rclone link B2:bucket/path/to/file.txt
@@ -8737,7 +8741,7 @@ https://f002.backblazeb2.com/file/bucket/path/folder/file3?Authorization=xxxxxxx
Here are the standard options specific to b2 (Backblaze B2).
-Account ID or Application Key ID
Application Key
Permanently delete files on remote removal, otherwise hide files.
Here are the advanced options specific to b2 (Backblaze B2).
-Endpoint for the service. Leave blank normally.
A flag string for X-Bz-Test-Mode header for debugging.
This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:
These will be set in the “X-Bz-Test-Mode” header which is documented in the b2 integrations checklist.
+These will be set in the "X-Bz-Test-Mode" header which is documented in the b2 integrations checklist.
-Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can’t upload files or delete them.
+Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them.
Cutoff for switching to chunked upload.
-Files above this size will be uploaded in chunks of “–b2-chunk-size”.
+Files above this size will be uploaded in chunks of "--b2-chunk-size".
This value should be set no larger than 4.657GiB (== 5GB).
Upload chunk size. Must fit in memory.
-When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might a maximum of “–transfers” chunks in progress at once. 5,000,000 Bytes is the minimum size.
+When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of "--transfers" chunks in progress at once. 5,000,000 Bytes is the minimum size.
Disable checksums for large (> upload cutoff) files
Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. This is probably only useful for a public bucket. Leave blank if you want to use the endpoint provided by Backblaze.
Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
To copy a local directory to an Box directory called backup
rclone copy /home/source remote:backup
-If you have an “Enterprise” account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, “Account” Tab, and then set the password in the “Authentication” field.
+If you have an "Enterprise" account type with Box with single sign on (SSO), you need to create a password to use Box with rclone. This can be done at your Enterprise Box account by going to Settings, "Account" Tab, and then set the password in the "Authentication" field.
Once you have done this, you can set up your Enterprise Box account using the same procedure detailed above, using the password you have just set.
According to the box docs:
@@ -8920,7 +8924,7 @@ y/e/d> y
This means that if you
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
For files above 50MB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 8MB so increasing --transfers
will increase memory use.
So if the folder you want rclone to use has a URL which looks like https://app.box.com/folder/11xxxxxxxxx8
in the browser, then you use 11xxxxxxxxx8
as the root_folder_id
in the config.
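+A sketch of the resulting config entry (the remote name is hypothetical and the ID is the one from the URL above):
+[box-work]
+type = box
+root_folder_id = 11xxxxxxxxx8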
Here are the standard options specific to box (Box).
-Box App Client Id. Leave blank normally.
Box App Client Secret Leave blank normally.
Box App config.json location Leave blank normally.
Here are the advanced options specific to box (Box).
-Fill in for rclone to use a non root folder as its starting point.
Cutoff for switching to multipart upload (>= 50MB).
Max number of times to try committing a multipart file.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
-Note that Box is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
-Box file names can’t have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
.
+Note that Box is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+Box file names can't have the \
character in. rclone maps this to and from an identical looking unicode equivalent \
.
Box only supports filenames up to 255 characters in length.
The cache
remote wraps another existing remote and stores file structure and its data for long running tasks like rclone mount
.
-The cache backend code is working but it currently doesn’t have a maintainer so there are outstanding bugs which aren’t getting fixed.
+The cache backend code is working but it currently doesn't have a maintainer so there are outstanding bugs which aren't getting fixed.
The cache backend is due to be phased out in favour of the VFS caching layer eventually which is more tightly integrated into rclone.
-Until this happens we recommend only using the cache backend if you find you can’t work without it. There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn’t needed in those scenarios any more.
+Until this happens we recommend only using the cache backend if you find you can't work without it. There are many docs online describing the use of the cache backend to minimize API hits and by-and-large these are out of date and the cache backend isn't needed in those scenarios any more.
To get started you just need to have an existing remote which can be configured with cache
.
Here is an example of how to make a remote called test-cache
. First run:
cache-tmp-wait-time
passes and the file is next in line, rclone move
is used to move the file to the cloud provider
-If the file is being read through cache
when it’s actually deleted from the temporary path then cache
will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though)
+If the file is being read through cache
when it's actually deleted from the temporary path then cache
will simply swap the source to the cloud provider without interrupting the reading (small blip can happen though)
flag.
When the Plex server is configured to only accept secure connections, it is possible to use .plex.direct
URLs to ensure certificate validation succeeds. These URLs are used by Plex internally to connect to the Plex server securely.
The format for these URLs is the following:
https://ip-with-dots-replaced.server-hash.plex.direct:32400/
-The ip-with-dots-replaced
part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1
becomes 127-0-0-1
.
+The ip-with-dots-replaced
part can be any IPv4 address, where the dots have been replaced with dashes, e.g. 127.0.0.1
becomes 127-0-0-1
.
To get the server-hash
part, the easiest way is to visit
https://plex.tv/api/resources?includeHttps=1&X-Plex-Token=your-plex-token
This page will list all the available Plex servers for your account with at least one .plex.direct
link for each. Copy one URL and replace the IP address with the desired address. This can be used as the plex_url
value.
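+For example, for a server reachable on 192.168.1.10 and a made-up server hash, the value would look something like:
+plex_url = https://192-168-1-10.0123456789abcdef0123456789abcdef.plex.direct:32400/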
-–dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache
backend, it will manage its own entries based on the configured time.
+--dir-cache-time controls the first layer of directory caching which works at the mount layer. Being an independent caching mechanism from the cache
backend, it will manage its own entries based on the configured time.
To avoid getting in a scenario where dir cache has obsolete data and cache would have the correct one, try to set --dir-cache-time
to a lower time than --cache-info-age
. Default values are already configured in this way.
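+A sketch of a mount respecting that ordering (test-cache: as configured above; the values are illustrative):
+rclone mount test-cache: /mnt/media --dir-cache-time 30s --cache-info-age 1m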
There are a couple of issues with Windows mount
functionality that still require some investigation. It should be considered experimental thus far as fixes come in for this OS.
Future iterations of the cache backend will make use of the pooling functionality of the cloud provider to synchronize and at the same time make writing through it more tolerant to failures.
There are a couple of enhancements in track to add these but in the meantime there is a valid concern that the expiring cache listings can lead to cloud provider throttles or bans due to repeated queries on it for very large mounts.
-Some recommendations: - don’t use a very small interval for entry information (--cache-info-age
) - while writes aren’t yet optimised, you can still write through cache
which gives you the advantage of adding the file in the cache at the same time if configured to do so.
+Some recommendations: - don't use a very small interval for entry information (--cache-info-age
) - while writes aren't yet optimised, you can still write through cache
which gives you the advantage of adding the file in the cache at the same time if configured to do so.
Future enhancements:
One common scenario is to keep your data encrypted in the cloud provider using the crypt
remote. crypt
uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.
There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache
-During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we’re downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
+During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider which makes it think we're downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
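+A sketch of that recommended ordering as config entries (all remote names are hypothetical):
+[gdrive]
+type = drive
+[gcache]
+type = cache
+remote = gdrive:media
+[gcrypt]
+type = crypt
+remote = gcache:encrypted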
cache
can not differentiate between relative and absolute paths for the wrapped remote. Any path given in the remote
config setting and on the command line will be passed to the wrapped remote as is, but for storing the chunks on disk the path will be made relative by removing any leading /
character.
-This behavior is irrelevant for most backend types, but there are backends where a leading /
changes the effective directory, e.g. in the sftp
backend paths starting with a /
are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin
and sftp:/bin
will share the same cache folder, even if they represent a different directory on the SSH server.
+This behavior is irrelevant for most backend types, but there are backends where a leading /
changes the effective directory, e.g. in the sftp
backend paths starting with a /
are relative to the root of the SSH server and paths without are relative to the user home directory. As a result sftp:bin
and sftp:/bin
will share the same cache folder, even if they represent a different directory on the SSH server.
Cache supports the new --rc
mode in rclone and can be remote controlled through the following end points: By default, the listener is disabled if you do not add the flag.
Purge a remote from the cache backend. Supports either a directory or a file. It supports both encrypted and unencrypted file names if cache is wrapped by crypt.
Params: - remote = path to remote (required) - withData = true/false to delete cached data (chunks) as well (optional, false by default)
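+For example, assuming the rc server is enabled and the end point is cache/expire (the path is illustrative):
+rclone rc cache/expire remote=path/to/sub/folder/ withData=true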
Here are the standard options specific to cache (Cache a remote).
-Remote to cache. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
+Remote to cache. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
The URL of the Plex server
The username of the Plex user
The password of the Plex user
NB Input to this must be obscured - see rclone obscure.
The size of a chunk (partial file data).
Use lower numbers for slower connections. If the chunk size is changed, any downloaded chunks will be invalid and cache-chunk-path will need to be cleared or unexpected EOF errors will occur.
How long to cache file structure information (directory listings, file size, times etc). If all write operations are done through the cache then you can safely make this value very large as the cache store will also be updated in real time.
The total size that the chunks can take up on the local disk.
If the cache exceeds this value then it will start to delete the oldest chunks until it goes under this value.
Here are the advanced options specific to cache (Cache a remote).
-The plex token for authentication - auto set normally
Skip all certificate verification when connecting to the Plex server
Directory to store file structure metadata DB. The remote name is used as the DB file name.
Directory to cache chunk files.
Path to where partial file data (chunks) are stored locally. The remote name is appended to the final path.
-This config follows the “–cache-db-path”. If you specify a custom location for “–cache-db-path” and don’t specify one for “–cache-chunk-path” then “–cache-chunk-path” will use the same path as “–cache-db-path”.
+This config follows the "--cache-db-path". If you specify a custom location for "--cache-db-path" and don't specify one for "--cache-chunk-path" then "--cache-chunk-path" will use the same path as "--cache-db-path".
Clear all the cached data for this remote on start.
-How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over “cache-chunk-total-size” too often then try to lower this value to force it to perform cleanups more often.
+How often should the cache perform cleanups of the chunk storage. The default value should be ok for most people. If you find that the cache goes over "cache-chunk-total-size" too often then try to lower this value to force it to perform cleanups more often.
How many times to retry a read from a cache storage.
-Since reading from a cache stream is independent from downloading file data, readers can get to a point where there’s no more data in the cache. Most of the times this can indicate a connectivity issue if cache isn’t able to provide file data anymore.
+Since reading from a cache stream is independent from downloading file data, readers can get to a point where there's no more data in the cache. Most of the time this can indicate a connectivity issue if cache isn't able to provide file data anymore.
For really slow connections, increase this to a point where the stream is able to provide data, but your experience will be very stuttery.
How many workers should run in parallel to download chunks.
Higher values will mean more parallel processing (better CPU needed) and more concurrent requests on the cloud provider. This impacts several aspects like the cloud provider API limits, more stress on the hardware that rclone runs on but it also means that streams will be more fluid and data will be available much more faster to readers.
Note: If the optional Plex integration is enabled then this setting will adapt to the type of reading performed and the value specified here will be used as a maximum number of workers to use.
@@ -9455,10 +9459,10 @@ chunk_total_size = 10G
Disable the in-memory cache for storing chunks during streaming.
By default, cache will keep file data during streaming in RAM as well to provide it to readers as fast as possible.
-This transient data is evicted as soon as it is read and the number of chunks stored doesn’t exceed the number of workers. However, depending on other settings like “cache-chunk-size” and “cache-workers” this footprint can increase if there are parallel streams too (multiple files being read at the same time).
+This transient data is evicted as soon as it is read and the number of chunks stored doesn't exceed the number of workers. However, depending on other settings like "cache-chunk-size" and "cache-workers" this footprint can increase if there are parallel streams too (multiple files being read at the same time).
If the hardware permits it, use this feature to provide an overall better performance during streaming but it can also be disabled if RAM is not available on the local machine.
Limits the number of requests per second to the source FS (-1 to disable)
This setting places a hard limit on the number of requests per second that cache will be doing to the cloud provider remote and try to respect that value by setting waits between reads.
-If you find that you’re getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.
+If you find that you're getting banned or limited on the cloud provider through cache and know that a smaller number of requests per second will allow you to work with it then you can use this setting for that.
A good balance of all the other settings should make this setting useless but it is available to set for more special cases.
NOTE: This will limit the number of requests during streams but other API calls to the cloud provider like directory listings will still pass.
Cache file data on writes through the FS
If you need to read files immediately after you upload them through cache you can enable this flag to have their data stored in the cache store at the same time during upload.
Directory to keep temporary files until they are uploaded.
This is the path that cache will use as temporary storage for new files that need to be uploaded to the cloud provider.
Specifying a value will enable this feature. Without it, it is completely disabled and files will be uploaded directly to the cloud provider.
@@ -9497,7 +9501,7 @@ chunk_total_size = 10G
How long should files be stored in local cache before being uploaded
This is the duration that a file must wait in the temporary location cache-tmp-upload-path before it is selected for upload.
Note that only one file is uploaded at a time and it can take longer to start the upload if a queue formed for this purpose.
@@ -9507,7 +9511,7 @@ chunk_total_size = 10G
How long to wait for the DB to be available - 0 is unlimited
Only one process can have the DB open at any one time, so rclone waits for this duration for the DB to become available before it gives an error.
If you set it to 0 then it will wait forever.
@@ -9522,7 +9526,7 @@ chunk_total_size = 10G
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the “rclone backend” command for more info on how to pass options and arguments.
+See the "rclone backend" command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Print stats on the cache backend in JSON format.
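+For example, against the test-cache: remote configured above:
+rclone backend stats test-cache: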
@@ -9530,7 +9534,7 @@ chunk_total_size = 10G
The chunker
overlay transparently splits large files into smaller chunks during upload to wrapped remote and transparently assembles them back when the file is downloaded. This allows you to effectively overcome size limits imposed by storage providers.
To use it, first set up the underlying remote following the configuration instructions for that remote. You can also use a local pathname instead of a remote.
-First check your chosen remote is working - we’ll call it remote:path
here. Note that anything inside remote:path
will be chunked and anything outside won’t. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
.
+First check your chosen remote is working - we'll call it remote:path
here. Note that anything inside remote:path
will be chunked and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
.
Now configure chunker
using rclone config
. We will call this one overlay
to separate it from the remote
itself.
No remotes found - make a new one
n) New remote
@@ -9590,7 +9594,7 @@ y/e/d> y
In normal use, make sure the remote has a :
in. If you specify the remote without a :
then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files
then rclone will chunk stuff in that directory. If you use a remote of name
then rclone will put files in a directory called name
in the current directory.
-When rclone starts a file upload, chunker checks the file size. If it doesn’t exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process.
+When rclone starts a file upload, chunker checks the file size. If it doesn't exceed the configured chunk size, chunker will just pass the file to the wrapped remote. If a file is large, chunker will transparently cut data in pieces with temporary names and stream them one by one, on the fly. Each data chunk will contain the specified number of bytes, except for the last one which may have less data. If file size is unknown in advance (this is called a streaming upload), chunker will internally create a temporary copy, record its size and repeat the above process.
When upload completes, temporary chunk files are finally renamed. This scheme guarantees that operations can be run in parallel and look from outside as atomic. A similar method with hidden temporary chunks is used for other operations (copy/move/rename etc). If an operation fails, hidden chunks are normally destroyed, and the target composite file stays intact.
When a composite file download is requested, chunker transparently assembles it by concatenating data chunks in order. As the split is trivial one could even manually concatenate data chunks together to obtain the original content.
When the list
rclone command scans a directory on wrapped remote, the potential chunk files are accounted for, grouped and assembled into composite directory entries. Any temporary chunks are hidden.
md5
- MD5 hashsum of composite file (if present)sha1
- SHA1 hashsum (if present)There is no field for composite file name as it’s simply equal to the name of meta object on the wrapped remote. Please refer to respective sections for details on hashsums and modified time handling.
-There is no field for composite file name as it’s simply equal to the name of meta object on the wrapped remote. Please refer to respective sections for details on hashsums and modified time handling.
You can disable meta objects by setting the meta format option to none
. In this mode chunker will scan directory for all files that follow configured chunk name format, group them by detecting chunks with the same base name and show group names as virtual composite files. This method is more prone to missing chunk errors (especially missing last chunk) than format with metadata enabled.
Chunker supports hashsums only when a compatible metadata is present. Hence, if you choose metadata format of none
, chunker will report hashsum as UNSUPPORTED
.
-Please note that by default metadata is stored only for composite files. If a file is smaller than configured chunk size, chunker will transparently redirect hash requests to wrapped remote, so support depends on that. You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn’t support it.
+Please note that by default metadata is stored only for composite files. If a file is smaller than configured chunk size, chunker will transparently redirect hash requests to wrapped remote, so support depends on that. You will see the empty string as a hashsum of requested type for small files if the wrapped remote doesn't support it.
Many storage backends support MD5 and SHA1 hash types, as does chunker. With chunker you can choose one or another but not both. MD5 is set by default as the most supported type. Since chunker keeps hashes for composite files and falls back to the wrapped remote hash for non-chunked ones, we advise you to choose the same hash type as supported by wrapped remote so that your file listings look coherent.
-If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with md5all
or sha1all
. These two modes guarantee given hash for all files. If wrapped remote doesn’t support it, chunker will then add metadata to all files, even small. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at expense of sidecar meta objects by setting eg. chunk_type=sha1all
to force hashsums and chunk_size=1P
to effectively disable chunking.
+If your storage backend does not support MD5 or SHA1 but you need consistent file hashing, configure chunker with md5all
or sha1all
. These two modes guarantee given hash for all files. If wrapped remote doesn't support it, chunker will then add metadata to all files, even small. However, this can double the amount of small files in storage and incur additional service charges. You can even use chunker to force md5/sha1 support in any other remote at expense of sidecar meta objects by setting eg. hash_type=sha1all
to force hashsums and chunk_size=1P
to effectively disable chunking.
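+A sketch of such a hash-forcing overlay (remote names are illustrative; note the option is spelled hash_type in the chunker config):
+[hashforce]
+type = chunker
+remote = remote:bucket
+hash_type = sha1all
+chunk_size = 1P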
Normally, when a file is copied to chunker controlled remote, chunker will ask the file source for compatible file hash and revert to on-the-fly calculation if none is found. This involves some CPU overhead but provides a guarantee that given hashsum is available. Also, chunker will reject a server-side copy or move operation if source and destination hashsum types are different resulting in the extra network bandwidth, too. In some rare cases this may be undesired, so chunker provides two optional choices: sha1quick
and md5quick
. If the source does not support primary hash type and the quick mode is enabled, chunker will try to fall back to the secondary type. This will save CPU and bandwidth but can result in empty hashsums at destination. Beware of consequences: the sync
command will revert (sometimes silently) to time/size comparison if compatible hashsums between source and target are not found.
Chunker stores modification times using the wrapped remote so support depends on that. For a small non-chunked file the chunker overlay simply manipulates modification time of the wrapped remote file. For a composite file with metadata chunker will get and set modification time of the metadata object on the wrapped remote. If file is chunked but metadata format is none
then chunker will use modification time of the first data chunk.
If rclone gets killed during a long operation on a big composite file, hidden temporary chunks may stay in the directory. They will not be shown by the list
command but will eat up your account quota. Please note that the deletefile
command deletes only active chunks of a file. As a workaround, you can use the remote of the wrapped file system to see them. An easy way to get rid of hidden garbage is to copy the littered directory somewhere using the chunker remote and purge the original directory. The copy
command will copy only active chunks while the purge
will remove everything including garbage.
Chunker requires wrapped remote to support server side move
(or copy
+ delete
) operations, otherwise it will explicitly refuse to start. This is because it internally renames temporary chunk files to their final names when an operation completes successfully.
-Chunker encodes chunk number in file name, so with default name_format
setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit base file name without path by 255 characters. Using rclone’s crypt remote as a base file system limits file name by 143 characters. Thus, maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change name format to eg. *.rcc##
and save 10 characters (provided at most 99 chunks per file).
+Chunker encodes chunk number in file name, so with default name_format
setting it adds 17 characters. Also chunker adds 7 characters of temporary suffix during operations. Many file systems limit base file name without path by 255 characters. Using rclone's crypt remote as a base file system limits file name by 143 characters. Thus, maximum name length is 231 for most files and 119 for chunker-over-crypt. A user in need can change name format to eg. *.rcc##
and save 10 characters (provided at most 99 chunks per file).
Note that a move implemented using the copy-and-delete method may incur double charging with some cloud storage providers.
Chunker will not automatically rename existing chunks when you run rclone config
on a live remote and change the chunk name format. Beware that as a result of this some files which have been treated as chunks before the change can pop up in directory listings as normal files and vice versa. The same warning holds for the chunk size. If you desperately need to change critical chunking settings, you should run data migration as described above.
-If wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can’t have a file called “Hello.doc” and “hello.doc” in the same directory).
+If wrapped remote is case insensitive, the chunker overlay will inherit that property (so you can't have a file called "Hello.doc" and "hello.doc" in the same directory).
Here are the standard options specific to chunker (Transparently chunk/split large files).
-Remote to chunk/unchunk. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
+Remote to chunk/unchunk. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
Files larger than chunk size will be split in chunks.
-Choose how chunker handles hash sums. All modes but “none” require metadata.
+Choose how chunker handles hash sums. All modes but "none" require metadata.
Here are the advanced options specific to chunker (Transparently chunk/split large files).
-String format of chunk file names. The two placeholders are: base file name (*) and chunk number (#…). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format.
+String format of chunk file names. The two placeholders are: base file name (*) and chunk number (#...). There must be one and only one asterisk and one or more consecutive hash characters. If chunk number has less digits than the number of hashes, it is left-padded by zeros. If there are more digits in the number, they are left as is. Possible chunk files are ignored if their name does not match given format.
Minimum valid chunk number. Usually 0 or 1. By default chunk numbers start from 1.
-Format of the metadata object or “none”. By default “simplejson”. Metadata is a small JSON file named after the composite file.
+Format of the metadata object or "none". By default "simplejson". Metadata is a small JSON file named after the composite file.
Choose how chunker should handle files with missing or invalid chunks.
For files above 128MB rclone will use a chunked transfer. Rclone will upload up to --transfers
chunks at the same time (shared among all the multipart uploads). Chunks are buffered in memory and are normally 64MB so increasing --transfers
will increase memory use.
-Note that ShareFile is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+Note that ShareFile is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
ShareFile only supports filenames up to 256 characters in length.
In addition to the default restricted characters set the following characters are also replaced:
@@ -9905,12 +9909,12 @@ y/e/d> y
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to sharefile (Citrix Sharefile).
-ID of the root folder
-Leave blank to access “Personal Folders”. You can use one of the standard values here or any folder ID (long hex number ID).
+Leave blank to access "Personal Folders". You can use one of the standard values here or any folder ID (long hex number ID).
Here are the advanced options specific to sharefile (Citrix Sharefile).
-Cutoff for switching to multipart upload.
Upload chunk size. Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk is buffered in memory one per transfer.
Reducing this will reduce memory usage but decrease performance.
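For example, to trade some speed for lower memory use you might run something like this (the size value and paths are illustrative; the size must remain a power of 2 >= 256k):
rclone copy --sharefile-chunk-size 4M /path/to/files remote:backup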
@@ -9960,7 +9964,7 @@ y/e/d> y
Endpoint for API calls.
This is usually auto discovered as part of the oauth process, but can be set manually to something like: https://XXX.sharefile.com
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
The crypt
remote encrypts and decrypts another remote.
To use it first set up the underlying remote following the config instructions for that remote. You can also use a local pathname instead of a remote, which will encrypt and decrypt from that directory; this might be useful for encrypting onto a USB stick, for example.
-First check your chosen remote is working - we’ll call it remote:path
in these docs. Note that anything inside remote:path
will be encrypted and anything outside won’t. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
. If you just use s3:
then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.
+First check your chosen remote is working - we'll call it remote:path
in these docs. Note that anything inside remote:path
will be encrypted and anything outside won't. This means that if you are using a bucket based remote (eg S3, B2, swift) then you should probably put the bucket in the remote s3:bucket
. If you just use s3:
then rclone will make encrypted bucket names too (if using file name encryption) which may or may not be what you want.
Now configure crypt
using rclone config
. We will call this one secret
to differentiate it from the remote
.
No remotes found - make a new one
n) New remote
@@ -10052,7 +10056,7 @@ y) Yes this is OK
e) Edit this remote
d) Delete this remote
y/e/d> y
-Important The password is stored in the config file is lightly obscured so it isn’t immediately obvious what it is. It is in no way secure unless you use config file encryption.
+Important The password stored in the config file is lightly obscured, so it isn't immediately obvious what it is. It is in no way secure unless you use config file encryption.
A long passphrase is recommended, or you can use a random one.
The obscured password is created by using AES-CTR with a static key, with the salt stored verbatim at the beginning of the obscured password. This static key is shared between all versions of rclone.
If you reconfigure rclone with the same passwords/passphrases elsewhere it will be compatible, but the obscured version will be different due to the different salt.
@@ -10064,9 +10068,9 @@ y/e/d> y
In normal use, make sure the remote has a :
in. If you specify the remote without a :
then rclone will use a local directory of that name. So if you use a remote of /path/to/secret/files
then rclone will encrypt stuff to that directory. If you use a remote of name
then rclone will put files in a directory called name
in the current directory.
If you specify the remote as remote:path/to/dir
then rclone will store encrypted files in path/to/dir
on the remote. If you are using file name encryption, then when you save files to secret:subdir/subfile
this will store them in the unencrypted path path/to/dir
but the subdir/subpath
bit will be encrypted.
-Note that unless you want encrypted bucket names (which are difficult to manage because you won’t know what directory they represent in web interfaces etc), you should probably specify a bucket, eg remote:secretbucket
when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.
+Note that unless you want encrypted bucket names (which are difficult to manage because you won't know what directory they represent in web interfaces etc), you should probably specify a bucket, eg remote:secretbucket
when using bucket based remotes such as S3, Swift, Hubic, B2, GCS.
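As an illustrative sketch, a crypt remote wrapping a dedicated bucket might look like this in the config file (the names "secret" and "s3:secretbucket" are examples, and the password must be in obscured form):
[secret]
type = crypt
remote = s3:secretbucket
password = <obscured password>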
-To test I made a little directory of files using “standard” file name encryption.
+To test I made a little directory of files using "standard" file name encryption.
plaintext/
├── file0.txt
├── file1.txt
@@ -10095,7 +10099,7 @@ $ rclone -q ls secret:
8 file2.txt
9 file3.txt
10 subsubdir/file4.txt
-If don’t use file name encryption then the remote will look like this - note the .bin
extensions added to prevent the cloud provider attempting to interpret the data.
+If you don't use file name encryption then the remote will look like this - note the .bin
extensions added to prevent the cloud provider attempting to interpret the data.
$ rclone -q ls remote:path
54 file0.txt.bin
57 subdir/file3.txt.bin
@@ -10106,22 +10110,22 @@ $ rclone -q ls secret:
Here are some of the features of the file name encryption modes
Off
-- doesn’t hide file names or directory structure
+- doesn't hide file names or directory structure
- allows for longer file names (~246 characters)
- can use sub paths and copy single files
Standard
- file names encrypted
-- file names can’t be as long (~143 characters)
+- file names can't be as long (~143 characters)
- can use sub paths and copy single files
- directory structure visible
- identical file names will have identical uploaded names
- can use shortcuts to shorten the directory recursion
Obfuscation
-This is a simple “rotate” of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called “hello” may become “53.jgnnq”.
-This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it’s an intermediate between “off” and “standard”. The advantage is that it allows for longer path segment names.
+This is a simple "rotate" of the filename, with each file having a rot distance based on the filename. We store the distance at the beginning of the filename. So a file called "hello" may become "53.jgnnq".
+This is not a strong encryption of filenames, but it may stop automated scanning tools from picking up on filename patterns. As such it's an intermediate between "off" and "standard". The advantage is that it allows for longer path segment names.
There is a possibility with some unicode based filenames that the obfuscation is weak and may map lower case characters to upper case equivalents. You cannot rely on this for strong protection.
- file names very lightly obfuscated
@@ -10130,7 +10134,7 @@ $ rclone -q ls secret:
- directory structure visible
- identical file names will have identical uploaded names
-Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using “Standard” file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
+Cloud storage systems have various limits on file name length and total path length which you are more likely to hit using "Standard" file name encryption. If you keep your file names to below 156 characters in length then you should be OK on all providers.
There may be an even more secure file name encryption mode in the future which will address the long file name problem.
Directory name encryption
Crypt offers the option of encrypting dir names or leaving them intact. There are two options:
@@ -10141,43 +10145,43 @@ $ rclone -q ls secret:
Modified time and hashes
Crypt stores modification times using the underlying remote so support depends on that.
Hashes are not stored for crypt. However the data integrity is protected by an extremely strong crypto authenticator.
-Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can’t check the checksums properly.
+Note that you should use the rclone cryptcheck
command to check the integrity of a crypted remote instead of rclone check
which can't check the checksums properly.
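For example, to verify the crypted remote against the plaintext it was copied from (the local path is illustrative):
rclone cryptcheck /path/to/plaintext secret: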
Standard Options
Here are the standard options specific to crypt (Encrypt/Decrypt a remote).
-–crypt-remote
-Remote to encrypt/decrypt. Normally should contain a ‘:’ and a path, eg “myremote:path/to/dir”, “myremote:bucket” or maybe “myremote:” (not recommended).
+--crypt-remote
+Remote to encrypt/decrypt. Normally should contain a ':' and a path, eg "myremote:path/to/dir", "myremote:bucket" or maybe "myremote:" (not recommended).
- Config: remote
- Env Var: RCLONE_CRYPT_REMOTE
- Type: string
- Default: ""
-–crypt-filename-encryption
+--crypt-filename-encryption
How to encrypt the filenames.
- Config: filename_encryption
- Env Var: RCLONE_CRYPT_FILENAME_ENCRYPTION
- Type: string
-- Default: “standard”
+- Default: "standard"
- Examples:
-- “standard”
+- "standard"
- Encrypt the filenames; see the docs for the details.
-- “obfuscate”
+- "obfuscate"
- Very simple filename obfuscation.
-- “off”
+- "off"
-- Don’t encrypt the file names. Adds a “.bin” extension only.
+- Don't encrypt the file names. Adds a ".bin" extension only.
-–crypt-directory-name-encryption
+--crypt-directory-name-encryption
Option to either encrypt directory names or leave them intact.
-NB If filename_encryption is “off” then this option will do nothing.
+NB If filename_encryption is "off" then this option will do nothing.
- Config: directory_name_encryption
- Env Var: RCLONE_CRYPT_DIRECTORY_NAME_ENCRYPTION
@@ -10185,17 +10189,17 @@ $ rclone -q ls secret:
- Default: true
- Examples:
-- “true”
+- "true"
- Encrypt directory names.
-- “false”
+- "false"
-- Don’t encrypt directory names, leave them intact.
+- Don't encrypt directory names, leave them intact.
-–crypt-password
+--crypt-password
Password or pass phrase for encryption.
NB Input to this must be obscured - see rclone obscure.
@@ -10204,7 +10208,7 @@ $ rclone -q ls secret:
- Type: string
- Default: ""
-–crypt-password2
+--crypt-password2
Password or pass phrase for salt. Optional but recommended. Should be different to the previous password.
NB Input to this must be obscured - see rclone obscure.
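For example, the obscured form expected by these password fields can be generated with rclone obscure (the passphrase is illustrative):
rclone obscure "correct horse battery staple"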
@@ -10215,7 +10219,7 @@ $ rclone -q ls secret:
Advanced Options
Here are the advanced options specific to crypt (Encrypt/Decrypt a remote).
-–crypt-show-mapping
+--crypt-show-mapping
For all files listed show how the names encrypt.
If this flag is set then for each file that the remote is asked to list, it will log (at level INFO) a line stating the decrypted file name and the encrypted file name.
This is so you can work out which encrypted names are which decrypted names just in case you need to do something with the encrypted file names, or for debugging purposes.
@@ -10230,7 +10234,7 @@ $ rclone -q ls secret:
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the “rclone backend” command for more info on how to pass options and arguments.
+See the "rclone backend" command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
encode
Encode the given filename(s)
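For example (the file names are illustrative):
rclone backend encode crypt: file1.txt file2.txt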
@@ -10252,9 +10256,9 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
rclone sync
will check the checksums while copying
- you can use
rclone check
between the encrypted remotes
-- you don’t decrypt and encrypt unnecessarily
+- you don't decrypt and encrypt unnecessarily
-For example, let’s say you have your original remote at remote:
with the encrypted version at eremote:
with path remote:crypt
. You would then set up the new remote remote2:
and then the encrypted version eremote2:
with path remote2:crypt
using the same passwords as eremote:
.
+For example, let's say you have your original remote at remote:
with the encrypted version at eremote:
with path remote:crypt
. You would then set up the new remote remote2:
and then the encrypted version eremote2:
with path remote2:crypt
using the same passwords as eremote:
.
To sync the two remotes you would do
rclone sync remote:crypt remote2:crypt
And to check the integrity you would do
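rclone check remote:crypt remote2:crypt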
@@ -10275,7 +10279,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
16 Bytes of Poly1305 authenticator
1 - 65536 bytes XSalsa20 encrypted data
-64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can’t be too big.
+64k chunk size was chosen as the best performing chunk size (the authenticator takes too much time below this and the performance drops off due to cache effects above this). Note that these chunks are buffered in memory so they can't be too big.
This uses a 32 byte (256 bit) key derived from the user password.
Examples
1 byte file will encrypt to
@@ -10293,12 +10297,12 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
Name encryption
File names are encrypted segment by segment - the path is broken up into /
separated strings and these are encrypted individually.
File segments are padded using PKCS#7 to a multiple of 16 bytes before encryption.
-They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper “A Parallelizable Enciphering Mode” by Halevi and Rogaway.
-This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can’t find it on the cloud storage system.
+They are then encrypted with EME using AES with 256 bit key. EME (ECB-Mix-ECB) is a wide-block encryption mode presented in the 2003 paper "A Parallelizable Enciphering Mode" by Halevi and Rogaway.
+This makes for deterministic encryption which is what we want - the same filename must encrypt to the same thing otherwise we can't find it on the cloud storage system.
This means that
- filenames with the same name will encrypt the same
-- filenames which start the same won’t have a common prefix
+- filenames which start the same won't have a common prefix
This uses a 32 byte key (256 bits) and a 16 byte (128 bits) IV, both of which are derived from the user password.
After encryption they are written out using a modified version of standard base32
encoding as described in RFC4648. The standard encoding is modified in two ways:
@@ -10308,7 +10312,7 @@ rclone rc backend/command command=decode fs=crypt: encryptedfile1 [encryptedfile
base32
is used rather than the more efficient base64
so rclone can be used on case insensitive remotes (eg Windows, Amazon Drive).
Key derivation
-Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn’t supply a salt then rclone uses an internal one.
+Rclone uses scrypt
with parameters N=16384, r=8, p=1
with an optional user supplied salt (password2) to derive the 32+32+16 = 80 bytes of key material required. If the user doesn't supply a salt then rclone uses an internal one.
scrypt
makes it impractical to mount a dictionary attack on rclone encrypted data. For full protection against this you should always use a salt.
Dropbox
Paths are specified as remote:path
@@ -10362,7 +10366,7 @@ y/e/d> y
A leading /
for a Dropbox personal account will do nothing, but it will take an extra HTTP transaction so it should be avoided.
Dropbox supports modified times, but the only way to set a modification time is to re-upload the file.
-This means that if you uploaded your data with an older version of rclone which didn’t support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don’t want this to happen use --size-only
or --checksum
flag to stop it.
+This means that if you uploaded your data with an older version of rclone which didn't support the v2 API and modified times, rclone will decide to upload all your old data to fix the modification times. If you don't want this to happen use --size-only
or --checksum
flag to stop it.
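For example (the paths are illustrative):
rclone sync --size-only /path/to/data dropbox:backup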
Dropbox supports its own hash type which is checked for all transfers.
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to dropbox (Dropbox).
-Dropbox App Client Id Leave blank normally.
Dropbox App Client Secret Leave blank normally.
Here are the advanced options specific to dropbox (Dropbox).
-Upload chunk size. (< 150M).
Any files larger than this will be uploaded in chunks of this size.
Note that chunks are buffered in memory (one at a time) so rclone can deal with retries. Setting this larger will increase the speed slightly (at most 10% for 128MB in tests) at the cost of using more memory. It can be set smaller if you are tight on memory.
@@ -10444,7 +10448,7 @@ y/e/d> y
Impersonate this user when using a business account.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
-Note that Dropbox is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
-There are some file names such as thumbs.db
which Dropbox can’t store. There is a full list of them in the “Ignored Files” section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won’t fail.
+Note that Dropbox is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+There are some file names such as thumbs.db
which Dropbox can't store. There is a full list of them in the "Ignored Files" section of this document. Rclone will issue an error message File name disallowed - not uploading
if it attempts to upload one of those file names, but the sync won't fail.
If you have more than 10,000 files in a directory then rclone purge dropbox:dir
will return the error Failed to purge: There are too many files involved in this operation
. As a work-around do an rclone delete dropbox:dir
followed by an rclone rmdir dropbox:dir
.
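That is (the directory name is illustrative):
rclone delete dropbox:dir
rclone rmdir dropbox:dir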
-When you use rclone with Dropbox in its default configuration you are using rclone’s App ID. This is shared between all the rclone users.
+When you use rclone with Dropbox in its default configuration you are using rclone's App ID. This is shared between all the rclone users.
Here is how to create your own Dropbox App ID for rclone:
Log into the Dropbox App console with your Dropbox Account (It need not be the same account as the Dropbox you want to access)
Choose an API => Usually this should be Dropbox API
Choose the type of access you want to use => Full Dropbox
or App Folder
-Name your App. The app name is global, so you can’t use rclone
for example
+Name your App. The app name is global, so you can't use rclone
for example
Click the button Create App
Fill Redirect URIs
as http://localhost:53682/
Find the App key
and App secret
Use these values in rclone config to add a new remote or edit an existing remote.
FTP is the File Transfer Protocol. FTP support is provided using the github.com/jlaffaye/ftp package.
+Paths are specified as remote:path
. If the path does not begin with a /
it is relative to the home directory of the user. An empty path remote:
refers to the user's home directory.
Here is an example of making an FTP configuration. First run
rclone config
This will guide you through an interactive setup process. An FTP remote only needs a host together with a username and a password. With an anonymous FTP server, you will need to use anonymous
as username and your email address as the password.
FTP supports implicit FTP over TLS servers (FTPS). This has to be enabled in the config for the remote. The default FTPS port is 990
so the port will likely have to be explicitly set in the config for the remote.
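As a sketch, a remote for an implicit FTPS server could be created non-interactively with something like the following (the host, user and password are illustrative; note the password must be obscured):
rclone config create myftps ftp host ftp.example.com user alice pass $(rclone obscure 'secret') port 990 tls true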
Here are the standard options specific to ftp (FTP Connection).
-FTP host to connect to
FTP username, leave blank for current username, $USER
FTP port, leave blank to use default (21)
FTP password
NB Input to this must be obscured - see rclone obscure.
Use FTP over TLS (Implicit)
Here are the advanced options specific to ftp (FTP Connection).
-Maximum number of FTP simultaneous connections, 0 for unlimited
Do not verify the TLS certificate of the server
Disable using EPSV even if server advertises support
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
-Note that since FTP isn’t HTTP based the following flags don’t work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn’t supported (but --contimeout
is).
Note that --bind
isn’t supported.
FTP could support server side move but doesn’t yet.
+Note that since FTP isn't HTTP based the following flags don't work with it: --dump-headers
, --dump-bodies
, --dump-auth
Note that --timeout
isn't supported (but --contimeout
is).
Note that --bind
isn't supported.
FTP could support server side move but doesn't yet.
Note that the ftp backend does not support the ftp_proxy
environment variable yet.
Note that while implicit FTP over TLS is supported, explicit FTP over TLS is not.
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync /home/local/directory remote:bucket
-You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don’t have actively logged-in users, for example build machines.
-To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User
permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account’s credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
+You can set up rclone with Google Cloud Storage in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
+To get credentials for Google Cloud Platform IAM Service Accounts, please head to the Service Account section of the Google Developer Console. Service Accounts behave just like normal User
permissions in Google Cloud Storage ACLs, so you can limit their access (e.g. make them read only). After creating an account, a JSON file containing the Service Account's credentials will be downloaded onto your machines. These credentials are what rclone will use for authentication.
To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
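For example, the credentials file can also be supplied on the command line (the path, bucket and remote name are illustrative):
rclone ls --gcs-service-account-file /path/to/credentials.json gcs:mybucket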
If no other source of credentials is provided, rclone will fall back to Application Default Credentials; this is useful both when you have already configured authentication for your developer account, or in production when running on a google compute host. Note that if running in docker, you may need to run additional commands on your google compute machine - see this page.
Note that in the case application default credentials are used, there is no need to explicitly configure a project number.
-This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
You can set custom upload headers with the --header-upload
flag. Google Cloud Storage supports the headers as described in the working with metadata documentation
Eg --header-upload "Content-Type: text/potato"
Note that the last of these is for setting custom metadata in the form --header-upload "x-goog-meta-key: value"
-Google google cloud storage stores md5sums natively and rclone stores modification times as metadata on the object, under the “mtime” key in RFC3339 format accurate to 1ns.
+Google Cloud Storage stores md5sums natively and rclone stores modification times as metadata on the object, under the "mtime" key in RFC3339 format accurate to 1ns.
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
-Google Application Client Id Leave blank normally.
Google Application Client Secret Leave blank normally.
Project number. Optional - needed only for list/create/delete buckets - see your developer console.
Service Account Credentials JSON file path Leave blank normally. Needed only if you want to use SA instead of interactive login.
Service Account Credentials JSON blob Leave blank normally. Needed only if you want to use SA instead of interactive login.
Access Control List for new objects.
Access Control List for new buckets.
Access checks should use bucket-level IAM policies.
If you want to upload objects to a bucket with Bucket Policy Only set then you will need to set this.
When it is set, rclone:
@@ -10997,7 +11002,7 @@ y/e/d> y
Location for the newly created buckets.
The storage class to use when storing objects in Google Cloud Storage.
Here are the advanced options specific to google cloud storage (Google Cloud Storage (this is not Google Drive)).
-This sets the encoding for the backend.
See: the encoding section in the overview for more info.
The scopes are
This is the default scope and allows full access to all files, except for the Application Data Folder (see below).
-Choose this one if you aren’t sure.
+Choose this one if you aren't sure.
This allows read only access to all files. Files may be listed and downloaded but not uploaded, renamed or deleted.
This can be useful if you are using rclone to backup data and you want to be sure confidential data on your drive is not visible to rclone.
Files created with this scope are visible in the web interface.
-This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won’t be able to see rclone’s files from the web interface either.
+This gives rclone its own private area to store files. Rclone will not be able to see any other files on your drive and you won't be able to see rclone's files from the web interface either.
This allows read only access to file names only. It does not allow rclone to download or upload data, or rename or delete files or directories.
You can set the root_folder_id
for rclone. This is the directory (identified by its Folder ID
) that rclone considers to be the root of your drive.
Normally you will leave this blank and rclone will determine the correct root to use itself.
-However you can set this to restrict rclone to a specific folder hierarchy or to access data within the “Computers” tab on the drive web interface (where files from Google’s Backup and Sync desktop program go).
+However you can set this to restrict rclone to a specific folder hierarchy or to access data within the "Computers" tab on the drive web interface (where files from Google's Backup and Sync desktop program go).
In order to do this you will have to find the Folder ID
of the directory you wish rclone to display. This will be the last segment of the URL when you open the relevant folder in the drive web interface.
So if the folder you want rclone to use has a URL which looks like https://drive.google.com/drive/folders/1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
in the browser, then you use 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh
as the root_folder_id
in the config.
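As an illustrative config snippet using the folder ID from the example URL above (the remote name "gdrive" is an example):
[gdrive]
type = drive
root_folder_id = 1XyfxxxxxxxxxxxxxxxxxxxxxxxxxKHCh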
-NB folders under the “Computers” tab seem to be read only (drive gives a 500 error) when using rclone.
-There doesn’t appear to be an API to discover the folder IDs of the “Computers” tab - please contact us if you know otherwise!
-Note also that rclone can’t access any data under the “Backups” tab on the google drive web interface yet.
+NB folders under the "Computers" tab seem to be read only (drive gives a 500 error) when using rclone.
+There doesn't appear to be an API to discover the folder IDs of the "Computers" tab - please contact us if you know otherwise!
+Note also that rclone can't access any data under the "Backups" tab on the google drive web interface yet.
-You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don’t have actively logged-in users, for example build machines.
-To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt during rclone config
and rclone won’t use the browser based authentication flow. If you’d rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
You can set up rclone with Google Drive in an unattended mode, i.e. not tied to a specific end-user Google account. This is useful when you want to synchronise files onto machines that don't have actively logged-in users, for example build machines.
+To use a Service Account instead of OAuth2 token flow, enter the path to your Service Account credentials at the service_account_file
prompt during rclone config
and rclone won't use the browser based authentication flow. If you'd rather stuff the contents of the credentials file into the rclone config file, you can set service_account_credentials
with the actual contents of the file instead, or set the equivalent environment variable.
-Let’s say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual’s Drive account, who IS a member of the domain. We’ll call the domain example.com, and the user foo@example.com.
-There’s a few steps we need to go through to accomplish this:
+Let's say that you are the administrator of a Google Apps (old) or G-suite account. The goal is to store data on an individual's Drive account, who IS a member of the domain. We'll call the domain example.com, and the user foo@example.com.
+There are a few steps we need to go through to accomplish this:
-https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
+https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
rclone config
@@ -11285,7 +11290,7 @@ root_folder_id> # Can be left blank
service_account_file> /home/foo/myJSONfile.json # This is where the JSON file goes!
y/n> # Auto config, y
-rclone -v --drive-impersonate foo@example.com lsf gdrive:backup
This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
It does this by combining multiple list
calls into a single API request.
This works by combining many '%s' in parents
filters into one expression. To list the contents of directories a, b and c, the following requests will be sent by the regular List
function:
Google drive stores modification times accurate to 1 ms.
-Only Invalid UTF-8 bytes will be replaced, as they can’t be used in JSON strings.
+Only invalid UTF-8 bytes will be replaced, as they can't be used in JSON strings.
In contrast to other backends, /
can also be used in names and .
or ..
are valid names.
Google drive stores revisions of files. When you upload a change to an existing file to google drive using rclone it will create a new revision of that file.
@@ -11362,7 +11367,7 @@ trashed=false and 'c' in parents
By default rclone will send all files to the trash when deleting files. If deleting them permanently is required then use the --drive-use-trash=false
flag, or set the equivalent environment variable.
In March 2020 Google introduced a new feature in Google Drive called drive shortcuts (API). These will (by September 2020) replace the ability for files or folders to be in multiple folders at once.
-Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (eg the inode in unix terms) so they don’t break if the source is renamed or moved about.
+Shortcuts are files that link to other files on Google Drive somewhat like a symlink in unix, except they point to the underlying file data (eg the inode in unix terms) so they don't break if the source is renamed or moved about.
By default rclone treats these as follows.
For shortcuts pointing to files:
Google documents can be exported from and uploaded to Google Drive.
When rclone downloads a Google doc it chooses a format to download depending upon the --drive-export-formats
setting. By default the export formats are docx,xlsx,pptx,svg
which are a sensible default for an editable document.
-When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can’t be exported to a format on the formats list, then rclone will choose a format from the default list.
+When choosing a format, rclone runs down the list provided in order and chooses the first file format the doc can be exported as from the list. If the file can't be exported to a format on the formats list, then rclone will choose a format from the default list.
If you prefer an archive copy then you might use --drive-export-formats pdf
, or if you prefer openoffice/libreoffice formats you might use --drive-export-formats ods,odt,odp
.
Note that rclone adds the extension to the google doc, so if it is called My Spreadsheet
on google docs, it will be exported as My Spreadsheet.xlsx
or My Spreadsheet.pdf
etc.
When importing files into Google Drive, rclone will convert all files with an extension in --drive-import-formats
to their associated document type. rclone will not convert any files by default, since the conversion is a lossy process.
Here are the standard options specific to drive (Google Drive).
-Google Application Client Id Setting your own is recommended. See https://rclone.org/drive/#making-your-own-client-id for how to create your own. If you leave this blank, it will use an internal key which is low performance.
Google Application Client Secret Setting your own is recommended.
Scope that rclone should use when requesting access from drive.
ID of the root folder Leave blank normally.
-Fill in to access “Computers” folders (see docs), or for rclone to use a non root folder as its starting point.
-Note that if this is blank, the first time rclone runs it will fill it in with the ID of the root folder.
+Fill in to access "Computers" folders (see docs), or for rclone to use a non root folder as its starting point.
Service Account Credentials JSON file path Leave blank normally. Needed only if you want to use SA instead of interactive login.
Here are the advanced options specific to drive (Google Drive).
-Service Account Credentials JSON blob Leave blank normally. Needed only if you want use SA instead of interactive login.
ID of the Team Drive
Only consider files owned by the authenticated user.
Send files to the trash instead of deleting permanently. Defaults to true, namely sending files to the trash. Use --drive-use-trash=false
to delete files permanently instead.
Skip google documents in all listings. If given, gdocs practically become invisible to rclone.
Skip MD5 checksum on Google photos and videos only.
Use this if you get checksum errors when transferring Google photos or videos.
Setting this flag will cause Google photos and videos to return a blank MD5 checksum.
-Google photos are identified by being in the “photos” space.
+Google photos are identified by being in the "photos" space.
Corrupted checksums are caused by Google modifying the image/video but not updating the checksum.
Only show files that are shared with me.
-Instructs rclone to operate on your “Shared with me” folder (where Google Drive lets you access the files and folders others have shared with you).
-This works both with the “list” (lsd, lsl, etc) and the “copy” commands (copy, sync, etc), and with all other commands too.
+Instructs rclone to operate on your "Shared with me" folder (where Google Drive lets you access the files and folders others have shared with you).
+This works both with the "list" (lsd, lsl, etc) and the "copy" commands (copy, sync, etc), and with all other commands too.
Only show files that are in the trash. This will show trashed files in their original directory structure.
Deprecated: see export_formats
Comma separated list of preferred formats for downloading Google docs.
Comma separated list of preferred formats for uploading Google docs.
-Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
+Allow the filetype to change when uploading Google docs (e.g. file.doc to file.docx). This will confuse sync and reupload every time.
Use file created date instead of modified date.
Useful when downloading data and you want the creation date used in place of the last modified date.
WARNING: This flag may have some unexpected consequences.
-When uploading to your drive all files will be overwritten unless they haven’t been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the “–checksum” flag.
-This feature was implemented to retain photos capture date as recorded by google photos. You will first need to check the “Create a Google Photos folder” option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) set as the modification date.
+When uploading to your drive all files will be overwritten unless they haven't been modified since their creation. And the inverse will occur while downloading. This side effect can be avoided by using the "--checksum" flag.
+This feature was implemented to retain photos capture date as recorded by google photos. You will first need to check the "Create a Google Photos folder" option in your google drive settings. You can then copy or move the photos locally and use the date the image was taken (created) set as the modification date.
Use date file was shared instead of modified date.
-Note that, as with “–drive-use-created-date”, this flag may have unexpected consequences when uploading/downloading files.
-If both this flag and “–drive-use-created-date” are set, the created date is used.
+Note that, as with "--drive-use-created-date", this flag may have unexpected consequences when uploading/downloading files.
+If both this flag and "--drive-use-created-date" are set, the created date is used.
Size of listing chunk 100-1000. 0 to disable.
Impersonate this user when using a service account.
-Note that if this is used then “root_folder_id” will be ignored.
Use alternate export URLs for google documents export.
-If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can’t export large documents, whereas these unofficial ones can.
+If this option is set this instructs rclone to use an alternate set of export URLs for drive documents. Users have reported that the official export URLs can't export large documents, whereas these unofficial ones can.
See rclone issue #2243 for background, this google drive issue and this helpful post.
Cutoff for switching to chunked upload
Upload chunk size. Must be a power of 2 >= 256k.
Making this larger will improve performance, but note that each chunk is buffered in memory (one per transfer).
Reducing this will reduce memory usage but decrease performance.
@@ -11850,16 +11853,16 @@ trashed=false and 'c' in parents
Set to allow files which return cannotDownloadAbusiveFile to be downloaded.
-If downloading a file returns the error “This file has been identified as malware or spam and cannot be downloaded” with the error code “cannotDownloadAbusiveFile” then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
+If downloading a file returns the error "This file has been identified as malware or spam and cannot be downloaded" with the error code "cannotDownloadAbusiveFile" then supply this flag to rclone to indicate you acknowledge the risks of downloading the file and rclone will download it anyway.
Keep new head revision of each file forever.
Show sizes as storage quota usage, not actual size.
Show the size of a file as the storage quota used. This is the current version plus any older versions that have been set to keep forever.
WARNING: This flag may have some unexpected consequences.
-It is not recommended to set this flag in your config - the recommended usage is using the flag form –drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.
-If you do use this flag for syncing (not recommended) then you will need to use –ignore size also.
+It is not recommended to set this flag in your config - the recommended usage is using the flag form --drive-size-as-quota when doing rclone ls/lsl/lsf/lsjson/etc only.
+If you do use this flag for syncing (not recommended) then you will need to use --ignore-size also.
-If Object’s are greater, use drive v2 API to download.
+If Objects are greater, use drive v2 API to download.
Minimum time to sleep between API calls.
Number of API calls to allow without sleeping.
Allow server side operations (eg copy) to work across different drive configs.
-This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn’t enabled by default because it isn’t easy to tell if it will work between any two configurations.
+This can be useful if you wish to do a server side copy between two different Google drives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.
Disable drive using http2
There is currently an unsolved issue with the google drive backend and HTTP/2. HTTP/2 is therefore disabled by default for the drive backend but can be re-enabled here. When the issue is solved this flag will be removed.
See: https://github.com/rclone/rclone/issues/3631
@@ -11922,10 +11925,10 @@ trashed=false and 'c' in parents
Make upload limit errors be fatal
At the time of writing it is only possible to upload 750GB of data to Google Drive a day (this is an undocumented limit). When this limit is reached Google Drive produces a slightly different error message. When this flag is set it causes these errors to be fatal. These will stop the in-progress sync.
-Note that this detection is relying on error message strings which Google don’t document so it may break in the future.
+Note that this detection is relying on error message strings which Google don't document so it may break in the future.
See: https://github.com/rclone/rclone/issues/3857
If set skip shortcut files
Normally rclone dereferences shortcut files making them appear as if they are the original file (see the shortcuts section). If this flag is set then rclone will ignore shortcut files completely.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the “rclone backend” command for more info on how to pass options and arguments.
+See the "rclone backend" command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
Get command for fetching the drive config parameters
@@ -11967,8 +11970,8 @@ trashed=false and 'c' in parents
rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o chunk_size]
Options:
Set command for updating the drive config parameters
@@ -11979,8 +11982,8 @@ rclone rc backend/command command=get fs=drive: [-o service_account_file] [-o ch
rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json] [-o chunk_size=67108864]
Options:
Create shortcuts from files or directories
@@ -11989,48 +11992,49 @@ rclone rc backend/command command=set fs=drive: [-o service_account_file=sa.json
Usage:
rclone backend shortcut drive: source_item destination_shortcut
rclone backend shortcut drive: source_item -o target=drive2: destination_shortcut
-In the first example this creates a shortcut from the “source_item” which can be a file or a directory to the “destination_shortcut”. The “source_item” and the “destination_shortcut” should be relative paths from “drive:”
-In the second example this creates a shortcut from the “source_item” relative to “drive:” to the “destination_shortcut” relative to “drive2:”. This may fail with a permission error if the user authenticated with “drive2:” can’t read files from “drive:”.
+In the first example this creates a shortcut from the "source_item" which can be a file or a directory to the "destination_shortcut". The "source_item" and the "destination_shortcut" should be relative paths from "drive:"
+In the second example this creates a shortcut from the "source_item" relative to "drive:" to the "destination_shortcut" relative to "drive2:". This may fail with a permission error if the user authenticated with "drive2:" can't read files from "drive:".
Options:
Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MBytes/s but lots of small files can take a long time.
Server side copies are also subject to a separate rate limit. If you see User rate limit exceeded errors, wait at least 24 hours and retry. You can disable server side copies with --disable copy
to download and upload the files if you prefer.
Google docs will appear as size -1 in rclone ls
and as size 0 in anything which uses the VFS layer, eg rclone mount
, rclone serve
.
-This is because rclone can’t find out the size of the Google docs without downloading them.
+This is because rclone can't find out the size of the Google docs without downloading them.
Google docs will transfer correctly with rclone sync
, rclone copy
etc as rclone knows to ignore the size when doing the transfer.
-However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount
. If it doesn’t work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work on not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!
+However an unfortunate consequence of this is that you may not be able to download Google docs using rclone mount
. If it doesn't work you will get a 0 sized file. If you try again the doc may gain its correct size and be downloadable. Whether it will work or not depends on the application accessing the mount and the OS you are running - experiment to find out if it does work for you!
-Sometimes, for no reason I’ve been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files.
+Sometimes, for no reason I've been able to track down, drive will duplicate a file that rclone uploads. Drive unlike all the other remotes can have duplicated files.
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
-Note that this isn’t just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.
+Note that this isn't just a problem with rclone, even Google Photos on Android duplicates files on drive sometimes.
+The most likely cause of this is the duplicated file issue above - run rclone dedupe
and check your logs for duplicate object or directory messages.
-This can also be caused by a delay/caching on google drive’s end when comparing directory listings. Specifically with team drives used in combination with –fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using –fast-list.
-Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using –fast-list both seem to be effective in preventing the problem.
+This can also be caused by a delay/caching on google drive's end when comparing directory listings. Specifically with team drives used in combination with --fast-list. Files that were uploaded recently may not appear on the directory list sent to rclone when using --fast-list.
+Waiting a moderate period of time between attempts (estimated to be approximately 1 hour) and/or not using --fast-list both seem to be effective in preventing the problem.
-When you use rclone with Google drive in its default configuration you are using rclone’s client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
+When you use rclone with Google drive in its default configuration you are using rclone's client_id. This is shared between all the rclone users. There is a global rate limit on the number of queries per second that each client_id can do set by Google. rclone already has a high quota and I will continue to make sure it is high enough by contacting Google.
It is strongly recommended to use your own client ID as the default rclone ID is heavily used. If you have multiple services running, it is recommended to use an API key for each service. The default Google quota is 10 transactions per second, so it is recommended to stay under that number; if you use more than that, it will cause rclone to rate limit and make things slower.
Here is how to create your own Google Drive client ID for rclone:
-Log into the Google API Console with your Google account. It doesn’t matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
+Log into the Google API Console with your Google account. It doesn't matter what Google account you use. (It need not be the same account as the Google Drive you want to access)
Select a project or create a new project.
-Under “ENABLE APIS AND SERVICES” search for “Drive”, and enable the “Google Drive API”.
-Click “Credentials” in the left-side panel (not “Create credentials”, which opens the wizard), then “Create credentials”
-If you already configured an “Oauth Consent Screen”, then skip to the next step; if not, click on “CONFIGURE CONSENT SCREEN” button (near the top right corner of the right panel), then select “External” and click on “CREATE”; on the next screen, enter an “Application name” (“rclone” is OK) then click on “Save” (all other data is optional). Click again on “Credentials” on the left panel to go back to the “Credentials” screen.
+Under "ENABLE APIS AND SERVICES" search for "Drive", and enable the "Google Drive API".
+Click "Credentials" in the left-side panel (not "Create credentials", which opens the wizard), then "Create credentials"
+If you already configured an "Oauth Consent Screen", then skip to the next step; if not, click on "CONFIGURE CONSENT SCREEN" button (near the top right corner of the right panel), then select "External" and click on "CREATE"; on the next screen, enter an "Application name" ("rclone" is OK) then click on "Save" (all other data is optional). Click again on "Credentials" on the left panel to go back to the "Credentials" screen.
(PS: if you are a GSuite user, you could also select “Internal” instead of “External” above, but this has not been tested/documented so far).
+(PS: if you are a GSuite user, you could also select "Internal" instead of "External" above, but this has not been tested/documented so far).
-Click on the “+ CREATE CREDENTIALS” button at the top of the screen, then select “OAuth client ID”.
-Choose an application type of “Desktop app” if you using a Google account or “Other” if you using a GSuite account and click “Create”. (the default name is fine)
+Click on the "+ CREATE CREDENTIALS" button at the top of the screen, then select "OAuth client ID".
+Choose an application type of "Desktop app" if you are using a Google account or "Other" if you are using a GSuite account and click "Create". (the default name is fine)
It will show you a client ID and client secret. Use these values in rclone config to add a new remote or edit an existing remote.
-Be aware that, due to the “enhanced security” recently introduced by Google, you are theoretically expected to “submit your app for verification” and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it’s not such a big deal).
+Be aware that, due to the "enhanced security" recently introduced by Google, you are theoretically expected to "submit your app for verification" and then wait a few weeks(!) for their response; in practice, you can go right ahead and use the client ID and client secret with rclone, the only issue will be a very scary confirmation screen shown when you connect via your browser for rclone to be able to get its token-id (but as this only happens during the remote configuration, it's not such a big deal).
(Thanks to @balazer on github for these instructions.)
+Sometimes, creation of an OAuth consent in Google API Console fails due to an error message "The request failed because changes to one of the field of the resource is not supported". As a convenient workaround, the necessary Google Drive API key can be created on the Python Quickstart page. Just push the Enable the Drive API button to receive the Client ID and Secret. Note that it will automatically create a new project in the API Console.
The rclone backend for Google Photos is a specialized backend for transferring photos and videos to and from Google Photos.
NB The Google Photos API which rclone uses has quite a few limitations, so please read the limitations section carefully to make sure it is suitable for your use.
@@ -12111,7 +12115,7 @@ y/e/d> y
As Google Photos is not a general purpose cloud storage system the backend is laid out to help you navigate it.
The directories under media
show different ways of categorizing the media. Each file will appear multiple times. So if you want to make a backup of your google photos you might choose to backup remote:media/by-month
. (NB remote:media/by-day
is rather slow at the moment so avoid for syncing.)
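For example, such a backup might look like this (the local path is illustrative):
rclone sync remote:media/by-month /path/to/local/backup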
-Note that all your photos and videos will appear somewhere under media
, but they may not appear under album
unless you’ve put them into albums.
+Note that all your photos and videos will appear somewhere under media
, but they may not appear under album
unless you've put them into albums.
/
- upload
- file1.jpg
@@ -12159,7 +12163,7 @@ y/e/d> y
- file1.jpg
- file2.jpg
There are two writable parts of the tree, the upload
directory and sub directories of the album
directory.
-The upload
directory is for uploading files you don’t want to put into albums. This will be empty to start with and will contain the files you’ve uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album
will work better.
+The upload
directory is for uploading files you don't want to put into albums. This will be empty to start with and will contain the files you've uploaded for one rclone session only, becoming empty again when you restart rclone. The use case for this would be if you have a load of files you just want to once off dump into Google Photos. For repeated syncing, uploading to album
will work better.
Directories within the album
directory are also writeable and you may create new directories (albums) under album
. If you copy files with a directory hierarchy in there then rclone will create albums with the /
character in them. For example if you do
rclone copy /path/to/images remote:album/images
and the images directory contains
@@ -12188,23 +12192,23 @@ y/e/d> y
This means that you can use the album
path pretty much like a normal filesystem and it is a good target for repeated syncing.
The shared-album
directory shows albums shared with you or by you. This is similar to the Sharing tab in the Google Photos web interface.
Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn’t understand, rclone will upload the file, then Google Photos will give an error when it is put turned into a media item.
-Note that all media items uploaded to Google Photos through the API are stored in full resolution at “original quality” and will count towards your storage quota in your Google Account. The API does not offer a way to upload in “high quality” mode..
+Only images and videos can be uploaded. If you attempt to upload non videos or images or formats that Google Photos doesn't understand, rclone will upload the file, then Google Photos will give an error when it is turned into a media item.
+Note that all media items uploaded to Google Photos through the API are stored in full resolution at "original quality" and will count towards your storage quota in your Google Account. The API does not offer a way to upload in "high quality" mode.
When images are downloaded, this strips EXIF location (according to the docs and my tests). This is a limitation of the Google Photos API and is covered by bug #112096115.
-The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on “Google Photos” as a backup of your photos. You will not be able to use rclone to redownload original images. You could use ‘google takeout’ to recover the original photos as a last resort
+The current Google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort.
When videos are downloaded they are downloaded in a heavily compressed version compared to downloading them via the Google Photos web interface. This is covered by bug #113672044.
If a file name is duplicated in a directory then rclone will add the file ID into its name. So two files called file.jpg
would then appear as file {123456}.jpg
and file {ABCDEF}.jpg
(the actual IDs are a lot longer alas!).
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload
then uploaded the same image to album/my_album
the filename of the image in album/my_album
will be what it was uploaded with initially, not what you uploaded it with to album
. In practise this shouldn’t cause too many problems.
If you upload the same image (with the same binary data) twice then Google Photos will deduplicate it. However it will retain the filename from the first upload which may confuse rclone. For example if you uploaded an image to upload
then uploaded the same image to album/my_album
the filename of the image in album/my_album
will be what it was uploaded with initially, not what you uploaded it with to album
. In practice this shouldn't cause too many problems.
The date shown of media in Google Photos is the creation date as determined by the EXIF information, or the upload date if that is not known.
This is not changeable by rclone and is not the modification date of the media on local disk. This means that rclone cannot use the dates from Google Photos for syncing purposes.
The Google Photos API does not return the size of media. This means that when syncing to Google Photos, rclone can only do a file existence check.
It is possible to read the size of the media, but this needs an extra HTTP HEAD request per media item so is very slow and uses up a lot of transactions. This can be enabled with the --gphotos-read-size
option or the read_size = true
config parameter.
If you want to use the backend with rclone mount
you may need to enable this flag (depending on your OS and application using the photos) otherwise you may not be able to read media off the mount. You’ll need to experiment to see if it works for you without the flag.
If you want to use the backend with rclone mount
you may need to enable this flag (depending on your OS and application using the photos), otherwise you may not be able to read media off the mount. You'll need to experiment to see if it works for you without the flag.
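For example, a sketch of a mount with the flag enabled (the mount point is just a placeholder):
rclone mount --gphotos-read-size remote: /mnt/gphotos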
Rclone can only upload files to albums it created. This is a limitation of the Google Photos API.
Rclone can only remove files that it uploaded from albums that it created.
@@ -12215,7 +12219,7 @@ y/e/d> y
The Google Photos API does not support deleting albums - see bug #135714733.
Here are the standard options specific to google photos (Google Photos).
-Google Application Client Id Leave blank normally.
Google Application Client Secret Leave blank normally.
Set to make the Google Photos backend read only.
If you choose read only then rclone will only request read only access to your photos, otherwise rclone will request full access.
Here are the advanced options specific to google photos (Google Photos).
-Set to read the size of media items.
-Normally rclone does not read the size of media items since this takes another transaction. This isn’t necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
+Normally rclone does not read the size of media items since this takes another transaction. This isn't necessary for syncing. However rclone mount needs to know the size of files in advance of reading them, so setting this flag when using rclone mount is recommended if you want to read the media.
Year limits the photos to be downloaded to those which were uploaded after the given year.
The HTTP remote is a read only remote for reading files of a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn’t then please file an issue, or send a pull request!)
+The HTTP remote is a read only remote for reading files from a webserver. The webserver should provide file listings which rclone will read and turn into a remote. This has been tested with common webservers such as Apache/Nginx/Caddy and will likely work with file listings from most web servers. (If it doesn't then please file an issue, or send a pull request!)
Paths are specified as remote:
or remote:path/to/dir
.
Here is an example of how to make a remote called remote
. First run:
rclone config
@@ -12314,7 +12318,7 @@ e/n/d/r/c/s/q> q
Sync the remote directory
to /home/local/directory
, deleting any excess files.
rclone sync remote:directory /home/local/directory
This remote is read only - you can’t upload files to an HTTP server.
+This remote is read only - you can't upload files to an HTTP server.
Most HTTP servers store time accurate to 1 second.
rclone lsd --http-url https://beta.rclone.org :http:
Here are the standard options specific to http (http Connection).
-URL of http host to connect to
Here are the advanced options specific to http (http Connection).
-Set HTTP headers for all transactions
Use this to set additional HTTP headers for all transactions
The input format is a comma separated list of key,value pairs. Standard CSV encoding may be used.
-For example to set a Cookie use ‘Cookie,name=value’, or ‘“Cookie”,“name=value”’.
-You can set multiple headers, eg ‘“Cookie”,“name=value”,“Authorization”,“xxx”’.
+For example to set a Cookie use 'Cookie,name=value', or '"Cookie","name=value"'.
+You can set multiple headers, eg '"Cookie","name=value","Authorization","xxx"'.
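For instance, assuming the headers option maps to the --http-headers flag, a listing that sends a cookie with every request might look like this (example.com is a placeholder):
rclone lsd --http-url https://example.com --http-headers "Cookie,name=value" :http: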
Set this if the site doesn’t end directories with /
+Set this if the site doesn't end directories with /
Use this if your target website does not use / on the end of directories.
A / on the end of a path is how rclone normally tells the difference between files and directories. If this flag is set, then rclone will treat all files with Content-Type: text/html as directories and read URLs from them rather than downloading them.
Note that this may cause rclone to confuse genuine HTML files with directories.
@@ -12368,8 +12372,8 @@ e/n/d/r/c/s/q> q
Don’t use HEAD requests to find file sizes in dir listing
+Don't use HEAD requests to find file sizes in dir listing
If your site is being very slow to load then you can try this option. Normally rclone does a HEAD request for each potential file in a directory listing to:
If you set this option, rclone will not do the HEAD request. This will mean
some files that don’t exist may be in the listing
some files that don't exist may be in the listing
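As a sketch, assuming the option maps to the --http-no-head flag, a listing without HEAD requests might look like this (example.com is a placeholder):
rclone lsd --http-no-head --http-url https://example.com :http: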
rclone copy /home/source remote:backup
If you want the directory to be visible in the official Hubic browser, you need to copy your files to the default
directory
rclone copy /home/source remote:default/backup
-This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object as X-Object-Meta-Mtime
as floating point since the epoch accurate to 1 ns.
Note that Hubic wraps the Swift backend, so most of the properties are the same.
Here are the standard options specific to hubic (Hubic).
-Hubic Client Id Leave blank normally.
Hubic Client Secret Leave blank normally.
Here are the advanced options specific to hubic (Hubic).
-Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
Don’t chunk files during streaming upload.
+Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal copy operations.
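For example, assuming the option maps to the --hubic-no-chunk flag, a streaming upload with chunking disabled might look like this (the file name is a placeholder):
rclone rcat --hubic-no-chunk remote:default/backup.tar < backup.tar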
@@ -12486,7 +12490,7 @@ y/e/d> y
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
This uses the normal OpenStack Swift mechanism to refresh the Swift API credentials and ignores the expires field returned by the Hubic API.
-The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.
+The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Jottacloud is a cloud storage service provider from a Norwegian company, using its own datacenters in Norway.
In addition to the official service at jottacloud.com, there are also several whitelabel versions which should work with this backend.
@@ -12572,15 +12576,15 @@ y/e/d> y
To copy a local directory to a Jottacloud directory called backup
rclone copy /home/source remote:backup
The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you’ll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config.
-The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the “regular” mountpoints. Rclone will only to a very limited degree support them. Generally you should avoid these, unless you know what you are doing.
-The official Jottacloud client registers a device for each computer you install it on, and then creates a mountpoint for each folder you select for Backup. The web interface uses a special device called Jotta for the Archive and Sync mountpoints. In most cases you'll want to use the Jotta/Archive device/mountpoint, however if you want to access files uploaded by any of the official clients rclone provides the option to select other devices and mountpoints during config.
+The built-in Jotta device may also contain several other mountpoints, such as: Latest, Links, Shared and Trash. These are special mountpoints with a different internal representation than the "regular" mountpoints. Rclone will support them only to a very limited degree. Generally you should avoid these, unless you know what you are doing.
+This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
Note that the implementation in Jottacloud always uses only a single API request to get the entire list, so for large folders this could lead to a long wait time before the first results are shown.
Jottacloud allows modification times to be set on objects accurate to 1 second. These will be used to detect whether objects need syncing or not.
Jottacloud supports MD5 type hashes, so you can use the --checksum
flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR
environment variable points to) before it is uploaded. Small files will be cached in memory - see the –jottacloud-md5-memory-limit flag.
Note that Jottacloud requires the MD5 hash before upload so if the source does not have an MD5 checksum then the file will be cached temporarily on disk (wherever the TMPDIR
environment variable points to) before it is uploaded. Small files will be cached in memory - see the --jottacloud-md5-memory-limit flag.
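As an illustration, you could point the temporary cache at a roomier disk for a large transfer (both paths are placeholders):
TMPDIR=/mnt/scratch rclone copy /path/to/source remote:backup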
In addition to the default restricted characters set the following characters are also replaced:
Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the –jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command.
+By default rclone will send all files to the trash when deleting files. They will be permanently deleted automatically after 30 days. You may bypass the trash and permanently delete files immediately by using the --jottacloud-hard-delete flag, or set the equivalent environment variable. Emptying the trash is supported by the cleanup command.
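For example, a sketch of a permanent delete and of emptying the trash (the path is a placeholder):
rclone delete --jottacloud-hard-delete remote:path/to/dir
rclone cleanup remote: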
Jottacloud supports file versioning. When rclone uploads a new version of a file it creates a new version of it. Currently rclone only supports retrieving the current version but older versions can be accessed via the Jottacloud Website.
To view your current quota you can use the rclone about remote:
command which will display your usage limit (unless it is unlimited) and the current usage.
Here are the advanced options specific to jottacloud (Jottacloud).
-Files bigger than this will be cached on disk to calculate the MD5 if required.
Only show files that are in the trash. This will show trashed files in their original directory structure.
Delete files permanently rather than putting them into the trash.
Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link.
Files bigger than this can be resumed if the upload fail’s.
+Files bigger than this can be resumed if the upload fails.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Note that Jottacloud is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
-There are quite a few characters that can’t be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead.
+Note that Jottacloud is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+There are quite a few characters that can't be in Jottacloud file names. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ? in it will be mapped to ? instead.
Jottacloud only supports filenames up to 255 characters in length.
Jottacloud exhibits some inconsistent behaviours regarding deleted files and folders which may cause Copy, Move and DirMove operations to previously deleted paths to fail. Emptying the trash should help in such cases.
@@ -12769,10 +12773,10 @@ y/e/d> y
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
Here are the standard options specific to koofr (Koofr).
-Your Koofr user name
Your Koofr password for rclone (generate one at https://app.koofr.net/app/admin/preferences/password)
NB Input to this must be obscured - see rclone obscure.
Here are the advanced options specific to koofr (Koofr).
-The Koofr API endpoint to use
Mount ID of the mount to use. If omitted, the primary mount is used.
Does the backend support setting modification time. Set this to false if you use a mount ID that points to a Dropbox or Amazon Drive backend.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Note that Koofr is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+Note that Koofr is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Mail.ru Cloud is a cloud storage provided by the Russian internet company Mail.Ru Group. The official desktop client is Disk-O:, available only on Windows. (Please note that the official sites are in Russian)
Currently it is recommended to disable 2FA on Mail.ru accounts intended for rclone until support for it is eventually implemented.
remote:directory/subdirectory
last modified time
property, directories don’tlast modified time
property, directories don'tSync /home/local/directory
to the remote path, deleting any excess files in the path.
rclone sync /home/local/directory remote:directory
Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as “Jan 1 1970”.
+Files support a modification time attribute with up to 1 second precision. Directories do not have a modification time, which is shown as "Jan 1 1970".
Hash sums use a custom Mail.ru algorithm based on SHA1. If file size is less than or equal to the SHA1 block size (20 bytes), its hash is simply its data right-padded with zero bytes. Hash sum of a larger file is computed as a SHA1 sum of the file data bytes concatenated with a decimal representation of the data length.
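As a rough sketch of the larger-file case described above, the same digest could in principle be reproduced with standard tools (assuming GNU stat; file is a placeholder):
( cat file; printf '%s' "$(stat -c%s file)" ) | sha1sum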
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
File size limits depend on your account. A single file size is limited by 2G for a free account and unlimited for paid tariffs. Please refer to the Mail.ru site for the total uploaded size limits.
-Note that Mailru is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
+Note that Mailru is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
Here are the standard options specific to mailru (Mail.ru Cloud).
-User name (usually email)
Password
NB Input to this must be obscured - see rclone obscure.
Skip full upload if there is another file with same data hash. This feature is called “speedup” or “put by hash”. It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
+Skip full upload if there is another file with same data hash. This feature is called "speedup" or "put by hash". It is especially efficient in case of generally available files like popular books, video or audio clips, because files are searched by hash in all accounts of all mailru users. Please note that rclone may need local memory and disk space to calculate content hash in advance and decide whether full upload is required. Also, if rclone does not know file size in advance (e.g. in case of streaming or partial uploads), it will not even try this optimization.
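For instance, assuming the option maps to the --mailru-speedup-enable flag, a transfer with speedup disabled might look like this (the source path is a placeholder):
rclone copy --mailru-speedup-enable=false /path/to/source remote:backup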
Here are the advanced options specific to mailru (Mail.ru Cloud).
-Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain ’*’ or ‘?’ meta characters.
+Comma separated list of file name patterns eligible for speedup (put by hash). Patterns are case insensitive and can contain '*' or '?' meta characters.
This option allows you to disable speedup (put by hash) for large files (because preliminary hashing can exhaust your RAM or disk space).
Files larger than the size given below will always be hashed on disk.
What should copy do if file checksum is mismatched or invalid
HTTP user agent used internally by client. Defaults to “rclone/VERSION” or “–user-agent” provided on command line.
+HTTP user agent used internally by client. Defaults to "rclone/VERSION" or "--user-agent" provided on command line.
Comma separated list of internal maintenance flags. This option must not be used by an ordinary user. It is intended only to facilitate remote troubleshooting of backend issues. Strict meaning of flags is not documented and not guaranteed to persist between releases. Quirks will be removed when the backend grows stable. Supported quirks: atomicmkdir binlist gzip insecure retry400
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Mega can have two files with exactly the same name and path (unlike a normal file system).
Duplicated files cause problems with the syncing and you will see messages in the log about duplicates.
Use rclone dedupe
to fix duplicated files.
Mega remotes seem to get blocked (reject logins) under “heavy use”. We haven’t worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands.
-For example, executing this command 90 times in a row rclone link remote:file
will cause the remote to become “blocked”. This is not an abnormal situation, for example if you wish to get the public links of a directory with hundred of files… After more or less a week, the remote will remote accept rclone logins normally again.
Mega remotes seem to get blocked (reject logins) under "heavy use". We haven't worked out the exact blocking rules but it seems to be related to fast paced, successive rclone commands.
+For example, executing this command 90 times in a row rclone link remote:file
will cause the remote to become "blocked". This is not an abnormal situation, for example if you wish to get the public links of a directory with hundreds of files... After more or less a week, the remote will accept rclone logins normally again.
You can mitigate this issue by mounting the remote with rclone mount
. This will log in when mounting and log out when unmounting only. You can also run rclone rcd
and then use rclone rc
to run the commands over the API to avoid logging in each time.
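A sketch of the API approach (the directory name is a placeholder):
rclone rcd &
rclone rc operations/list fs=remote: remote=path/to/dir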
Rclone does not currently close mega sessions (you can see them in the web interface), however closing the sessions does not solve the issue.
-If you space rclone commands by 3 seconds it will avoid blocking the remote. We haven’t identified the exact blocking rules, so perhaps one could execute the command 80 times without waiting and avoid blocking by waiting 3 seconds, then continuing…
+If you space rclone commands by 3 seconds it will avoid blocking the remote. We haven't identified the exact blocking rules, so perhaps one could execute the command 80 times without waiting and avoid blocking by waiting 3 seconds, then continuing...
Note that this has been observed by trial and error and might not be set in stone.
-Other tools seem not to produce this blocking effect, as they use a different working approach (state-based, using sessionIDs instead of log-in) which isn’t compatible with the current stateless rclone approach.
+Other tools seem not to produce this blocking effect, as they use a different working approach (state-based, using sessionIDs instead of log-in) which isn't compatible with the current stateless rclone approach.
Note that once blocked, the use of other tools (such as megacmd) is not a sure workaround: the following megacmd login times have been observed in succession for a blocked remote: 7 min, 20 min, 30 min, 30 min, 30 min. Web access looks unaffected though.
Investigation is continuing in relation to workarounds based on timeouts, pacers, retries and tpslimits - if you discover something relevant, please post on the forum.
So, if rclone was working nicely and suddenly you are unable to log in and you are sure the user and the password are correct, you have likely got the remote blocked for a while.
Here are the standard options specific to mega (Mega).
-User name
Password.
NB Input to this must be obscured - see rclone obscure.
Here are the advanced options specific to mega (Mega).
-Output more debug from Mega.
If this flag is set (along with -vv) it will print further debugging information from the mega backend.
Delete files permanently rather than putting them into the trash.
Normally the mega backend will put all deletions into the trash rather than permanently deleting them. If you specify this then rclone will permanently delete objects instead.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
This backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn’t appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library.
+This backend uses the go-mega go library which is an opensource go library implementing the Mega API. There doesn't appear to be any documentation for the mega protocol beyond the mega C++ SDK source code so there are likely quite a few errors still remaining in this library.
Mega allows duplicate files which may confuse rclone.
The memory backend is an in-RAM backend. It does not persist its data - use the local backend for that.
@@ -13300,7 +13304,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-Because the memory backend isn’t persistent it is most useful for testing or with an rclone server or rclone mount, eg
+Because the memory backend isn't persistent it is most useful for testing or with an rclone server or rclone mount, eg
rclone mount :memory: /mnt/tmp
rclone serve webdav :memory:
rclone serve sftp :memory:
@@ -13351,7 +13355,7 @@ y/e/d> y
rclone ls remote:container
Sync /home/local/directory
to the remote container, deleting any excess files in the container.
rclone sync /home/local/directory remote:container
-This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
The modified time is stored as metadata on the object with the mtime
key. It is stored using RFC3339 Format time with nanosecond precision. The metadata is supplied during directory listings so there is no overhead to using it.
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
MD5 hashes are stored with blobs. However blobs that were uploaded in chunks only have an MD5 if the source remote was capable of MD5 hashes, eg the local disk.
You can also list the single container from the root. This will only show the container specified by the SAS URL.
$ rclone lsd azureblob:
container/
-Note that you can’t see or access any other containers - this will fail
+Note that you can't see or access any other containers - this will fail
rclone ls azureblob:othercontainer
Container level SAS URLs are useful for temporarily allowing third parties access to a single container or putting credentials into an untrusted environment such as a CI build server.
Rclone supports multipart uploads with Azure Blob storage. Files bigger than 256MB will be uploaded using chunked upload by default.
The files will be uploaded in parallel in 4MB chunks (by default). Note that these chunks are buffered in memory and there may be up to --transfers
of them being uploaded at once.
Files can’t be split into more than 50,000 chunks so by default, so the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M
.
Note that rclone doesn’t commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won’t allow more than that amount of uncommitted blocks.
+Files can't be split into more than 50,000 chunks, so by default the largest file that can be uploaded with 4MB chunk size is 195GB. Above this rclone will double the chunk size until it creates less than 50,000 chunks. By default this will mean a maximum file size of 3.2TB can be uploaded. This can be raised to 5TB using --azureblob-chunk-size 100M
.
Note that rclone doesn't commit the block list until the end of the upload which means that there is a limit of 9.5TB of multipart uploads in progress as Azure won't allow more than that amount of uncommitted blocks.
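For example, a sketch of an upload of a very large file with the bigger chunk size mentioned above (file and container names are placeholders):
rclone copy --azureblob-chunk-size 100M /path/to/hugefile azureblob:container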
Here are the standard options specific to azureblob (Microsoft Azure Blob Storage).
-Storage Account Name (leave blank to use SAS URL or Emulator)
Storage Account Key (leave blank to use SAS URL or Emulator)
SAS URL for container level access only (leave blank if using account/key or Emulator)
Uses local storage emulator if provided as ‘true’ (leave blank if using real azure storage endpoint)
+Uses local storage emulator if provided as 'true' (leave blank if using real azure storage endpoint)
Here are the advanced options specific to azureblob (Microsoft Azure Blob Storage).
-Endpoint for the service Leave blank normally.
Cutoff for switching to chunked upload (<= 256MB).
Upload chunk size (<= 100MB).
-Note that this is stored in memory and there may be up to “–transfers” chunks stored at once in memory.
+Note that this is stored in memory and there may be up to "--transfers" chunks stored at once in memory.
Size of blob list.
-This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. “List blobs” requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out ( source ). This can be used to limit the number of blobs items to return, to avoid the time out.
+This sets the number of blobs requested in each listing chunk. Default is the maximum, 5000. "List blobs" requests are permitted 2 minutes per megabyte to complete. If an operation is taking longer than 2 minutes per megabyte on average, it will time out ( source ). This can be used to limit the number of blobs items to return, to avoid the time out.
Access tier of blob: hot, cool or archive.
Archived blobs can be restored by setting access tier to hot or cool. Leave blank if you intend to use default access tier, which is set at account level
-If there is no “access tier” specified, rclone doesn’t apply any tier. rclone performs “Set Tier” operation on blobs while uploading, if objects are not modified, specifying “access tier” to new one will have no effect. If blobs are in “archive tier” at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to “Hot” or “Cool”.
+If there is no "access tier" specified, rclone doesn't apply any tier. rclone performs "Set Tier" operation on blobs while uploading, if objects are not modified, specifying "access tier" to new one will have no effect. If blobs are in "archive tier" at remote, trying to perform data transfer operations from remote will not be allowed. User should first restore by tiering blob to "Hot" or "Cool".
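As an illustration, an archived blob might be restored by re-tiering it with the settier command (the path is a placeholder):
rclone settier hot azureblob:container/path/to/blob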
Don’t store MD5 checksum with object metadata.
+Don't store MD5 checksum with object metadata.
Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.
Whether to use mmap buffers in internal memory pool.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
To copy a local directory to an OneDrive directory called backup
rclone copy /home/source remote:backup
You can use your own Client ID if the default (client_id
left blank) one doesn’t work for you or you see lots of throttling. The default Client ID and Key is shared by all rclone users when performing requests.
You can use your own Client ID if the default (client_id
left blank) one doesn't work for you or you see lots of throttling. The default Client ID and Key is shared by all rclone users when performing requests.
If you are having problems with them (e.g., seeing a lot of throttling), you can get your own Client ID and Key by following the steps below:
New registration
.New registration
.Accounts in any organizational directory (Any Azure AD directory - Multitenant) and personal Microsoft accounts (e.g. Skype, Xbox)
, select Web
in Redirect URI
Enter http://localhost:53682/
and click Register. Copy and keep the Application (client) ID
under the app name for later use.manage
select Certificates & secrets
, click New client secret
. Copy and keep that secret for later use.manage
select API permissions
, click Add a permission
and select Microsoft Graph
then select delegated permissions
.Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Any files you delete with rclone will end up in the trash. Microsoft doesn’t provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft’s apps or via the OneDrive website.
+Any files you delete with rclone will end up in the trash. Microsoft doesn't provide an API to permanently delete files, nor to empty the trash, so you will have to do that with one of Microsoft's apps or via the OneDrive website.
Here are the standard options specific to onedrive (Microsoft OneDrive).
-Microsoft App Client Id Leave blank normally.
Microsoft App Client Secret Leave blank normally.
Here are the advanced options specific to onedrive (Microsoft OneDrive).
-Chunk size to upload files with - must be multiple of 320k (327,680 bytes).
Above this size files will be chunked - must be multiple of 320k (327,680 bytes) and should not exceed 250M (262,144,000 bytes) else you may encounter "Microsoft.SharePoint.Client.InvalidClientQueryException: The request message is too big." Note that the chunks will be buffered into memory.
The ID of the drive to use
The type of the drive ( personal | business | documentLibrary )
Set to make OneNote files show up in directory listings.
-By default rclone will hide OneNote files in directory listings because operations like “Open” and “Update” won’t work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.
+By default rclone will hide OneNote files in directory listings because operations like "Open" and "Update" won't work on them. But this behaviour may also prevent you from deleting them. If you want to delete OneNote files or otherwise want them to show up in directory listing, set this option.
Allow server side operations (eg copy) to work across different onedrive configs.
-This can be useful if you wish to do a server side copy between two different Onedrives. Note that this isn’t enabled by default because it isn’t easy to tell if it will work between any two configurations.
+This can be useful if you wish to do a server side copy between two different Onedrives. Note that this isn't enabled by default because it isn't easy to tell if it will work between any two configurations.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Note that OneDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
-There are quite a few characters that can’t be in OneDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
Note that OneDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+There are quite a few characters that can't be in OneDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
The largest allowed file sizes are 15GB for OneDrive for Business and 100GB for OneDrive Personal (Updated 19 May 2020). Source: https://support.office.com/en-us/article/upload-photos-and-files-to-onedrive-b00ad3fe-6643-4b16-9212-de00ef02b586
The copy
is the only rclone command affected by this as we copy the file and then afterwards set the modification time to match the source file.
Note: Starting October 2018, users will no longer be able to disable versioning by default. This is because Microsoft has brought an update to the mechanism. To change this new default setting, a PowerShell command is required to be run by a SharePoint admin. If you are an admin, you can run these commands in PowerShell to change that setting:
Install-Module -Name Microsoft.Online.SharePoint.PowerShell
(in case you haven’t installed this already)Install-Module -Name Microsoft.Online.SharePoint.PowerShell
(in case you haven't installed this already)Import-Module Microsoft.Online.SharePoint.PowerShell -DisableNameChecking
Connect-SPOService -Url https://YOURSITE-admin.sharepoint.com -Credential YOU@YOURSITE.COM
(replacing YOURSITE
, YOU
, YOURSITE.COM
with the actual values; this will prompt for your credentials)Set-SPOTenant -EnableMinimumVersionRequirement $False
Disconnect-SPOService
(to disconnect from the server)Below are the steps for normal users to disable versioning. If you don’t see the “No Versioning” option, make sure the above requirements are met.
+Below are the steps for normal users to disable versioning. If you don't see the "No Versioning" option, make sure the above requirements are met.
User Weropol has found a method to disable versioning on OneDrive
It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) silently modifies uploaded files, mainly Office files (.docx, .xlsx, etc.), causing file size and hash checks to fail. To use rclone with such affected files on Sharepoint, you may disable these checks with the following command line arguments:
--ignore-checksum --ignore-size
-It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return “item not found” errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR>
command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir
on backend mysharepoint
, you may use:
It is a known issue that Sharepoint (not OneDrive or OneDrive for Business) may return "item not found" errors when users try to replace or delete uploaded files; this seems to mainly affect Office files (.docx, .xlsx, etc.). As a workaround, you may use the --backup-dir <BACKUP_DIR>
command line argument so rclone moves the files to be replaced/deleted into a given backup directory (instead of directly replacing/deleting them). For example, to instruct rclone to move the files into the directory rclone-backup-dir
on backend mysharepoint
, you may use:
--backup-dir mysharepoint:rclone-backup-dir
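Putting both workarounds together, a hypothetical sync protected this way might look like:
rclone sync /path/to/documents mysharepoint:documents --backup-dir mysharepoint:rclone-backup-dir --ignore-checksum --ignore-size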
Error: access_denied
Code: AADSTS65005
Description: Using application 'rclone' is currently not supported for your organization [YOUR_ORGANIZATION] because it is in an unmanaged state. An administrator needs to claim ownership of the company by DNS validation of [YOUR_ORGANIZATION] before the application rclone can be provisioned.
-This means that rclone can’t use the OneDrive for Business API with your account. You can’t do much about it, maybe write an email to your admins.
+This means that rclone can't use the OneDrive for Business API with your account. You can't do much about it, maybe write an email to your admins.
However, there are other ways to interact with your OneDrive account. Have a look at the webdav backend: https://rclone.org/webdav/#sharepoint
Error: invalid_grant
Code: AADSTS50076
Description: Due to a configuration change made by your administrator, or because you moved to a new location, you must use multi-factor authentication to access '...'.
-If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config
, and choose to edit your OneDrive backend. Then, you don’t need to actually make any changes until you reach this question: Already have a token - refresh?
. For this question, answer y
and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
If you see the error above after enabling multi-factor authentication for your account, you can fix it by refreshing your OAuth refresh token. To do that, run rclone config
, and choose to edit your OneDrive backend. Then, you don't need to actually make any changes until you reach this question: Already have a token - refresh?
. For this question, answer y
and go through the process to refresh your token, just like the first time the backend is configured. After this, rclone should work again for this backend.
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory
.
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to opendrive (OpenDrive).
-Username
Password.
NB Input to this must be obscured - see rclone obscure.
Here are the advanced options specific to opendrive (OpenDrive).
-This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Files will be uploaded in chunks this size.
Note that these chunks are buffered in memory so increasing them will increase memory use.
Note that OpenDrive is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
-There are quite a few characters that can’t be in OpenDrive file names. These can’t occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
Note that OpenDrive is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+There are quite a few characters that can't be in OpenDrive file names. These can't occur on Windows platforms, but on non-Windows platforms they are common. Rclone will map these names to and from an identical looking unicode equivalent. For example if a file has a ?
in it will be mapped to ?
instead.
Paths are specified as remote:bucket
(or remote:
for the lsd
command.) You may put subdirectories in too, eg remote:bucket/path/to/dir
.
Here is an example of making an QingStor configuration. First run
@@ -14130,10 +14134,10 @@ y/e/d> y
rclone ls remote:bucket
Sync /home/local/directory
to the remote bucket, deleting any excess files in the bucket.
rclone sync /home/local/directory remote:bucket
-This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don’t have an MD5SUM.
+rclone supports multipart uploads with QingStor which means that it can upload files bigger than 5GB. Note that files uploaded with multipart upload don't have an MD5SUM.
Note that incomplete multipart uploads older than 24 hours can be removed with rclone cleanup remote:bucket
just for one bucket, or rclone cleanup remote:
for all buckets. QingStor does not ever remove incomplete multipart uploads so it may be necessary to run this from time to time.
With QingStor you can list buckets (rclone lsd
) using any zone, but you can only access the content of a bucket from the zone it was created in. If you attempt to access a bucket from the wrong zone, you will get an error, incorrect zone, the bucket is not in 'XXX' zone
.
The control characters 0x00-0x1F and / are replaced as in the default restricted characters set. Note that 0x7F is not replaced.
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to qingstor (QingCloud Object Storage).
-Get QingStor credentials from runtime. Only applies if access_key_id and secret_access_key is blank.
QingStor Access Key ID Leave blank for anonymous access or runtime credentials.
QingStor Secret Access Key (password) Leave blank for anonymous access or runtime credentials.
Enter an endpoint URL to connection QingStor API. Leave blank will use the default value “https://qingstor.com:443”
+Enter an endpoint URL to connect to the QingStor API. Leaving this blank will use the default value "https://qingstor.com:443".
Zone to connect to. Default is “pek3a”.
+Zone to connect to. Default is "pek3a".
Here are the advanced options specific to qingstor (QingCloud Object Storage).
-Number of connection retries.
Cutoff for switching to chunked upload
Any files larger than this will be uploaded in chunks of chunk_size. The minimum is 0 and the maximum is 5GB.
Chunk size to use for uploading.
When uploading files larger than upload_cutoff they will be uploaded as multipart uploads using this chunk size.
-Note that “–qingstor-upload-concurrency” chunks of this size are buffered in memory per transfer.
+Note that "--qingstor-upload-concurrency" chunks of this size are buffered in memory per transfer.
If you are transferring large files over high speed links and you have enough memory, then increasing this will speed up the transfers.
Concurrency for multipart uploads.
This is the number of chunks of the same file that are uploaded concurrently.
NB if you set this to > 1 then the checksums of multipart uploads become corrupted (the uploads themselves are not corrupted though).
@@ -14269,7 +14273,7 @@ y/e/d> y
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
When you run through the config, make sure you choose true
for env_auth
and leave everything else blank.
rclone will then set any empty config parameters from the environment using standard OpenStack environment variables. There is a list of the variables in the docs for the swift library.
If your OpenStack installation uses a non-standard authentication method that might not be yet supported by rclone or the underlying swift library, you can authenticate externally (e.g. calling manually the openstack
commands to get a token). Then, you just need to pass the two configuration variables auth_token
and storage_url
. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.
If your OpenStack installation uses a non-standard authentication method that might not yet be supported by rclone or the underlying swift library, you can authenticate externally (e.g. by manually calling the openstack
commands to get a token). Then, you just need to pass the two configuration variables auth_token
and storage_url
. If they are both provided, the other variables are ignored. rclone will not try to authenticate but instead assume it is already authenticated and use these two variables to access the OpenStack installation.
You can use rclone with swift without a config file, if desired, like this:
source openstack-credentials-file
export RCLONE_CONFIG_MYREMOTE_TYPE=swift
export RCLONE_CONFIG_MYREMOTE_ENV_AUTH=true
rclone lsd myremote:
-This remote supports --fast-list
which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
As noted below, the modified time is stored on metadata on the object. It is used by default for all operations that require checking the time a file was last updated. It allows rclone to treat the remote more like a true filesystem, but it is inefficient because it requires an extra API call to retrieve the metadata.
-For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is “dirty”. By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
For many operations, the time the object was last uploaded to the remote is sufficient to determine if it is "dirty". By using --update
along with --use-server-modtime
, you can avoid the extra API call and simply upload files whose local modtime is newer than the time it was last uploaded.
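For example, a sketch of an upload relying on the server modification time (source path and container are placeholders):
rclone copy --update --use-server-modtime /path/to/source remote:container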
Here are the standard options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
-Get swift credentials from environment variables in standard OpenStack form.
User name to log in (OS_USERNAME).
API key or password (OS_PASSWORD).
Authentication URL for server (OS_AUTH_URL).
User ID to log in - optional - most swift systems use user and leave this blank (v3 auth) (OS_USER_ID).
User domain - optional (v3 auth) (OS_USER_DOMAIN_NAME)
Tenant name - optional for v1 auth, this or tenant_id required otherwise (OS_TENANT_NAME or OS_PROJECT_NAME)
Tenant ID - optional for v1 auth, this or tenant required otherwise (OS_TENANT_ID)
Tenant domain - optional (v3 auth) (OS_PROJECT_DOMAIN_NAME)
Region name - optional (OS_REGION_NAME)
Storage URL - optional (OS_STORAGE_URL)
Auth Token from alternate authentication - optional (OS_AUTH_TOKEN)
Application Credential ID (OS_APPLICATION_CREDENTIAL_ID)
Application Credential Name (OS_APPLICATION_CREDENTIAL_NAME)
Application Credential Secret (OS_APPLICATION_CREDENTIAL_SECRET)
AuthVersion - optional - set to (1,2,3) if your auth URL has no version (ST_AUTH_VERSION)
Endpoint type to choose from the service catalogue (OS_ENDPOINT_TYPE)
The storage policy to use when creating a new container
This applies the specified storage policy when creating a new container. The policy cannot be changed afterwards. The allowed configuration values and their meaning depend on your Swift storage provider.
Here are the advanced options specific to swift (OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)).
-Above this size files will be chunked into a _segments container.
Above this size files will be chunked into a _segments container. The default for this is 5GB which is its maximum value.
Don’t chunk files during streaming upload.
+Don't chunk files during streaming upload.
When doing streaming uploads (eg using rcat or mount) setting this flag will cause the swift backend to not upload chunked files.
This will limit the maximum upload size to 5GB. However non chunked files are easier to deal with and have an MD5SUM.
Rclone will still chunk files bigger than chunk_size when doing normal copy operations.
@@ -14661,7 +14665,7 @@ rclone lsd myremote:
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
The Swift API doesn’t return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won’t check or use the MD5SUM for these.
+The Swift API doesn't return a correct MD5SUM for segmented files (Dynamic or Static Large Objects) so rclone won't check or use the MD5SUM for these.
Due to an oddity of the underlying swift library, it gives a “Bad Request” error rather than a more sensible error when the authentication fails for Swift.
+Due to an oddity of the underlying swift library, it gives a "Bad Request" error rather than a more sensible error when the authentication fails for Swift.
So this most likely means your username / password is wrong. You can investigate further with the --dump-bodies
flag.
This may also be caused by specifying the region when you shouldn’t have (eg OVH).
-This may also be caused by specifying the region when you shouldn't have (eg OVH).
+This is most likely caused by forgetting to specify your tenant when setting up a swift remote.
Paths are specified as remote:path
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Deleted files will be moved to the trash. Your subscription level will determine how long items stay in the trash. rclone cleanup
can be used to empty the trash.
So if the folder you want rclone to use has a URL which looks like https://my.pcloud.com/#page=filemanager&folder=5xxxxxxxx8&tpl=foldergrid
in the browser, then you use 5xxxxxxxx8
as the root_folder_id
in the config.
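As a sketch, the relevant part of the resulting config section might then look like this (using the placeholder id from above):
[remote]
type = pcloud
root_folder_id = 5xxxxxxxx8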
Here are the standard options specific to pcloud (Pcloud).
-Pcloud App Client Id Leave blank normally.
Pcloud App Client Secret Leave blank normally.
Here are the advanced options specific to pcloud (Pcloud).
-This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Fill in for rclone to use a non root folder as its starting point.
Hostname to connect to.
+This is normally set when rclone initially does the oauth connection.
+Paths are specified as remote:path
Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the standard options specific to premiumizeme (premiumize.me).
-API Key.
This is not normally used - use oauth instead.
Here are the advanced options specific to premiumizeme (premiumize.me).
-This sets the encoding for the backend.
See: the encoding section in the overview for more info.
-Note that premiumize.me is case insensitive so you can’t have a file called “Hello.doc” and one called “hello.doc”.
-premiumize.me file names can’t have the \ or " characters in. rclone maps these to and from an identical looking unicode equivalents \ and "
+Note that premiumize.me is case insensitive so you can't have a file called "Hello.doc" and one called "hello.doc".
+premiumize.me file names can't have the \ or " characters in. rclone maps these to and from identical looking unicode equivalents \ and "
premiumize.me only supports filenames up to 255 characters in length.
Paths are specified as remote:path
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Here are the advanced options specific to putio (Put.io).
-This sets the encoding for the backend.
See: the encoding section in the overview for more info.
This is a backend for the Seafile storage service. It works with both the free community edition and the professional edition, supports Seafile versions 6.x and 7.x, supports encrypted libraries, and works with 2FA enabled users.
-There are two distinct modes you can setup your remote: - you point your remote to the root of the server, meaning you don’t specify a library during the configuration: Paths are specified as remote:library. You may put subdirectories in too, eg remote:library/path/to/dir. - you point your remote to a specific library during the configuration: Paths are specified as remote:path/to/dir. This is the recommended mode when using encrypted libraries. (This mode is possibly slightly faster than the root mode)
+There are two distinct modes you can set up your remote: either you point your remote to the root of the server, meaning you don't specify a library during the configuration (paths are then specified as remote:library, and you may put subdirectories in too, eg remote:library/path/to/dir), or you point your remote to a specific library during the configuration (paths are then specified as remote:path/to/dir). The latter is the recommended mode when using encrypted libraries, and is possibly slightly faster than the root mode.
Here is an example of making a seafile configuration for a user with no two-factor authentication. First run
rclone config
@@ -15096,7 +15109,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-This remote is called seafile. It’s pointing to the root of your seafile server and can now be used like this:
+This remote is called seafile. It's pointing to the root of your seafile server and can now be used like this:
See all libraries
rclone lsd seafile:
Create a new library
@@ -15106,7 +15119,7 @@ y/e/d> y
Sync /home/local/directory to the remote library, deleting any excess files in the library.
rclone sync /home/local/directory seafile:library
-Here’s an example of a configuration in library mode with a user that has the two-factor authentication enabled. Your 2FA code will be asked at the end of the configuration, and will attempt to authenticate you:
+Here's an example of a configuration in library mode with a user that has two-factor authentication enabled. You will be asked for your 2FA code at the end of the configuration, and rclone will then attempt to authenticate you:
No remotes found - make a new one
n) New remote
s) Set configuration password
@@ -15174,7 +15187,7 @@ y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y
-You’ll notice your password is blank in the configuration. It’s because we only need the password to authenticate you once.
+You'll notice your password is blank in the configuration. It's because we only need the password to authenticate you once.
You specified My Library during the configuration. The root of the remote is pointing at the root of the library My Library:
See all files in the library:
rclone lsd seafile:
@@ -15184,7 +15197,7 @@ y/e/d> y
rclone ls seafile:directory
Sync /home/local/directory
to the remote library, deleting any excess files in the library.
rclone sync /home/local/directory seafile:
-Seafile version 7+ supports --fast-list which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details. Please note this is not supported on seafile server version 6.x
In addition to the default restricted characters set the following characters are also replaced:
@@ -15214,7 +15227,7 @@ y/e/d> y
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Rclone supports generating share links for non-encrypted libraries only. They can either be for a file or a directory:
rclone link seafile:seafile-tutorial.doc
@@ -15226,10 +15239,10 @@ http://my.seafile.server/d/9ea2455f6f55478bbb0d/
Please note a share link is unique for each file or directory. If you run a link command on a file/dir that has already been shared, you will get the exact same link.
It has been actively tested using the seafile docker image of these versions: 6.3.4 community edition, 7.0.5 community edition and 7.1.3 community edition.
-Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven’t been tested and might not work properly.
+Versions below 6.0 are not supported. Versions between 6.0 and 6.3 haven't been tested and might not work properly.
Here are the standard options specific to seafile (seafile).
-URL of seafile host to connect to
User name (usually email address)
Password
NB Input to this must be obscured - see rclone obscure.
-Two-factor authentication (‘true’ if the account has 2FA enabled)
+Two-factor authentication ('true' if the account has 2FA enabled)
Name of the library. Leave blank to access all non-encrypted libraries.
Library password (for encrypted libraries only). Leave blank if you pass it through the command line.
NB Input to this must be obscured - see rclone obscure.
Authentication token
Here are the advanced options specific to seafile (seafile).
-Should rclone create a library if it doesn’t exist
+Should rclone create a library if it doesn't exist
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
SFTP runs over SSH v2 and is installed as standard with most modern SSH installations.
-Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user’s home directory.
+Paths are specified as remote:path. If the path does not begin with a / it is relative to the home directory of the user. An empty path remote: refers to the user's home directory.
Note that some SFTP servers will need the leading / - Synology is a good example of this. rsync.net, on the other hand, requires users to OMIT the leading /.
Here is an example of making an SFTP configuration. First run
rclone config
@@ -15385,11 +15398,11 @@ y/e/d> y
Key files should be PEM-encoded private key files. For instance /home/$USER/.ssh/id_rsa. Only unencrypted OpenSSH or PEM encrypted files are supported.
-The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line (‘\n’ or ‘\r\n’) separating lines. i.e.
-key_pem = —–BEGIN RSA PRIVATE KEY—–0gAMbMbaSsd—–END RSA PRIVATE KEY—–
+The key file can be specified in either an external file (key_file) or contained within the rclone config file (key_pem). If using key_pem in the config file, the entry should be on a single line with new line ('\n' or '\r\n') separating lines. i.e.
+key_pem = -----BEGIN RSA PRIVATE KEY-----0gAMbMbaSsd-----END RSA PRIVATE KEY-----
This will generate it correctly for key_pem for use in the config:
awk '{printf "%s\\n", $0}' < ~/.ssh/id_rsa
-If you don’t specify pass, key_file, or key_pem then rclone will attempt to contact an ssh-agent.
+If you don't specify pass, key_file, or key_pem then rclone will attempt to contact an ssh-agent.
You can also specify key_use_agent to force the usage of an ssh-agent. In this case key_file or key_pem can also be specified to force the usage of a specific key in the ssh-agent.
Using an ssh-agent is the only way to load encrypted OpenSSH keys at the moment.
If you set the --sftp-ask-password option, rclone will prompt for a password when one is needed and none has been configured.
Some SFTP servers disable setting/modifying the file modification time after upload (for example, certain configurations of ProFTPd with mod_sftp). If you are using one of these servers, you can set the option set_modtime = false in your rclone backend configuration to disable this behaviour.
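For example, a minimal sketch of such a configuration (remote name and host hypothetical):
[myserver]
type = sftp
host = sftp.example.com
set_modtime = false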
Here are the standard options specific to sftp (SSH/SFTP Connection).
-SSH host to connect to
SSH username, leave blank for current username, ncw
SSH port, leave blank to use default (22)
SSH password, leave blank to use ssh-agent.
NB Input to this must be obscured - see rclone obscure.
Raw PEM-encoded private key, If specified, will override key_file parameter.
Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
The passphrase to decrypt the PEM-encoded private key file.
-Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can’t be used.
+Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys in the new OpenSSH format can't be used.
NB Input to this must be obscured - see rclone obscure.
When set forces the usage of the ssh-agent.
-When key-file is also set, the “.pub” file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows to avoid Too many authentication failures for *username* errors when the ssh-agent contains many keys.
+When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is requested from the ssh-agent. This allows you to avoid Too many authentication failures for *username* errors when the ssh-agent contains many keys.
Enable the use of insecure ciphers and key exchange methods.
This enables the use of the following insecure ciphers and key exchange methods:
Disable the execution of SSH commands to determine if remote file hashing is available. Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Here are the advanced options specific to sftp (SSH/SFTP Connection).
-Allow asking for SFTP password when needed.
If this is set and no password is supplied then rclone will ask for a password and will not contact the ssh agent.
Override path used by SSH connection.
This allows checksum calculation when SFTP and SSH paths are different. This issue affects among others Synology NAS boxes.
Shared folders can be found in directories representing volumes
rclone sync /home/local/directory remote:/directory --ssh-path-override /volume2/directory
-Home directory can be found in a shared folder called “home”
+Home directory can be found in a shared folder called "home"
rclone sync /home/local/directory remote:/home/directory --ssh-path-override /volume1/homes/USER/directory
Set the modified time on the remote if set.
The command used to read md5 hashes. Leave blank for autodetect.
The command used to read sha1 hashes. Leave blank for autodetect.
Set to skip any symlinks and any other non regular files.
-SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote’s PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.
-SFTP also supports about if the same login has shell access and df are in the remote’s PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote’s PATH.
-Note that some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can’t be calculated properly. For them using disable_hashcheck is a good idea.
-The only ssh agent supported under Windows is Putty’s pageant.
+SFTP supports checksums if the same login has shell access and md5sum or sha1sum as well as echo are in the remote's PATH. This remote checksumming (file hashing) is recommended and enabled by default. Disabling the checksumming may be required if you are connecting to SFTP servers which are not under your control, and to which the execution of remote commands is prohibited. Set the configuration option disable_hashcheck to true to disable checksumming.
+SFTP also supports about if the same login has shell access and df is in the remote's PATH. about will return the total space, free space, and used space on the remote for the disk of the specified path on the remote or, if not set, the disk of the root on the remote. about will fail if it does not have shell access or if df is not in the remote's PATH.
+Note that on some SFTP servers (eg Synology) the paths are different for SSH and SFTP so the hashes can't be calculated properly. For these servers using disable_hashcheck is a good idea.
+The only ssh agent supported under Windows is Putty's pageant.
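For example, a minimal config sketch for such a restricted server (remote name and host hypothetical):
[restricted]
type = sftp
host = sftp.example.com
disable_hashcheck = true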
The Go SSH library disables the use of the aes128-cbc cipher by default, due to security concerns. This can be re-enabled on a per-connection basis by setting the use_insecure_cipher setting in the configuration file to true. Further details on the insecurity of this cipher can be found in this paper (http://www.isg.rhul.ac.uk/~kp/SandPfinal.pdf).
-SFTP isn’t supported under plan9 until this issue is fixed.
-Note that since SFTP isn’t HTTP based the following flags don’t work with it: --dump-headers, --dump-bodies, --dump-auth
-Note that --timeout isn’t supported (but --contimeout is).
+SFTP isn't supported under plan9 until this issue is fixed.
+Note that since SFTP isn't HTTP based the following flags don't work with it: --dump-headers, --dump-bodies, --dump-auth
+Note that --timeout isn't supported (but --contimeout is).
C14 is supported through the SFTP backend.
rsync.net is supported through the SFTP backend.
-See rsync.net’s documentation of rclone examples.
+See rsync.net's documentation of rclone examples.
SugarSync is a cloud service that enables active synchronization of files across computers and other devices for file backup, access, syncing, and sharing.
The initial setup for SugarSync involves getting a token from SugarSync which you can do with rclone. rclone config walks you through it.
-Note that the config asks for your email and password but doesn’t store them, it only uses them to get the initial token.
+Note that the config asks for your email and password but doesn't store them, it only uses them to get the initial token.
Once configured you can then use rclone like this,
List directories (sync folders) in top level of your SugarSync
rclone lsd remote:
-List all the files in your SugarSync folder “Test”
+List all the files in your SugarSync folder "Test"
rclone ls remote:Test
To copy a local directory to a SugarSync folder called backup
rclone copy /home/source remote:backup
Paths are specified as remote:path
Paths may be as deep as required, eg remote:directory/subdirectory.
-NB you can’t create files in the top level folder you have to create a folder, which rclone will create as a “Sync Folder” with SugarSync.
+NB you can't create files in the top level folder; you have to create a folder first, which rclone will create as a "Sync Folder" with SugarSync.
SugarSync does not support modification times or hashes, therefore syncing will default to --size-only checking. Note that using --update will work as rclone can read the time files were uploaded.
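For example, a copy relying on upload times rather than sizes alone might look like this (paths hypothetical):
rclone copy --update /home/source remote:backup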
SugarSync replaces the default restricted characters set except for DEL.
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in XML strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in XML strings.
-Deleted files will be moved to the “Deleted items” folder by default.
+Deleted files will be moved to the "Deleted items" folder by default.
However you can supply the flag --sugarsync-hard-delete or set the config parameter hard_delete = true if you would like files to be deleted straight away.
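For example (folder name hypothetical):
rclone delete --sugarsync-hard-delete remote:Test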
Here are the standard options specific to sugarsync (Sugarsync).
-Sugarsync App ID.
-Leave blank to use rclone’s.
+Leave blank to use rclone's.
Sugarsync Access Key ID.
-Leave blank to use rclone’s.
+Leave blank to use rclone's.
Sugarsync Private Access Key
-Leave blank to use rclone’s.
+Leave blank to use rclone's.
Permanently delete files if true otherwise put them in the deleted files.
Here are the advanced options specific to sugarsync (Sugarsync).
-Sugarsync refresh token
Leave blank normally, will be auto configured by rclone.
Sugarsync authorization
Leave blank normally, will be auto configured by rclone.
Sugarsync authorization expiry
Leave blank normally, will be auto configured by rclone.
Sugarsync user
Leave blank normally, will be auto configured by rclone.
Sugarsync root id
Leave blank normally, will be auto configured by rclone.
Sugarsync deleted folder id
Leave blank normally, will be auto configured by rclone.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Paths are specified as remote:bucket (or remote: for the lsf command.) You may put subdirectories in too, eg remote:bucket/path/to/dir.
Once configured you can then use rclone like this.
Use the mkdir command to create a new bucket, e.g. bucket.
rclone mkdir remote:bucket
Use the lsf command to list all buckets.
Use the copy command to upload an object.
rclone copy --progress /home/local/directory/file.ext remote:bucket/path/to/dir/
-The --progress flag is for displaying progress information. Remove it if you don’t need this information.
+The --progress flag is for displaying progress information. Remove it if you don't need this information.
Use a folder in the local path to upload all its objects.
rclone copy --progress /home/local/directory/ remote:bucket/path/to/dir/
Only modified files will be copied.
@@ -15895,7 +15908,7 @@ y/e/d> y
Use the copy command to download an object.
rclone copy --progress remote:bucket/path/to/dir/file.ext /home/local/directory/
-The --progress flag is for displaying progress information. Remove it if you don’t need this information.
+The --progress flag is for displaying progress information. Remove it if you don't need this information.
Use a folder in the remote path to download all its objects.
rclone copy --progress remote:bucket/path/to/dir/ /home/local/directory/
Use the sync command to sync the source to the destination, changing the destination only, deleting any excess files.
rclone sync --progress /home/local/directory/ remote:bucket/path/to/dir/
-The --progress flag is for displaying progress information. Remove it if you don’t need this information.
+The --progress flag is for displaying progress information. Remove it if you don't need this information.
Since this can cause data loss, test first with the --dry-run flag to see exactly what would be copied and deleted.
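For example, a dry run of the sync above:
rclone sync --dry-run /home/local/directory/ remote:bucket/path/to/dir/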
The sync can also be done from Tardigrade to the local file system.
rclone sync --progress remote:bucket/path/to/dir/ /home/local/directory/
@@ -15919,26 +15932,26 @@ y/e/d> y
rclone sync --progress s3:bucket/path/to/dir/ tardigrade:bucket/path/to/dir/
Here are the standard options specific to tardigrade (Tardigrade Decentralized Cloud Storage).
-Choose an authentication method.
Access Grant.
Satellite Address. Custom satellite address should match the format: <nodeid>@<address>:<port>
.
API Key.
Encryption Passphrase. To access existing objects enter passphrase used for uploading.
rclone copy C:\source remote:source
Here are the standard options specific to union (Union merges the contents of several upstream fs).
-List of space separated upstreams. Can be ‘upstreama:test/dir upstreamb:’, ‘“upstreama:test/space:ro dir” upstreamb:’, etc.
+List of space separated upstreams. Can be 'upstreama:test/dir upstreamb:', '"upstreama:test/space:ro dir" upstreamb:', etc.
Policy to choose upstream on ACTION category.
Policy to choose upstream on CREATE category.
Policy to choose upstream on SEARCH category.
Cache time of usage and free space (in seconds). This option is only useful when a path preserving policy is used.
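For example, a minimal config sketch merging two upstreams (remote names hypothetical):
[myunion]
type = union
upstreams = remote1:dir1 remote2:dir2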
Likewise plain WebDAV does not support hashes, however when used with Owncloud or Nextcloud rclone will support SHA1 and MD5 hashes. Depending on the exact version of Owncloud or Nextcloud hashes may appear on all objects, or only on objects which had a hash uploaded with them.
Here are the standard options specific to webdav (Webdav).
-URL of http host to connect to
Name of the Webdav site/service/software you are using
User name
Password.
NB Input to this must be obscured - see rclone obscure.
Bearer token instead of user/pass (eg a Macaroon)
Here are the advanced options specific to webdav (Webdav).
-Command to run to get a bearer token
This is configured in an identical way to Owncloud. Note that Nextcloud does not support streaming of files (rcat) whereas Owncloud does. This may be fixed in the future.
Rclone can be used with Sharepoint provided by OneDrive for Business or Office365 Education Accounts. This feature is only needed for a few of these Accounts, mostly Office365 Education ones. These accounts are sometimes not verified by the domain owner (see github#1975)
-This means that these accounts can’t be added using the official API (other Accounts should work with the “onedrive” option). However, it is possible to access them using webdav.
-To use a sharepoint remote with rclone, add it like this: First, you need to get your remote’s URL:
+This means that these accounts can't be added using the official API (other Accounts should work with the "onedrive" option). However, it is possible to access them using webdav.
+To use a sharepoint remote with rclone, add it like this: First, you need to get your remote's URL:
https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/_layouts/15/onedrive.aspx
-You’ll only need this URL up to the email address. After that, you’ll most likely want to add “/Documents”. That subdirectory contains the actual data stored on your OneDrive.
+You'll only need this URL up to the email address. After that, you'll most likely want to add "/Documents". That subdirectory contains the actual data stored on your OneDrive.
Add the remote to rclone like this: Configure the url as https://[YOUR-DOMAIN]-my.sharepoint.com/personal/[YOUR-EMAIL]/Documents and use your normal account email and password for user and pass. If you have 2FA enabled, you have to generate an app password. Set the vendor to sharepoint.
Your config file should look like this:
[sharepoint]
@@ -16432,12 +16445,12 @@ vendor = other
user = YourEmailAddress
pass = encryptedpassword
-As SharePoint does some special things with uploaded documents, you won’t be able to use the documents size or the documents hash to compare if a file has been changed since the upload / which file is newer.
-For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the “Last Modified” datetime property to compare your documents:
+As SharePoint does some special things with uploaded documents, you won't be able to use the document size or the document hash to compare if a file has been changed since the upload / which file is newer.
+For Rclone calls copying files (especially Office files such as .docx, .xlsx, etc.) from/to SharePoint (like copy, sync, etc.), you should append these flags to ensure Rclone uses the "Last Modified" datetime property to compare your documents:
--ignore-size --ignore-checksum --update
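For example, a sync using these flags might look like this (remote name and paths hypothetical):
rclone sync --ignore-size --ignore-checksum --update /home/local/documents sharepoint:Documents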
dCache is a storage system that supports many protocols and authentication/authorisation schemes. For WebDAV clients, it allows users to authenticate with username and password (BASIC), X.509, Kerberos, and various bearer tokens, including Macaroons and OpenID-Connect access tokens.
-Configure as normal using the other type. Don’t enter a username or password, instead enter your Macaroon as the bearer_token.
+Configure as normal using the other type. Don't enter a username or password, instead enter your Macaroon as the bearer_token.
The config will end up looking something like this.
[dcache]
type = webdav
@@ -16456,7 +16469,7 @@ eyJraWQ[...]QFXDt0
paul@celebrimbor:~$
Note: Before the oidc-token command will work, the refresh token must be loaded into the oidc agent. This is done with the oidc-add command (e.g., oidc-add XDC). This is typically done once per login session. Full details on this and how to register oidc-agent with your OIDC Provider are provided in the oidc-agent documentation.
The rclone bearer_token_command configuration option is used to fetch the access token from oidc-agent.
-Configure as a normal WebDAV endpoint, using the ‘other’ vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-agent XDC).
+Configure as a normal WebDAV endpoint, using the 'other' vendor, leaving the username and password empty. When prompted, choose to edit the advanced config and enter the command to get a bearer token (e.g., oidc-token XDC).
The following example config shows a WebDAV endpoint that uses oidc-agent to supply an access token from the XDC OIDC Provider.
[dcache]
type = webdav
@@ -16527,12 +16540,12 @@ y/e/d> y
To view your current quota you can use the rclone about remote: command which will display your usage limit (quota) and the current usage.
The default restricted characters set is replaced.
-Invalid UTF-8 bytes will also be replaced, as they can’t be used in JSON strings.
+Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
-When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you’ll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
+When uploading very large files (bigger than about 5GB) you will need to increase the --timeout parameter. This is because Yandex pauses (perhaps to calculate the MD5SUM for the entire file) before returning confirmation that the file has been uploaded. The default handling of timeouts in rclone is to assume a 5 minute pause is an error and close the connection - you'll see net/http: timeout awaiting response headers errors in the logs if this is happening. Setting the timeout to twice the max size of file in GB should be enough, so if you want to upload a 30GB file set a timeout of 2 * 30 = 60m, that is --timeout 60m.
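For example, for a file of around 30GB (paths hypothetical):
rclone copy --timeout 60m /path/to/big/file remote:backup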
Here are the standard options specific to yandex (Yandex Disk).
-Yandex Client Id Leave blank normally.
Yandex Client Secret Leave blank normally.
Here are the advanced options specific to yandex (Yandex Disk).
-Remove existing public link to file/folder with link command rather than creating. Default is false, meaning link command will create or retrieve public link.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Rclone reads and writes the modified time using an accuracy determined by the OS. Typically this is 1ns on Linux, 10ns on Windows and 1s on OS X.
Filenames should be encoded in UTF-8 on disk. This is the normal case for Windows and OS X.
-There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded files names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions’ package managers.
+There is a bit more uncertainty in the Linux world, but new distributions will have UTF-8 encoded file names. If you are using an old Linux filesystem with non UTF-8 file names (eg latin1) then you can use the convmv tool to convert the filesystem to UTF-8. This tool is available in most distributions' package managers.
If an invalid (non-UTF8) filename is read, the invalid characters will be replaced with a quoted representation of the invalid bytes. The name gro\xdf will be transferred as gro‛DF. rclone will emit a debug message in this case (use -v to see), eg
Local file system at .: Replacing invalid UTF-8 characters in "gro\xdf"
-Invalid UTF-8 bytes will also be replaced, as they can’t be converted to UTF-16.
+Invalid UTF-8 bytes will also be replaced, as they can't be converted to UTF-16.
Rclone handles long paths automatically, by converting all paths to long UNC paths which allows paths up to 32,767 characters.
This is why you will see that your paths, for instance c:\files, are converted to the UNC path \\?\c:\files in the output, and \\server\share is converted to \\?\UNC\server\share.
Normally rclone will ignore symlinks or junction points (which behave like symlinks under Windows).
-If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a ‘.rclonelink’ suffix in the remote storage.
+If you supply this flag then rclone will copy symbolic links from the local storage, and store them as text files, with a '.rclonelink' suffix in the remote storage.
The text file will contain the target of the symbolic link (see example).
This flag applies to all commands.
For example, supposing you have a directory structure like this
@@ -16888,9 +16901,9 @@ nounc = true
/tmp/a
├── file1 -> ./file4
└── file2 -> /home/user/file3
-Copying the entire directory with ‘-l’
+Copying the entire directory with '-l'
$ rclone copyto -l /tmp/a/ remote:/tmp/a/
-The remote files are created with a ‘.rclonelink’ suffix
+The remote files are created with a '.rclonelink' suffix
$ rclone ls remote:/tmp/a
5 file1.rclonelink
14 file2.rclonelink
@@ -16900,14 +16913,14 @@ nounc = true
$ rclone cat remote:/tmp/a/file2.rclonelink
/home/user/file3
-Copying them back with ‘-l’
+Copying them back with '-l'
$ rclone copyto -l remote:/tmp/a/ /tmp/b/
$ tree /tmp/b
/tmp/b
├── file1 -> ./file4
└── file2 -> /home/user/file3
-However, if copied back without ‘-l’
+However, if copied back without '-l'
$ rclone copyto remote:/tmp/a/ /tmp/b/
$ tree /tmp/b
@@ -16915,7 +16928,7 @@ $ tree /tmp/b
├── file1.rclonelink
└── file2.rclonelink
Note that this flag is incompatible with --copy-links / -L.
Normally rclone will recurse through filesystems as mounted. However if you set --one-file-system or -x this tells rclone to stay in the filesystem specified by the root and not to recurse into different file systems.
For example if you have a directory hierarchy like this
@@ -16936,10 +16949,10 @@ $ tree /tmp/b
0 file1
0 file2
NB Rclone (like most unix tools such as du, rsync and tar) treats a bind mount to the same device as being on the same filesystem.
-NB This flag is only available on Unix based systems. On systems where it isn’t supported (eg Windows) it will be ignored.
+NB This flag is only available on Unix based systems. On systems where it isn't supported (eg Windows) it will be ignored.
Here are the standard options specific to local (Local Disk).
-Disable UNC (long path names) conversion on Windows
Here are the advanced options specific to local (Local Disk).
-Follow symlinks and copy the pointed to item.
-Translate symlinks to/from regular files with a ‘.rclonelink’ extension
+Translate symlinks to/from regular files with a '.rclonelink' extension
-Don’t warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
+Don't warn about skipped symlinks. This flag disables warning messages on skipped symlinks or junction points, as you explicitly acknowledge that they should be skipped.
-Don’t apply unicode normalization to paths and filenames (Deprecated)
+Don't apply unicode normalization to paths and filenames (Deprecated)
This flag is deprecated now. Rclone no longer normalizes unicode file names, but it compares them with unicode normalization in the sync routine instead.
-Don’t check to see if the files change during upload
-Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts “can’t copy - source file is being updated” if the file changes during upload.
+Don't check to see if the files change during upload
+Normally rclone checks the size and modification time of files as they are being uploaded and aborts with a message which starts "can't copy - source file is being updated" if the file changes during upload.
However on some file systems this modification time check may fail (eg Glusterfs #2206) so this check can be disabled with this flag.
-Don’t cross filesystem boundaries (unix/macOS only).
+Don't cross filesystem boundaries (unix/macOS only).
Force the filesystem to report itself as case sensitive.
Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
Force the filesystem to report itself as case insensitive
Normally the local backend declares itself as case insensitive on Windows/macOS and case sensitive for everything else. Use this flag to override the default choice.
Disable sparse files for multi-thread downloads
On Windows platforms rclone will make sparse files when doing multi-thread downloads. This avoids long pauses on large files where the OS zeros the file. However sparse files may be undesirable as they cause disk fragmentation and can be slow to work with.
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
Run them with
rclone backend COMMAND remote:
The help below will explain what arguments each command takes.
-See the “rclone backend” command for more info on how to pass options and arguments.
+See the "rclone backend" command for more info on how to pass options and arguments.
These can be run on a running backend using the rc command backend/command.
A null operation for testing backend commands
@@ -17056,10 +17069,55 @@ $ tree /tmp/b
This is a test command which has some options you can try to change the output.
Options:
--delete-before (Nick Craig-Wood)
--delete-before (Nick Craig-Wood)
--local-no-sparse flag for disabling sparse files (Nick Craig-Wood)
rclone backend noop for testing purposes (Nick Craig-Wood)
--fast-list and --drive-shared-with-me (Nick Craig-Wood)
--drive-shared-with-me (Nick Craig-Wood)
--drive-stop-on-upload-limit to respond to teamDriveFileLimitExceeded. (harry)
--header-upload and --header-download (Tim Gallant)
--header-upload and --header-download (Tim Gallant)
session.New() with session.NewSession() (Lars Lehtonen)
--s3-disable-checksum (Nick Craig-Wood)
--order-by flag to order transfers (Nick Craig-Wood)
--vfs-cache-mode writes (Nick Craig-Wood)
--sftp-skip-links to skip symlinks and non regular files (Nick Craig-Wood)
DropboxHash and CRC-32 (Nick Craig-Wood)
--update/-u not transfer files that haven’t changed (Nick Craig-Wood)
--update/-u not transfer files that haven't changed (Nick Craig-Wood)
--files-from without --no-traverse doing a recursive scan (Nick Craig-Wood)
--progress work in git bash on Windows (Nick Craig-Wood)
--size-only and --ignore-size together. (Nick Craig-Wood)
--size-only and --ignore-size together. (Nick Craig-Wood)
--files-from is in use (Michele Caci)
--ignore-checksum (Nick Craig-Wood)
--ignore-checksum (Nick Craig-Wood)
--size-only mode (Nick Craig-Wood)
--no-traverse (buengese)
--local-case-sensitive and --local-case-insensitive (Nick Craig-Wood)
--backup-dir (Nick Craig-Wood)
--ignore-checksum is in effect, don’t calculate checksum (Nick Craig-Wood)
--ignore-checksum is in effect, don't calculate checksum (Nick Craig-Wood)
--rc-serve (Nick Craig-Wood)
--s3-use-accelerate-endpoint (Nick Craig-Wood)
--fast-list for listing operations where it won’t use more memory (Nick Craig-Wood)
+--fast-list for listing operations where it won't use more memory (Nick Craig-Wood)
ListR
dedupe, serve restic
lsf, ls, lsl, lsjson, lsd, md5sum, sha1sum, hashsum, size, delete, cat, settier
--files-only and --dirs-only flags (calistri)
rclone link (Nick Craig-Wood)
--dir-perms and --file-perms flags to set default permissions (Nick Craig-Wood)
--dry-run set (Nick Craig-Wood)
--fast-list flag
--files-from and non-existent files (Nick Craig-Wood)
--files-from only read the objects specified and don’t scan directories (Nick Craig-Wood)
+--files-from only read the objects specified and don't scan directories (Nick Craig-Wood)
--ignore-case flag (Nick Craig-Wood)
--json flag for structured JSON input (Nick Craig-Wood)
--progress update the stats correctly at the end (Nick Craig-Wood)
--dry-run (Nick Craig-Wood)
--dry-run (Nick Craig-Wood)
--config (albertony)
--progress on windows (Nick Craig-Wood)
--config (albertony)
--progress on windows (Nick Craig-Wood)
--files-from work-around
+--files-from work-around
--absolute flag to add a leading / onto path names
--csv flag for compliant CSV output
--drive-acknowledge-abuse to download flagged files
--drive-alternate-export to fix large doc export
. and .. from directory listing
rc: enable the remote control of a running rclone
--backup-dir don’t delete files if we can’t set their modtime
+--backup-dir don't delete files if we can't set their modtime
--backup-dir
serve http: fix serving files with : in - fixes
--exclude-if-present to ignore directories which it doesn’t have permission for (Iakov Davydov)
--exclude-if-present to ignore directories which it doesn't have permission for (Iakov Davydov)
--no-traverse flag because it is obsolete
dedupe - implement merging of duplicate directories
check and cryptcheck made more consistent and use less memory
cleanup for remaining remotes (thanks ishuah)
--immutable for ensuring that files don’t change (thanks Jacob McNamee)
--immutable for ensuring that files don't change (thanks Jacob McNamee)
--user-agent option (thanks Alex McGrath Kraak)
--disable flag to disable optional features
--bind flag for choosing the local addr on outgoing connections
rclone mount to limit external apps
--stats flag
rclone check shows count of hashes that couldn’t be checked
rclone check shows count of hashes that couldn't be checked
rclone listremotes command
Authorization: lines from --dump-headers output
rclone check on crypted file systems
-q
-no-seek flag to disable
X-Bz-Test-Mode header.
X-Bz-Test-Mode header.
--max-size 0b
b suffix so we can specify bytes in –bwlimit, –min-size etc
b suffix so we can specify bytes in --bwlimit, --min-size etc
--size-only flag.
--size-only.
--dry-run set
--dry-run set
move command
--log-file
delete command to wait until all finished - fixes missing deletes.
more than one upload using auth token
--dry-run!
--drive-use-trash flag so rclone trashes instead of deletes
-Rclone doesn’t currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
+Rclone doesn't currently preserve the timestamps of directories. This is because rclone only really considers objects when syncing.
Currently rclone loads each directory entirely into memory before using it. Since each Rclone object takes 0.5k-1k of memory this can take a very long time and use an extremely large amount of memory.
Millions of files in a directory tend to be caused by software writing to cloud storage (eg S3 buckets).
Bucket based remotes (eg S3/GCS/Swift/B2) do not have a concept of directories. Rclone therefore cannot create directories in them which means that empty directories on a bucket based remote will tend to disappear.
-Some software creates empty keys ending in / as directory markers. Rclone doesn’t do this as it potentially creates more objects and costs more. It may do in future (probably with a flag).
+Some software creates empty keys ending in / as directory markers. Rclone doesn't do this as it potentially creates more objects and costs more. It may do in future (probably with a flag).
-Bugs are stored in rclone’s GitHub project:
+Bugs are stored in rclone's GitHub project:
You can use rclone from multiple places at the same time if you choose a different subdirectory for the output, eg
Server A> rclone sync /tmp/whatever remote:ServerA
Server B> rclone sync /tmp/whatever remote:ServerB
-If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other’s files, eg
+If you sync to the same directory then you should use rclone copy otherwise the two instances of rclone may delete each other's files, eg
Server A> rclone copy /tmp/whatever remote:Backup
Server B> rclone copy /tmp/whatever remote:Backup
The file names you upload from Server A and Server B should be different in this case, otherwise some file systems (eg Drive) may make duplicates.
-Rclone stores each file you transfer as a native object on the remote cloud storage system. This means that you can see the files you upload as expected using alternative access methods (eg using the Google Drive web interface). There is a 1:1 mapping between files on your hard disk and objects created in the cloud storage system.
-Cloud storage systems (at least none I’ve come across yet) don’t support partially uploading an object. You can’t take an existing object, and change some bytes in the middle of it.
+Cloud storage systems (at least none I've come across yet) don't support partially uploading an object. You can't take an existing object, and change some bytes in the middle of it.
It would be possible to make a sync system which stored binary diffs instead of whole objects like rclone does, but that would break the 1:1 mapping of files on your hard disk to objects in the remote cloud storage system.
All the cloud storage systems support partial downloads of content, so it would be possible to make partial downloads work. However to make this work efficiently this would require storing a significant amount of metadata, which breaks the desired 1:1 mapping of files to objects.
-The NO_PROXY allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance “foo.com” also matches “bar.foo.com”.
+The NO_PROXY environment variable allows you to disable the proxy for specific hosts. Hosts must be comma separated, and can contain domains or parts. For instance "foo.com" also matches "bar.foo.com".
e.g.
export no_proxy=localhost,127.0.0.0/8,my.host.name
export NO_PROXY=$no_proxy
Note that the ftp backend does not support ftp_proxy yet.
-This means that rclone can’t file the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
+This means that rclone can't find the SSL root certificates. Likely you are running rclone on a NAS with a cut-down Linux OS, or possibly on Solaris.
Rclone (via the Go runtime) tries to load the root certificates from these places on Linux.
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
@@ -20950,13 +21008,13 @@ export NO_PROXY=$no_proxy
curl -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
ntpclient -s -h pool.ntp.org
The two environment variables SSL_CERT_FILE and SSL_CERT_DIR, mentioned in the x509 package, provide an additional way to provide the SSL root certificates.
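For example, assuming the certificates were fetched to the path used above:
export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt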
-Note that you may need to add the --insecure option to the curl command line if it doesn’t work without.
+Note that you may need to add the --insecure option to the curl command line if it doesn't work without.
curl --insecure -o /etc/ssl/certs/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt
Likely this means that you are running rclone on a Linux version not supported by the go runtime, ie earlier than version 2.6.23.
See the system requirements section in the go install docs for full details.
-This is caused by uploading these files from a Windows computer which hasn’t got the Microsoft Office suite installed. The easiest way to fix is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions’ file formats
+This is caused by uploading these files from a Windows computer which hasn't got the Microsoft Office suite installed. The easiest way to fix is to install the Word viewer and the Microsoft Office Compatibility Pack for Word, Excel, and PowerPoint 2007 and later versions' file formats
This happens when rclone cannot resolve a domain. Please check that your DNS setup is generally working, e.g.
# both should print a long list of possible IP addresses
@@ -20965,7 +21023,7 @@
dig www.googleapis.com @8.8.8.8 # resolve with Google's DNS server
If you are using systemd-resolved (default on Arch Linux), ensure it is at version 233 or higher. Previous releases contain a bug which causes not all domains to be resolved properly.
Additionally, with the GODEBUG=netdns= environment variable the Go resolver decision can be influenced. This also allows you to resolve certain issues with DNS resolution. See the name resolution section in the go docs.
The total size reported in the stats for a sync is wrong and keeps changing
-It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the –max-backlog flag.
+It is likely you have more than 10,000 files that need to be synced. By default rclone only gets 10,000 files ahead in a sync so as not to use up too much memory. You can change this default with the --max-backlog flag.
Rclone is using too much memory or appears to have a memory leak
Rclone is written in Go which uses a garbage collector. The default settings for the garbage collector mean that it runs when the heap size has doubled.
However it is possible to tune the garbage collector to use less memory by setting GOGC to a lower value, say
@@ -21173,7 +21231,7 @@ THE SOFTWARE.
export GOGC=20
This will make the garbage collector work harder, reducing memory size at the expense of CPU usage.
The project’s repository is located at:
+The project's repository is located at:
-Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don’t email me requests for help - those are better directed to the forum. Thanks!
+Or if all else fails or you want to ask something private or confidential email Nick Craig-Wood. Please don't email me requests for help - those are better directed to the forum. Thanks!